LFD Book Forum  

LFD Book Forum > Book Feedback - Learning From Data > Chapter 5 - Three Learning Principles

  #1  
Old 08-05-2012, 03:01 AM
rainbow
Member
 
Join Date: Jul 2012
Posts: 41
Sampling bias and class imbalance for target variable

To avoid sampling bias, the general idea is to have the training distribution match the testing distribution (as stated in the book). Is this the same as having the sample (train + validation + test) match the population distribution?

How does this relate to class imbalance in the target (y) distribution? For instance, when training a machine to identify fraud, the number of fraud transactions is much lower than the number of non-fraud transactions. Is it favourable to upweight the fraud transactions in your training data in order to have a balanced data set w.r.t. y? How does this relate to sampling bias, and how do you adjust for this upsampling of fraud cases so that the model generalizes well?
  #2  
Old 08-09-2012, 05:20 AM
magdon
RPI
 
Join Date: Aug 2009
Location: Troy, NY, USA.
Posts: 592
Re: Sampling bias and class imbalance for target variable

You raise an interesting point regarding unbalanced data, which is often the nature of the data in many "high risk" applications. In learning from data it is useful to distinguish two distinct goals:

1) Obtaining the best possible classifier;

2) Evaluating the out-of-sample performance of your classifier.

The reason it is easy to confuse these two goals is that the most common way of approaching 1) is to solve 2) first and then optimize your estimate of out-of-sample performance over the hypotheses in \cal H. With this typical approach, it is quite easy for many learning algorithms to largely ignore the minority class in severely unbalanced data.

Hence, in an effort to obtain the best possible classifier, one that pays some attention to the minority class, one might artificially reweight the data to emphasize the minority class so that its properties can be learned. Nevertheless, to evaluate your out-of-sample performance, you should go back to the unweighted, unbalanced data that represents the population.
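The reweight-for-training, evaluate-unweighted recipe can be sketched as follows. This is a minimal illustration, not the book's method: the data is synthetic (a made-up 5% fraud rate, one informative feature), and the weighted logistic regression is hand-rolled for self-containment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic unbalanced "fraud" data (about 5% positives), with one
# informative feature plus a constant bias column.
n = 4000
y = (rng.random(n) < 0.05).astype(float)
x = rng.normal(loc=2.0 * y, scale=1.0, size=n)
X = np.column_stack([x, np.ones(n)])

# Hold out half of the data, untouched, for evaluation.
X_tr, X_te = X[: n // 2], X[n // 2:]
y_tr, y_te = y[: n // 2], y[n // 2:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def fit_logistic(X, y, sample_weight, steps=3000, lr=0.5):
    """Weighted logistic regression trained by gradient descent."""
    w = np.zeros(X.shape[1])
    sw = sample_weight / sample_weight.sum()
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (sw * (p - y)))
    return w

# Goal 1: train with the minority (fraud) class up-weighted so both
# classes contribute equally to the in-sample error.
up = (y_tr == 0).sum() / max((y_tr == 1).sum(), 1)
weights = np.where(y_tr == 1, up, 1.0)
w = fit_logistic(X_tr, y_tr, weights)

# Goal 2: evaluate on held-out data at its ORIGINAL, unweighted class
# proportions -- this is what estimates out-of-sample performance.
pred = (sigmoid(X_te @ w) > 0.5).astype(float)
error = (pred != y_te).mean()
fraud_recall = (pred[y_te == 1] == 1).mean()
```

Note that the reweighting enters only the training step; the held-out error is computed on the data exactly as the population produces it.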

__________________
Have faith in probability
  #3  
Old 08-09-2012, 11:32 AM
rainbow
Member
 
Join Date: Jul 2012
Posts: 41
Re: Sampling bias and class imbalance for target variable

Thanks for your feedback.

Earlier in the lectures we learned about penalizing errors differently by using a loss matrix. Is this an instance where that technique can be useful, by penalizing the case "classifier predicts false when the target is true (fraud)" more severely than the other error type (for a binary target)?
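A toy sketch of how such a loss matrix shifts the classification threshold (the 10:1 cost ratio and all the numbers are made up purely for illustration):

```python
import numpy as np

# Hypothetical loss matrix for binary fraud detection:
# rows = true class, columns = predicted class. Missing a fraud
# (false negative) is penalized 10x more than a false alarm.
#                predict 0  predict 1
loss = np.array([[0.0,      1.0],    # true 0 (no fraud)
                 [10.0,     0.0]])   # true 1 (fraud)
c_fp, c_fn = loss[0, 1], loss[1, 0]

# Predicting fraud is the cheaper choice whenever
# p * c_fn > (1 - p) * c_fp, i.e. p > c_fp / (c_fp + c_fn),
# so the threshold on P(fraud | x) drops from 0.5 to 1/11.
threshold = c_fp / (c_fp + c_fn)

def avg_cost(y_true, p_fraud, thr):
    """Average loss of thresholding estimated fraud probabilities."""
    y_pred = (p_fraud > thr).astype(int)
    return loss[y_true, y_pred].mean()

# Toy predictions: a borderline case (p = 0.20) that the symmetric
# 0.5 threshold misses but the cost-sensitive threshold catches.
y_true = np.array([0, 0, 1, 1])
p_fraud = np.array([0.05, 0.30, 0.20, 0.90])
cost_at_half = avg_cost(y_true, p_fraud, 0.5)   # pays the big c_fn
cost_at_thr = avg_cost(y_true, p_fraud, threshold)
```

The shifted threshold trades a cheap false alarm for avoiding the expensive missed fraud, which is exactly the asymmetry the loss matrix encodes.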



The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.