LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 5 - Three Learning Principles (http://book.caltech.edu/bookforum/forumdisplay.php?f=112)
-   -   Sampling bias and class imbalance for target variable (http://book.caltech.edu/bookforum/showthread.php?t=948)

 rainbow 08-05-2012 04:01 AM

Sampling bias and class imbalance for target variable

To avoid sampling bias, the general idea is to have the training distribution match the testing distribution (as stated in the book). Is this the same as having the sample (train + validation + test) match the population distribution?

How does this relate to class imbalance in the target (y) distribution? For instance, consider training a machine to identify fraud, where the number of fraud transactions is much lower than the number of non-fraud transactions. Is it favourable to upweight the fraud transactions in your training data in order to have a balanced data set w.r.t. y? How does this relate to sampling bias, and how do you adjust for this upweighting of fraud cases so that the model generalizes well?

 magdon 08-09-2012 06:20 AM

Re: Sampling bias and class imbalance for target variable

You raise an interesting point regarding unbalanced data, which is often the nature of the data in many "high risk" applications. In learning from data it is useful to distinguish between two distinct goals:

1) Obtaining the best possible classifier;

2) Evaluating the out-of-sample performance of your classifier.

The reason it is easy to conflate these two goals is that the most common way of approaching 1) is to solve 2) first and then optimize your estimate of out-of-sample performance over the hypotheses in the hypothesis set. Using this typical approach, it is quite easy for many learning algorithms to largely ignore the minority class in a severely unbalanced data set.

Hence, in an effort to obtain the best possible classifier that pays some attention to the minority class one might artificially reweight the data to emphasize the minority class more so that its properties can be learned. Nevertheless, to evaluate your out-of-sample performance, you should go back to the unweighted, unbalanced data that represents the population.
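The reweighting idea above can be sketched as follows. This is a minimal illustration (the function name and the 8-vs-2 toy data are my own, not from the thread): each example gets a weight inversely proportional to its class frequency, so both classes contribute equally to the weighted training error, while evaluation would still use the original, unweighted data.

```python
from collections import Counter

def inverse_frequency_weights(y):
    """Per-example weights that upweight the minority class so that
    each class contributes equal total weight to the training error."""
    counts = Counter(y)
    n, k = len(y), len(counts)
    return [n / (k * counts[label]) for label in y]

# Toy data: 8 non-fraud (0) examples vs 2 fraud (1) examples.
y = [0] * 8 + [1] * 2
w = inverse_frequency_weights(y)
# Each fraud example carries 4x the weight of a non-fraud example
# (2.5 vs 0.625), and each class sums to the same total weight (5.0).
```

A training algorithm that minimizes a weighted error with these weights pays equal attention to both classes; the out-of-sample estimate, however, should still come from data drawn from the true (unbalanced) population.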


 rainbow 08-09-2012 12:32 PM

Re: Sampling bias and class imbalance for target variable

Thanks for your feedback.

Earlier in the lectures we learned about penalizing different errors differently by using a loss matrix. Is this one instance where that technique can be useful, by penalizing the case "classifier predicts false when the target is true (fraud)" more severely than the other error type (for a binary target)?
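To make the loss-matrix idea concrete, here is a small sketch (my own illustration, not from the lectures) of the decision it induces: if missing a fraud (a false negative) costs c_fn and a false alarm costs c_fp, then predicting fraud is the lower-expected-cost choice whenever the estimated fraud probability exceeds c_fp / (c_fp + c_fn), rather than the symmetric threshold of 0.5.

```python
def min_cost_decision(p_fraud, c_fn, c_fp):
    """Predict fraud (True) iff the expected cost of predicting
    'no fraud' exceeds the expected cost of predicting 'fraud'.

    p_fraud: estimated probability that the transaction is fraud
    c_fn:    cost of missing a fraud (false negative)
    c_fp:    cost of flagging a legitimate transaction (false positive)
    """
    # Predict fraud when p * c_fn > (1 - p) * c_fp,
    # i.e. when p > c_fp / (c_fp + c_fn).
    return p_fraud > c_fp / (c_fp + c_fn)

# With misses 10x as costly as false alarms, the decision threshold
# drops from 0.5 to 1/11, so even a 20% fraud probability is flagged.
flag_a = min_cost_decision(0.20, c_fn=10.0, c_fp=1.0)  # True
flag_b = min_cost_decision(0.05, c_fn=10.0, c_fp=1.0)  # False
```

Under this view, reweighting the training data and penalizing errors asymmetrically in the loss matrix are two routes to the same end: shifting the classifier's attention toward the costly minority class.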


The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.