
#1




Sampling bias and class imbalance for target variable
To avoid sampling bias, the general idea is to have the training distribution match the testing distribution (as stated in the book). Is this the same as having the sample (train + validation + test) match the population distribution?
How does this relate to class imbalance in the target (y) distribution? For instance, when training a machine to identify fraud, the number of fraud transactions is much lower than the number of non-fraud transactions. Is it favourable to upweight the fraud transactions in your training data in order to have a balanced data set with respect to y? How does this relate to sampling bias, and how do you adjust for this upsampling of fraud cases so that the model generalizes well?
#2




Re: Sampling bias and class imbalance for target variable
You raise an interesting point regarding unbalanced data, which is often the nature of the data in many "high risk" applications. In learning from data it is useful to distinguish between two distinct goals:
1) Obtaining the best possible classifier; 2) Evaluating the out-of-sample performance of your classifier. It is easy to conflate these two goals because the typical way of approaching 1) is to solve 2) first and then optimize your estimate of out-of-sample performance over the hypotheses in your hypothesis set. With this typical approach, many learning algorithms can largely ignore the minority class in severely unbalanced data. Hence, in an effort to obtain a classifier that pays some attention to the minority class, one might artificially reweight the data to emphasize the minority class so that its properties can be learned. Nevertheless, to evaluate your out-of-sample performance, you should go back to the unweighted, unbalanced data that represents the population.
__________________
Have faith in probability 
#3




Re: Sampling bias and class imbalance for target variable
Thanks for your feedback.
Earlier in the lectures we learned about penalizing errors differently by using a loss matrix. Is this one instance where that technique is useful, namely by penalizing the case "classifier predicts false when the target is true (fraud)" more severely than the other error type (for a binary target)?

