Sampling bias and class imbalance for target variable
To avoid sampling bias, the general idea is to have the training distribution match the testing distribution (as stated in the book). Is this the same as having the sample (train + validation + test) match the population distribution?
How does this relate to the class imbalance of the target (y) distribution? For instance, consider training a machine to identify fraud, where the number of fraud transactions is much lower than the number of non-fraud transactions. Is it favourable to upweight the fraud transactions in your training data in order to have a balanced data set with respect to y? How does this relate to sampling bias, and how do you adjust for this upsampling of fraud cases so that the model still generalizes well?
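For concreteness, here is a minimal sketch of the two common options the question alludes to, upsampling the rare class versus upweighting it in the loss. It assumes scikit-learn and a small synthetic fraud dataset; the arrays X and y below are hypothetical placeholders, not data from the book or lectures:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))          # 1000 transactions, 5 features (synthetic)
y = np.zeros(1000, dtype=int)
y[:20] = 1                              # ~2% fraud cases: heavy imbalance in y

# Option 1: upsample the fraud rows (with replacement) until both classes
# appear equally often in the training set.
X_fraud, X_ok = X[y == 1], X[y == 0]
X_fraud_up = resample(X_fraud, replace=True, n_samples=len(X_ok), random_state=0)
X_bal = np.vstack([X_ok, X_fraud_up])
y_bal = np.concatenate([np.zeros(len(X_ok)), np.ones(len(X_fraud_up))])
clf_upsampled = LogisticRegression().fit(X_bal, y_bal)

# Option 2: leave the data as-is and upweight the fraud class in the loss,
# which has a similar effect without duplicating rows.
clf_weighted = LogisticRegression(class_weight="balanced").fit(X, y)
```

Either route deliberately changes the effective class proportions seen during training, which is exactly where the question about how to adjust afterwards so the model generalizes to the true (imbalanced) distribution comes in.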
Re: Sampling bias and class imbalance for target variable
Thanks for your feedback.
Earlier in the lectures we learned about penalizing losses differently by using a loss matrix. Is this one instance where that technique can be useful, by penalizing the case "classifier predicts false when the target is true (fraud)" more severely than the other error type (for a binary target)?
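As an illustration of that idea, here is a minimal sketch of cost-sensitive weighting in scikit-learn. The 10:1 cost ratio and the synthetic data are illustrative assumptions, not values from the book or lectures:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))      # 1000 transactions, 5 features (synthetic)
y = np.zeros(1000, dtype=int)
y[:20] = 1                          # ~2% fraud cases

# Off-diagonal entries of the loss matrix: missing a fraud (predict 0 when
# y = 1) is taken to be 10 times as costly as a false alarm (predict 1 when
# y = 0). The ratio is an assumption chosen for illustration only.
cost_false_negative = 10.0
cost_false_positive = 1.0

clf = LogisticRegression(class_weight={1: cost_false_negative,
                                       0: cost_false_positive})
clf.fit(X, y)
```

Passing the asymmetric costs as per-class weights is one common way to make the training loss reflect a loss matrix of this kind.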