09-15-2012, 07:03 AM
magdon
Re: Selecting "representative" test data

You really have no option but to select the training and test data randomly. The problem that the test set may not be representative is not a problem with how the data are selected, but with the size of the test set. In that case, your observation that the test data may not be representative (due to statistical fluctuation) means you could not trust the result on it anyway, even if the test set happened to contain the right proportion of each class.
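To illustrate the random selection being recommended, here is a minimal sketch of a purely random train/test split with no stratification, using only the Python standard library; the data, the 80/20 ratio, and the function name are my own assumptions, not anything prescribed in the thread.

```python
import random

def random_split(data, test_fraction=0.2, seed=0):
    """Shuffle the data and split it into (train, test) sets."""
    rng = random.Random(seed)
    shuffled = data[:]            # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(round(test_fraction * len(shuffled)))
    return shuffled[n_test:], shuffled[:n_test]

# Example: 100 labeled points spread over three classes.
points = [(i, i % 3) for i in range(100)]
train, test = random_split(points, test_fraction=0.2)
print(len(train), len(test))   # 80 20
```

With a small test set, the class proportions in `test` will fluctuate from split to split; that fluctuation is exactly the sample-size issue described above, not a flaw in the random selection itself.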

A better option for you is to move to the cross-validation framework, which even allows you to use a "test set" of size 1. (See Chapter 4 for more details.)
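The "test set of size 1" idea can be sketched as leave-one-out cross-validation: each point is held out once and the model is trained on the rest. The 1-nearest-neighbour classifier below is my own illustrative choice, not something specified in the thread.

```python
def loo_error(data, classify):
    """Average 0/1 error when each point serves once as a test set of size 1."""
    errors = 0
    for i, (x, y) in enumerate(data):
        held_in = data[:i] + data[i + 1:]   # train on everything else
        if classify(held_in, x) != y:
            errors += 1
    return errors / len(data)

def nn_classify(train, x):
    """Predict the label of the nearest training point (1-NN on the real line)."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Two well-separated 1-D classes: every held-out point is classified correctly.
data = [(v, 0) for v in (1.0, 1.2, 1.4)] + [(v, 1) for v in (5.0, 5.2, 5.4)]
print(loo_error(data, nn_classify))   # 0.0
```

Because every point gets a turn as the test point, the estimate uses all the data and sidesteps the worry about a single small test set being unrepresentative.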

Originally Posted by Andrs
I posted a similar question in another forum, but maybe it belongs in this general forum.
I have data with multiple classes and I want to divide it into a big chunk for training and a smaller chunk for testing. The original data has a certain distribution over the classes (i.e. x% for class 1, y% for class 2, z% for class 3).
How should I select the test set (and the training set) from this multi-class data? The basic assumption is that there is enough data to start with! If I use pure random selection to create the two sets, the test set may not contain all the classes and may not be representative (the test set is much smaller than the training set). An alternative is to find the class distribution in the data and ensure that the test set has approximately the same distribution. But then I am really looking at the data, and there is a risk of snooping. Of course, I would not use this class-distribution information in the training process, but...
Is this a relevant question, or is there a misunderstanding on my side? I would like to discuss this issue: what are the risks here, and what are the best practices?
Have faith in probability