#3 | 09-15-2012, 12:02 PM
Andrs
Re: Selecting "representative" test data

Quote:
Originally Posted by magdon
You really have no option but to select the test and training data randomly. The problem that the test set may not be representative is not a problem with the selection of data but with the size of the test set. In such a case, your statement that the test data may not be representative (due to statistical fluctuations) means that you could not trust the result on it anyway (even if it happened to contain the right amount of each class).

A better option for you is to move to the cross-validation framework, which even allows you to use a "test set" of size 1 (see Chapter 4 for more details).
Thanks Magdon!
I will be using cross-validation as my basic approach. However, I was also thinking of putting aside some data for testing. The reason is that I am new to the area, and there is a risk that I will overuse the CV data to choose my hypothesis/parameters (leading to overly optimistic results). The test set would be my real proof of generalization (a bound on E_out that could increase my confidence in the results). Of course, we could discuss the value of this limited, randomly selected test data as an upper limit for E_out compared to E_cv. Maybe E_cv is the best bet in the end... if I do not overuse it.
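To make this concrete, here is a rough sketch of the workflow I have in mind, in Python with scikit-learn (the data, the classifier, and the 10% / 5-fold choices are just placeholders for illustration): hold out a random test set once, do all the selection with cross-validation on the rest, and touch the test set only at the very end.

import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

# Placeholder in-sample data; in reality this would be my own dataset.
rng = np.random.RandomState(0)
X = rng.randn(500, 10)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# One random split; stratify keeps the class proportions similar in both parts.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.10, random_state=0, stratify=y)

model = LogisticRegression(max_iter=1000)

# E_cv: used (and re-used) for choosing the hypothesis/parameters.
e_cv = 1.0 - cross_val_score(model, X_tr, y_tr, cv=5).mean()

# E_test: computed exactly once, after all choices are frozen.
model.fit(X_tr, y_tr)
e_test = 1.0 - model.score(X_te, y_te)

print("E_cv   ~ %.3f" % e_cv)
print("E_test ~ %.3f (trustworthy estimate of E_out only if used once)" % e_test)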
Hopefully my in-sample data is (well) randomly selected and representative of the out-of-sample population. I do not know the out-of-sample distribution, and the best I can do is to select a "test sample" at random (as you suggested). The question is whether 10% of my in-sample data will be enough for the test set or not.
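As a back-of-the-envelope check on the size question: for a test set that is used only once, the Hoeffding bound says that with probability at least 1 - delta, |E_test - E_out| <= sqrt(ln(2/delta) / (2 * N_test)). A small sketch of that calculation (the in-sample size of 2000 and delta = 0.05 below are my own assumptions, purely for illustration):

import math

def hoeffding_eps(n_test, delta=0.05):
    # Half-width of the Hoeffding interval around E_test for a single hypothesis.
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n_test))

n_in = 2000  # hypothetical in-sample size
for frac in (0.05, 0.10, 0.20):
    n_test = int(frac * n_in)
    print("test fraction %.0f%% (N_test=%4d): E_out <= E_test + %.3f with 95%% confidence"
          % (100 * frac, n_test, hoeffding_eps(n_test)))

So with 10% of 2000 points the interval is roughly +/- 0.10, which at least tells me whether that precision is good enough for my purposes.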