02-20-2013, 12:57 PM
yaser
Re: Interaction of cross validation with model selection

Originally Posted by hemphill
With 10-fold cross validation, we do 10 training runs. How would you recommend we do model selection? If we have M models, do we do 10*M training runs? Or do we do model selection with simple validation, then use cross validation for an error estimate? There seem to be a lot of possibilities. If we wish to use validation to choose an "early stopping" parameter, we could estimate the value of this parameter in each of the cross validation runs, then use the average when training with the full data set. Is this OK?
10-fold cross validation for selecting among M models will indeed take 10*M training sessions. The same applies to choosing the value of a parameter: for each candidate value, run the 10 cross-validation sessions and save the ten validation errors, average them to get that value's cross-validation error, then pick the value with the smallest average error.
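A minimal sketch of that procedure in Python/numpy, using ridge regression as a stand-in for the model and the regularization strength as the parameter being selected (the data, the candidate values, and the function name are all hypothetical, just to illustrate the averaging):

```python
import numpy as np

def kfold_cv_error(X, y, lam, k=10, seed=0):
    """Average squared validation error of ridge regression over k folds.

    Uses the closed form w = (X^T X + lam*I)^{-1} X^T y on each
    training split, then measures squared error on the held-out fold.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)          # k disjoint validation folds
    errs = []
    for i in range(k):
        val = folds[i]
        trn = np.concatenate([folds[j] for j in range(k) if j != i])
        Xt, yt = X[trn], y[trn]
        w = np.linalg.solve(Xt.T @ Xt + lam * np.eye(X.shape[1]), Xt.T @ yt)
        errs.append(np.mean((X[val] @ w - y[val]) ** 2))
    return np.mean(errs)                    # average of the k fold errors

# hypothetical data: linear target plus a little noise
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)

# one full 10-fold run per candidate value -> 10*M training sessions total
lams = [0.01, 0.1, 1.0, 10.0]
cv_errs = {lam: kfold_cv_error(X, y, lam) for lam in lams}
best = min(cv_errs, key=cv_errs.get)        # smallest average CV error wins
```

The key point is that the averaging happens per candidate value across the ten folds, and the comparison happens between those averages; once `best` is chosen, you would retrain on the full data set with that value.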
Where everyone thinks alike, no one thinks very much