03-10-2013, 09:39 AM
htlin
Join Date: Aug 2009
Location: Taipei, Taiwan
Posts: 610
Re: cross validation and feature selection

Originally Posted by palmipede
Feature selection isn't exactly machine learning, but could you please expand on the relationship between cross-validation and feature extraction?

Some years ago, I took part in a genomic study that tried to identify genes that are up-regulated in the presence of infection. These studies start with a few vectors (assays) with many components (genes), between 20k and 40k at the time, and ask which components are meaningful features. The data was particularly noisy, and after listening to the course I realize that what I did then was really a poor man's cross-validation. I would do it much more cleanly now.

Do you recommend having different data sets for feature extraction, training, and testing, or could feature extraction be part of cross-validation during training?
Generally speaking, feature selection can be viewed as part of the learning algorithm, but you need to account for the model complexity that you spend during the selection process. If you take this view, you can use CV or the other validation tools you know to estimate the performance of the combined procedure (feature selection + actual learning). For instance, in an earlier paper of mine, we do try to evaluate the combination of (feature selection + actual learning) fairly with cross-validation. Hope this helps.
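A minimal sketch of this idea, assuming scikit-learn (the library, feature counts, and `SelectKBest`/`LogisticRegression` choices here are my illustrative assumptions, not the paper's method): by putting feature selection inside a pipeline, cross-validation re-selects features on each fold's training split, so the combination of (feature selection + actual learning) is evaluated together rather than selecting on the full data first.

```python
# Sketch: feature selection inside the CV loop (scikit-learn assumed).
# Selecting features on the full data before CV would leak information;
# wrapping selection in a Pipeline refits it on each fold's training split.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))    # e.g. 100 samples, 500 "gene" features
y = rng.integers(0, 2, size=100)   # binary labels (infected / not)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),   # selection is part of learning
    ("clf", LogisticRegression(max_iter=1000)),
])

# Each fold: select 20 features on the training split, then fit and score.
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())
```

On pure-noise data like this, the honest estimate hovers near chance (about 0.5); selecting the 20 features on all 100 samples before splitting would instead report an optimistic score, which is exactly the selection bias the reply warns about.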
When one teaches, two learn.