LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 5 - Three Learning Principles (http://book.caltech.edu/bookforum/forumdisplay.php?f=112)
-   -   Is a test set needed after cross validation? (http://book.caltech.edu/bookforum/showthread.php?t=9563)

gverhoev 07-09-2021 04:33 PM

Is a test set needed after cross validation?
 
It is said that the cross-validation error Ecv is an unbiased estimate of Eout(N-1), which is why it is used for model selection.
However, many books say that after CV one should still have a third dataset (often called the test set) to truly measure the performance (i.e. Eout) of the chosen final hypothesis. If Ecv is already an unbiased estimate of Eout(N-1), why would one even need this third/test set to check how well the model does? Is there something I am missing here?
Is this perhaps because the CV approach is technically data snooping (we make a model choice influenced by the data, so the data loses some of its ability to evaluate the final outcome), and so it is still best to measure the real performance on never-seen data? Or are these books simply wrong?
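
A minimal sketch of the concern (assuming scikit-learn and an arbitrary synthetic regression problem; none of the names or numbers below come from the book): when Ecv is used to pick among several candidate models, the winning Ecv tends to come out slightly optimistic compared to the selected model's error on a genuinely held-out test set, which is one reason a separate test set is still reported.

Code:
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)

# Arbitrary synthetic target: y = x . w_true + noise
N, N_test, d = 60, 10000, 8
w_true = rng.randn(d)
X = rng.randn(N, d)
y = X @ w_true + rng.randn(N)
X_test = rng.randn(N_test, d)
y_test = X_test @ w_true + rng.randn(N_test)

# Model selection by 10-fold cross validation over a few ridge penalties
candidates = [Ridge(alpha=a) for a in (0.01, 0.1, 1.0, 10.0, 100.0)]
cv = KFold(n_splits=10, shuffle=True, random_state=0)
ecv = [-cross_val_score(m, X, y, cv=cv,
                        scoring='neg_mean_squared_error').mean()
       for m in candidates]

best = int(np.argmin(ecv))
final = candidates[best].fit(X, y)
e_test = mean_squared_error(y_test, final.predict(X_test))

print('Ecv of the selected model :', ecv[best])
print('error on a fresh test set :', e_test)
# Taking the minimum of several noisy Ecv values makes the winning Ecv
# slightly optimistic; the test-set number is the cleaner final report.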

htlin 07-10-2021 11:57 PM

Re: Is a test set needed after cross validation?
 
There are still several issues with Ecv. For instance, albeit (almost) unbiased, it can suffer from a large variance. In that case, another set can be helpful in gauging the test performance.

https://www.csie.ntu.edu.tw/~htlin/p...pkdd05sage.pdf
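
A small simulation of that point (plain numpy with an arbitrary linear target; an illustrative sketch only): across many independent datasets of the same size, the leave-one-out estimate is roughly centered on Eout (almost unbiased), yet it fluctuates noticeably from dataset to dataset.

Code:
import numpy as np

rng = np.random.RandomState(1)
d, N, trials = 3, 30, 200
w_true = rng.randn(d)

def fit(X, y):                       # ordinary least squares
    return np.linalg.lstsq(X, y, rcond=None)[0]

def e_loocv(X, y):                   # naive leave-one-out estimate
    errs = []
    for n in range(len(y)):
        keep = np.arange(len(y)) != n
        w = fit(X[keep], y[keep])
        errs.append((y[n] - X[n] @ w) ** 2)
    return np.mean(errs)

# A very large fresh sample stands in for Eout
X_big = rng.randn(100000, d)
y_big = X_big @ w_true + rng.randn(100000)

ecv_list, eout_list = [], []
for _ in range(trials):
    X = rng.randn(N, d)
    y = X @ w_true + rng.randn(N)
    ecv_list.append(e_loocv(X, y))
    w = fit(X, y)
    eout_list.append(np.mean((y_big - X_big @ w) ** 2))

print('mean Ecv :', np.mean(ecv_list), '  mean Eout:', np.mean(eout_list))  # close
print('std  Ecv :', np.std(ecv_list))   # far from negligible: the variance issue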

Hope this helps.

gverhoev 07-13-2021 07:29 AM

Re: Is a test set needed after cross validation?
 
Dear professor Lin,

Many thanks for your answer. However, is this not only a problem when N is small? It seems that with a large number of data points, every g- will be rather similar and close to g. In contrast, one would expect a small number of data points (or huge outliers) to result in largely different g- hypotheses, and hence a large variance.
* Is this reasoning correct?
* Would one still need a test dataset when there are many data points?
* Is using the same data for training and validation in leave-one-out CV not an example of data snooping?

Many thanks for your valuable insights!
Geert
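
A quick numerical check of the reasoning in the question above (an illustrative sketch: one-dimensional least-squares fit through the origin, arbitrary target and noise level): the spread of the leave-one-out hypotheses g- does shrink as N grows.

Code:
import numpy as np

rng = np.random.RandomState(2)

def spread_of_loo_slopes(N):
    # Fit y = w*x by least squares on each of the N leave-one-out subsets
    x = rng.randn(N)
    y = 2.0 * x + rng.randn(N)        # arbitrary target: slope 2 plus noise
    slopes = []
    for n in range(N):
        keep = np.arange(N) != n
        slopes.append(np.sum(x[keep] * y[keep]) / np.sum(x[keep] ** 2))
    return np.std(slopes)

for N in (10, 100, 1000):
    print(N, 'points -> spread of the g- slopes:', spread_of_loo_slopes(N))
# The g- hypotheses do get closer to one another (and to g) as N grows.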

htlin 07-15-2021 08:14 PM

Re: Is a test set needed after cross validation?
 
When N is large, the variance of each individual error e_n can still be large (each e_n is measured on a single left-out point). The variance of E_loocv, however, could be knocked down by a factor of roughly N, making E_loocv possibly robust enough.

The usual issue when N is large is computation: computing E_loocv can be time-consuming for large N.
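
For concreteness, a naive implementation sketch (plain numpy, arbitrary linear model; for linear regression there are analytic shortcuts, but in general one pays for the loop below): E_loocv takes one fit per data point, so the cost is roughly N times the cost of a single fit.

Code:
import numpy as np

def e_loocv(X, y):
    # Naive leave-one-out CV: one separate least-squares fit per data point
    N = len(y)
    errs = np.empty(N)
    for n in range(N):
        keep = np.arange(N) != n
        w = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        errs[n] = (y[n] - X[n] @ w) ** 2
    return errs.mean()

rng = np.random.RandomState(3)
N, d = 2000, 5
X = rng.randn(N, d)
y = X @ rng.randn(d) + rng.randn(N)
print('E_loocv =', e_loocv(X, y))     # 2000 separate fits for this one number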

Hope this helps.

htlin 07-15-2021 08:19 PM

Re: Is a test set needed after cross validation?
 
Quote:

Originally Posted by gverhoev (Post 21393)
* is using the same data for training and validation in a leave-one-out CV not an example of data snooping?
Geert

In a general sense, maybe. But the harm is generally minor relative to the benefits that come with a careful validation procedure, so very few people consider this to be a case of data snooping that we should worry about. Hope this helps.

