LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Homework 2 (http://book.caltech.edu/bookforum/forumdisplay.php?f=131)
-   -   HW 2 Problem 6 (http://book.caltech.edu/bookforum/showthread.php?t=896)

 dbaksi@gmail.com 07-21-2012 03:55 PM

HW 2 Problem 6

How is this different from problem 5, other than N=1000 and the fact that these simulated 'out of sample' points (for E_out) are generated fresh? I may be missing something, but it seems to boil down to running the same program as in problem 5 with N=1000, 1000 times; can someone clarify, please? Thanks

 yaser 07-21-2012 09:00 PM

Re: HW 2 Problem 6

Quote:
 Originally Posted by dbaksi@gmail.com (Post 3575) How is this different from problem 5, other than N=1000 and the fact that these simulated 'out of sample' points (for E_out) are generated fresh? I may be missing something, but it seems to boil down to running the same program as in problem 5 with N=1000, 1000 times; can someone clarify, please? Thanks
There are indeed instances in the homeworks where the same experiment covers a number of homework problems.

Problem 5 asks about E_in, while Problem 6 asks about (an estimate of) E_out. In both problems, N is the same (N stands for the number of training examples in our notation).

 MLearning 07-22-2012 12:58 AM

Re: HW 2 Problem 6

It is my understanding that "fresh data" refers to cross-validation data. Do we then compute E_out using the weights obtained in problem 5? When I do this, E_out < E_in. When I design the weights using the fresh data, E_out is approximately equal to E_in. Does this make sense?

 yaser 07-22-2012 01:06 AM

Re: HW 2 Problem 6

Quote:
 Originally Posted by MLearning (Post 3581) It is my understanding that "fresh data" refers to cross-validation data. Do we then compute Eout using the weights obtained in problem 5?
It is simpler than cross validation (a topic that will be covered in detail in a later lecture). You just generate new data points that were not involved in training and evaluate the final hypothesis on those points.

The final hypothesis is indeed the one whose weights were determined in Problem 5, where the training took place.
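A minimal single-run sketch of what the Professor describes, assuming the HW2 conventions (inputs drawn uniformly from [-1, 1] x [-1, 1], a random-line target function, and the one-step pseudo-inverse linear regression solution); this is my reading of the setup, not official solution code, and the helper names are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_target(rng):
    # Target f: a line through two random points; f(x) = which side x falls on.
    p, q = rng.uniform(-1, 1, (2, 2))
    a, b = q[1] - p[1], p[0] - q[0]          # normal vector to the line
    c = -(a * p[0] + b * p[1])
    return lambda X: np.sign(a * X[:, 0] + b * X[:, 1] + c)

def make_data(f, n, rng):
    X = rng.uniform(-1, 1, (n, 2))
    Z = np.column_stack([np.ones(n), X])     # prepend the bias coordinate x0 = 1
    return Z, f(X)

f = random_target(rng)
Z_train, y_train = make_data(f, 100, rng)            # N training points
w = np.linalg.pinv(Z_train) @ y_train                # linear regression weights
                                                     # (the final hypothesis)
E_in = np.mean(np.sign(Z_train @ w) != y_train)      # in-sample error (Problem 5)

Z_out, y_out = make_data(f, 1000, rng)               # 1000 fresh points, not
E_out = np.mean(np.sign(Z_out @ w) != y_out)         # involved in training (Problem 6)
```

The fresh points play no role in determining w; they are only used to test the final hypothesis against the target on data it has never seen.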

 dsvav 07-22-2012 03:49 AM

Re: HW 2 Problem 6

I am confused here; I don't understand what the final hypothesis is here.

There are 1000 target functions and 1000 corresponding weight vectors/hypotheses in problem 5.

So for problem 6, 1000 times I generate 1000 out-of-sample points, and then for each weight vector and target function (from problem 5) I evaluate E_out on that out-of-sample data, and finally I average them. This is how I have done it.

I don't see a final hypothesis here. What am I missing? Any hint?

Could it be that in problem 5 there is supposed to be only one target function and many in-sample data sets? If so, then the final hypothesis/weights could be the one that produces the minimum in-sample error E_in.

Thanks a lot.

 yaser 07-22-2012 04:00 AM

Re: HW 2 Problem 6

Quote:
 Originally Posted by dsvav (Post 3584) I am confused here; I don't understand what the final hypothesis is here. There are 1000 target functions and 1000 corresponding weight vectors/hypotheses in problem 5. So for problem 6, 1000 times I generate 1000 out-of-sample points, and then for each weight vector and target function (from problem 5) I evaluate E_out on that out-of-sample data, and finally I average them. This is how I have done it. I don't see a final hypothesis here. What am I missing? Any hint? Could it be that in problem 5 there is supposed to be only one target function and many in-sample data sets? If so, then the final hypothesis/weights could be the one that produces the minimum in-sample error E_in. Please clarify. Thanks a lot.
There is a final hypothesis for each of the 1000 runs. The only reason we are repeating the runs is to average out statistical fluctuations, but all the notions of the learning problem, including the final hypothesis, pertain to a single run.
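The structure the Professor describes, one final hypothesis per run with the runs averaged only to smooth out statistical fluctuations, can be sketched as below. This is a hypothetical implementation assuming the HW2 conventions (uniform inputs on [-1, 1] x [-1, 1], random-line target, pseudo-inverse linear regression); `run_once` is my name, not from the homework:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_once(rng, n_train=100, n_test=1000):
    # One complete learning problem: its own target, training set,
    # and final hypothesis.
    p, q = rng.uniform(-1, 1, (2, 2))            # random target line through p, q
    a, b = q[1] - p[1], p[0] - q[0]
    c = -(a * p[0] + b * p[1])
    f = lambda X: np.sign(a * X[:, 0] + b * X[:, 1] + c)

    X = rng.uniform(-1, 1, (n_train, 2))
    Z = np.column_stack([np.ones(n_train), X])
    w = np.linalg.pinv(Z) @ f(X)                 # this run's final hypothesis

    Xo = rng.uniform(-1, 1, (n_test, 2))         # fresh out-of-sample points
    Zo = np.column_stack([np.ones(n_test), Xo])
    return np.mean(np.sign(Zo @ w) != f(Xo))     # this run's E_out estimate

# Average over independent runs to smooth out statistical fluctuations.
E_out_avg = np.mean([run_once(rng) for _ in range(1000)])
```

Each call to `run_once` is a self-contained learning problem; the outer average has no learning role.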

 dbaksi@gmail.com 07-22-2012 05:55 AM

Re: HW 2 Problem 6

Thanks a lot. The statements about (i) N being the number of 'in-sample' training data in both problems and (ii) the freshly generated 1000 points being disjoint from the first set clarified the confusion I had.

 dsvav 07-22-2012 06:14 AM

Re: HW 2 Problem 6

Thanks Professor yaser.

 rakhlin 07-23-2012 10:58 AM

Re: HW 2 Problem 6

When I generate new data and a new hypothesis for every single run of 1000 (as the problem suggests), I get a stable out-of-sample result close to (slightly greater than) the in-sample error.
When I estimate 1000 different out-of-sample sets for one in-sample set and a single hypothesis, I get very different average error rates, with high variability from 0.01 to 0.13. Why so?

 yaser 07-23-2012 12:48 PM

Re: HW 2 Problem 6

Quote:
 Originally Posted by rakhlin (Post 3612) When I generate new data and a new hypothesis for every single run of 1000 (as the problem suggests), I get a stable out-of-sample result close to (slightly greater than) the in-sample error. When I estimate 1000 different out-of-sample sets for one in-sample set and a single hypothesis, I get very different average error rates, with high variability from 0.01 to 0.13. Why so?
Just to clarify. You used the in-sample points to train and arrived at a final set of weights (corresponding to the final hypothesis). Each out-of-sample point is now tested on this hypothesis and compared to the target value on the same point. Now, what exactly do you do to get the two scenarios you are describing?
