LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 4 - Overfitting (http://book.caltech.edu/bookforum/forumdisplay.php?f=111)

 Sweater Monkey 10-23-2013 12:15 PM

Exercise 4.7

I feel like I'm overthinking Exercise 4.7 (b) and I am hoping for a little bit of insight.

My gut instinct says that

\sigma^2_{\text{val}} = \frac{P(1-P)}{K}, \qquad \text{where } P = \mathbb{P}[g^-(\mathbf{x}) \neq y].

I arrived at this idea by considering that the classification error e(g^-(\mathbf{x}), y) = [[g^-(\mathbf{x}) \neq y]] is a Bernoulli random variable with success probability P, so its variance over \mathbf{x} is

\mathrm{Var}_{\mathbf{x}}[e(g^-(\mathbf{x}), y)] = P(1-P),

and part (a) then gives \sigma^2_{\text{val}} = \frac{1}{K}\,\mathrm{Var}_{\mathbf{x}}[e(g^-(\mathbf{x}), y)] = \frac{P(1-P)}{K}.

Then for part (c) of the exercise, assuming that the above is true, I used the notion that P \leq \frac{1}{2}, because if the probability of error were greater than 0.5 then the learned g would just flip its classification. Therefore, for any g^- in a classification problem, P(1-P) \leq \frac{1}{4}, and therefore:

\sigma^2_{\text{val}} = \frac{P(1-P)}{K} \leq \frac{1}{4K}.
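The Bernoulli argument above can be sanity-checked numerically. Here is a minimal sketch (the values of P and K are arbitrary stand-ins, not from the exercise) that estimates the variance of E_val over many random validation sets and compares it to P(1-P)/K:

```python
import numpy as np

rng = np.random.default_rng(0)
P = 0.3        # assumed probability that the hypothesis misclassifies a point
K = 25         # validation set size
trials = 200_000

# Each validation point contributes a Bernoulli(P) classification error;
# E_val is the mean of K such independent errors.
errors = rng.random((trials, K)) < P
E_val = errors.mean(axis=1)

print(E_val.var())        # empirical Var[E_val]
print(P * (1 - P) / K)    # predicted P(1-P)/K = 0.0084
```

With enough trials the two printed numbers agree to several decimal places, which is consistent with the formula above.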

Any indication as to whether I'm working along the correct lines would be appreciated!

 magdon 10-25-2013 08:16 AM

Re: Exercise 4.7

Quote:
 Originally Posted by Sweater Monkey (Post 11588) I feel like I'm overthinking Exercise 4.7 (b) and I am hoping for a little bit of insight. My gut instinct says that
\sigma^2_{\text{val}} can be obtained from part (a) by computing \mathrm{Var}_{\mathbf{x}}[e(g^-(\mathbf{x}), y)], which is the variance (over \mathbf{x}) of the error that the hypothesis makes. For the specific error measure in part (b), the error is bounded in [0,1], so you can bound this variance.
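A quick way to see why boundedness helps: any random variable taking values in [0,1] has variance at most 1/4, and for the binary classification error Var = P(1-P), which is maximized by a fair coin. A small numeric sketch:

```python
import numpy as np

# Variance of a Bernoulli(p) classification error over a grid of p values;
# since the error is bounded in [0,1], Var = p(1-p) never exceeds 1/4.
p = np.linspace(0.0, 1.0, 101)
var = p * (1 - p)
print(var.max())         # 0.25, attained at p = 0.5
print(p[var.argmax()])   # 0.5
```

Dividing by K then gives the uniform bound 1/(4K) asked for in part (c).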

 ntvy95 04-06-2016 03:51 AM

Re: Exercise 4.7

Hello, I'm currently stuck at (d). I have derived up to this point:

and I'm not sure how to continue. The hint says that the squared error is unbounded, hence I guess that there should be no uniform bound on the variance of the squared error? :clueless: (I'm not good at math, though...)
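The hint can be illustrated numerically. With squared error, \mathrm{Var}_{\mathbf{x}}[e] depends on the fourth moment of the target noise, so it can be made as large as you like and no bound like 1/(4K) exists. A minimal sketch, where the zero hypothesis and Gaussian noise targets are my own stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 25
n_points = 1_000_000

# Hypothesis g(x) = 0; targets y are pure noise with standard deviation s.
# The pointwise error is e = (g(x) - y)^2 = y^2, whose variance is 2*s^4
# for Gaussian noise -- it grows without bound as s grows.
for s in [1.0, 10.0, 100.0]:
    y = rng.normal(0.0, s, n_points)
    e = y ** 2
    print(s, e.var() / K)   # estimate of sigma^2_val = Var_x[e] / K
```

The estimated \sigma^2_{\text{val}} blows up as the noise scale grows, so unlike the classification case there is no uniform upper bound.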

 ntvy95 07-19-2016 09:58 AM

Re: Exercise 4.7

I'm not sure if I can re-interpret Figure 4.8 like this: if you train on your data with one horrible hypothesis, you will get a very bad generalization bound even when the number of data points is large? :confused:
