LFD Book Forum Exercise 4.7

#1
10-23-2013, 12:15 PM
 Sweater Monkey Junior Member Join Date: Sep 2013 Posts: 6
Exercise 4.7

I feel like I'm overthinking Exercise 4.7 (b) and I am hoping for a little bit of insight.

My gut instinct says that for part (b),

\sigma^2_{\text{val}} = \frac{P(1-P)}{K}, \quad \text{where } P = \mathbb{P}[g(\mathbf{x}) \neq y].

I arrived at this idea by considering that the standard deviation is the square root of the variance, so since part (a) gives \sigma^2_{\text{val}} = \frac{1}{K}\,\mathrm{Var}_{\mathbf{x}}[e(g(\mathbf{x}), y)], and the binary error e(g(\mathbf{x}), y) = [\![g(\mathbf{x}) \neq y]\!] is a Bernoulli random variable, does \mathrm{Var}_{\mathbf{x}}[e] = P(1-P)?

Then for part (c), assuming that the above is true, I used the notion that P \le 0.5, because if the probability of error were greater than 0.5 then the learned g would just flip its classification. Therefore this shows that for any g in a classification problem, P(1-P) \le \frac{1}{4}, and therefore:

\sigma^2_{\text{val}} \le \frac{1}{4K}.
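As a quick sanity check on the P(1-P)/K conjecture, here is a minimal Monte Carlo simulation (the values of K, P, and the trial count are arbitrary choices, not from the exercise), which estimates Var[E_val] when each validation error is Bernoulli(P):

```python
import random

random.seed(0)

K = 100          # validation-set size (arbitrary for this check)
P = 0.3          # assumed probability that g misclassifies a point
trials = 20000   # number of simulated validation sets

# E_val is the mean of K i.i.d. Bernoulli(P) errors; estimate its variance.
evals = []
for _ in range(trials):
    errors = [1 if random.random() < P else 0 for _ in range(K)]
    evals.append(sum(errors) / K)

mean_eval = sum(evals) / trials
var_eval = sum((e - mean_eval) ** 2 for e in evals) / trials

print(var_eval)          # empirical Var[E_val]
print(P * (1 - P) / K)   # conjectured P(1-P)/K
print(1 / (4 * K))       # part (c) bound 1/(4K)
```

The empirical variance should land close to P(1-P)/K = 0.0021 and below the 1/(4K) = 0.0025 bound.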

Any indication as to whether I'm working along the correct lines would be appreciated!
#2
10-25-2013, 08:16 AM
 magdon RPI Join Date: Aug 2009 Location: Troy, NY, USA. Posts: 595
Re: Exercise 4.7

Quote:
 Originally Posted by Sweater Monkey: I feel like I'm overthinking Exercise 4.7 (b) and I am hoping for a little bit of insight...
\sigma^2_{\text{val}} can be obtained from part (a) by computing \mathrm{Var}_{\mathbf{x}}[e(g(\mathbf{x}), y)], which is the variance (over \mathbf{x}) of the error that the hypothesis makes. For this specific error measure, the error is bounded in [0,1], so you can bound this variance.
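Spelled out, one way to carry out this hint is the standard one-line bound for a [0,1]-valued random variable (writing \mu = \mathbb{E}_{\mathbf{x}}[e]):

```latex
% Since e(g(x), y) \in [0,1], we have e^2 \le e, hence
\mathrm{Var}_{\mathbf{x}}[e]
  = \mathbb{E}_{\mathbf{x}}[e^2] - \mathbb{E}_{\mathbf{x}}[e]^2
  \le \mathbb{E}_{\mathbf{x}}[e] - \mathbb{E}_{\mathbf{x}}[e]^2
  = \mu(1-\mu) \le \tfrac{1}{4},
% and combining with part (a):
\sigma^2_{\text{val}} = \tfrac{1}{K}\,\mathrm{Var}_{\mathbf{x}}[e] \le \tfrac{1}{4K}.
```

Note this holds for any \mu \in [0,1], so the conclusion does not actually need the "flip the classifier" argument that P \le 0.5.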
__________________
Have faith in probability
#3
04-06-2016, 03:51 AM
 ntvy95 Member Join Date: Jan 2016 Posts: 37
Re: Exercise 4.7

Hello, I'm currently stuck on (d). I have derived up to this point:

\sigma^2_{\text{val}} = \frac{1}{K}\,\mathrm{Var}_{\mathbf{x}}\!\left[(g(\mathbf{x}) - y)^2\right] = \frac{1}{K}\left(\mathbb{E}_{\mathbf{x}}\!\left[(g(\mathbf{x}) - y)^4\right] - \mathbb{E}_{\mathbf{x}}\!\left[(g(\mathbf{x}) - y)^2\right]^2\right)

and I'm currently stuck. The hint says that the squared error is unbounded, hence I guess that there should be no uniform bound because the expected value of the squared error can be arbitrarily large? (I'm not good at math, though...)
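That guess is along the right lines. A minimal simulation illustrating it (the Gaussian-noise setup y = g(x) + eps is my own hypothetical example, not from the book): for eps ~ N(0, sd^2), theory gives Var[eps^2] = 2*sd^4, which grows without bound as the noise grows, so no analogue of the 1/(4K) bound can exist.

```python
import random

random.seed(1)

def var_of_squared_error(noise_sd, n=200_000):
    """Empirical variance of the squared error e = (g(x) - y)^2
    when y = g(x) + eps with eps ~ N(0, noise_sd^2)."""
    errs = [random.gauss(0.0, noise_sd) ** 2 for _ in range(n)]
    mean = sum(errs) / n
    return sum((err - mean) ** 2 for err in errs) / n

# Theory: Var[eps^2] = 2 * noise_sd^4 for Gaussian noise, so doubling
# the noise scale multiplies the variance of the squared error by 16.
v1, v2, v4 = (var_of_squared_error(sd) for sd in (1.0, 2.0, 4.0))
print(v1, v2, v4)
```

The three printed values should be roughly 2, 32, and 512, i.e. the variance of the squared error explodes with the noise scale instead of staying under a fixed bound.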
#4
07-19-2016, 09:58 AM
 ntvy95 Member Join Date: Jan 2016 Posts: 37
Re: Exercise 4.7

I'm not sure whether I can re-interpret Figure 4.8 like this: if you train on your data with one horrible hypothesis, you will get a very bad generalization bound even when the number of data points is large?





The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.