LFD Book Forum Exercise 1.7

#1
04-20-2014, 06:00 PM
 netweavercn Junior Member Join Date: Jan 2014 Posts: 7
Exercise 1.7

I'm a little confused about Exercise 1.7. What is the purpose of this exercise: to show that learning is impossible?

In (a), both hypotheses have 1 target function that agrees on all 3 points and no agreement in the other 7 cases. So which hypothesis should the learning algorithm choose, since the two hypotheses score the same?

In (b), it is still the same situation.

In (c), there are 4 cases that agree.

In (d), is it possible to agree in all 8 cases?
#2
04-21-2014, 12:21 PM
 magdon RPI Join Date: Aug 2009 Location: Troy, NY, USA. Posts: 595
Re: Exercise 1.7

The purpose of the exercise is to show what can possibly happen outside the data, namely anything, and this is true no matter how you pick your final hypothesis g from the data.

Compare this with the bin in the next section: what can possibly remain in the bin after you pick your sample? Every possible combination of red and green marbles.

So it is always *possible* to end up with a hypothesis that is arbitrarily bad, no matter what algorithm you use to pick that hypothesis using only the data as a guide. That is what this exercise illustrates. In the next section you will learn that while anything is possible, some things are more likely than others.

Quote:
 Originally Posted by netweavercn
 A little confused about 1.7. what is the purpose of this exercise: learning is impossible? [...]
You may have misunderstood the question. In (a) you pick the hypothesis that is always ●, since that is what agrees with the data the most (from the table on the same page). Compare this with the possible choices for the target f: 1 agrees with g on all three test points; 3 agree with g on two test points; 3 agree with g on one test point; 1 agrees with g on none of the test points.
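That 1-3-3-1 count can be checked with a short enumeration over the 8 possible target functions on the three test points (a sketch; the encoding 1 = ●, 0 = ○ is my own convention, not from the book):

```python
from itertools import product

# Each of the three test points outside D can be labeled ● or ○ by the
# unknown target f, giving 2^3 = 8 possible target functions.
# g is the always-● hypothesis (encoded here as 1 = ●, 0 = ○).
g = (1, 1, 1)

# Count, for each k, how many of the 8 targets agree with g on exactly k points.
counts = {k: 0 for k in range(4)}
for f in product([0, 1], repeat=3):
    agree = sum(fi == gi for fi, gi in zip(f, g))
    counts[agree] += 1

print(counts)  # {0: 1, 1: 3, 2: 3, 3: 1}
```

The histogram is just the binomial coefficients C(3, k): exactly one target agrees everywhere, and exactly one disagrees everywhere.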
__________________
Have faith in probability
#3
04-22-2014, 05:03 PM
 netweavercn Junior Member Join Date: Jan 2014 Posts: 7
Re: Exercise 1.7

Thanks a lot. So for (a), whichever h you choose (white or black), the result is the same.
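That symmetry is easy to verify by running the same enumeration for both constant hypotheses (again a sketch with my own 1 = ●, 0 = ○ encoding):

```python
from itertools import product

def agreement_counts(g):
    """Histogram [n0, n1, n2, n3]: how many of the 8 possible targets
    agree with hypothesis g on exactly 0, 1, 2, or 3 test points."""
    counts = [0, 0, 0, 0]
    for f in product([0, 1], repeat=3):
        counts[sum(fi == gi for fi, gi in zip(f, g))] += 1
    return counts

# The always-● and always-○ hypotheses give the same 1-3-3-1 distribution.
print(agreement_counts((1, 1, 1)))  # [1, 3, 3, 1]
print(agreement_counts((0, 0, 0)))  # [1, 3, 3, 1]
```

Flipping every label of g just swaps which targets fall in the "agree on k" and "agree on 3-k" bins, and the 1-3-3-1 histogram is symmetric, so the distribution is unchanged.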

