LFD Book Forum — Am I understanding the analogy correctly?

#1
09-04-2018, 09:17 PM
huyptruong, Junior Member (Join Date: Sep 2018, Posts: 1)

On page 20, the book tries to connect the bin model to the learning problem. Here's what I understand:
1) Introduce a hypothesis h.
2) We can now compare h(x) and f(x) for all x in X. Think of the bin as completely sealed so we don't see the colors, yet there are some red and some green data points inside.
3) Due to h, we are introducing some probability to the bin X.
4) Now grab N data points from the bin X and look at them. We know their colors now. If most of them are red, we know we need to fix our hypothesis h, because it is extremely unlikely to draw a lot of red if h is very close to f (as asserted by Hoeffding's inequality).
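To make step 4 concrete, here is a minimal simulation sketch (my own, not from the book): mu plays the role of the true fraction of red marbles in the bin (the probability that h(x) != f(x)), nu is the fraction of red in a sample of size N, and Hoeffding says P[|nu - mu| > eps] <= 2*exp(-2*eps^2*N).

```python
import math
import random

def hoeffding_demo(mu=0.9, N=100, trials=10000, eps=0.1):
    """Simulate drawing N marbles from a bin with red fraction mu,
    and compare how often |nu - mu| > eps against the Hoeffding bound."""
    violations = 0
    for _ in range(trials):
        # nu = sample fraction of red marbles in one draw of N
        nu = sum(random.random() < mu for _ in range(N)) / N
        if abs(nu - mu) > eps:
            violations += 1
    empirical = violations / trials
    bound = 2 * math.exp(-2 * eps ** 2 * N)
    return empirical, bound

random.seed(0)  # fixed seed so the run is reproducible
emp, bound = hoeffding_demo()
print(f"empirical P[|nu-mu| > eps] = {emp}, Hoeffding bound = {bound:.3f}")
```

Note that the randomness in the simulation comes entirely from the sampling step (random.random() < mu); mu itself is a fixed number once h is fixed.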

This makes a lot of sense to me. However, I also watched video lecture 2, and it seems that my understanding is incorrect. The professor seems to say that the probability is introduced in order to generate the sample data points, not because of the hypothesis h. This is the complete opposite of my understanding, because I think the probability comes from the fact that we introduce a hypothesis h.

