Relation between feasibility of learning and Hoeffding's Inequality.
Hello.
I understand that learning is picking out, from a candidate set of functions, the one that most closely resembles the target function, and that the feasibility of learning is related to how close this resemblance can be made. I also understand that Hoeffding's Inequality is an upper bound on the probability that the in-sample error rate deviates significantly from the real (out-of-sample) error rate. In the end, this bound simply implies that, given a large enough sample, estimating the real error rate is feasible. Is there any misconception in anything so far?

So my question is: what does Hoeffding's Inequality say about the feasibility of learning? Shouldn't it be called the feasibility of verifying a hypothesis?
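For concreteness, the inequality I have in mind is the single-hypothesis form from the lectures, with N the sample size and \(\epsilon\) the tolerance:

\[
\mathbb{P}\bigl[\,|E_{\text{in}}(h) - E_{\text{out}}(h)| > \epsilon\,\bigr] \;\le\; 2e^{-2\epsilon^{2}N}
\]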
Re: Relation between feasibility of learning and Hoeffding's Inequality.
Your understanding is correct. The contrast between learning and verification that you allude to is precisely why the union bound was used in Lecture 2. The notion of the feasibility of learning will be further discussed next week in Lecture 4, where the question is split into two parts. Stay tuned!
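Concretely, because the final hypothesis g is selected from a hypothesis set of size M, the union bound from Lecture 2 loosens the single-hypothesis guarantee to

\[
\mathbb{P}\bigl[\,|E_{\text{in}}(g) - E_{\text{out}}(g)| > \epsilon\,\bigr] \;\le\; 2Me^{-2\epsilon^{2}N}
\]

Verification corresponds to M = 1; learning pays the extra factor of M for the freedom to choose g.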
Re: Relation between feasibility of learning and Hoeffding's Inequality.
Thanks for the lectures. I am a stats PhD student at MD (new to the ideas of ML). A friend recommended your site. :D
My understanding of Lecture 2 is that you are setting up a general framework to answer the question "Is this model feasible?". In the tossing-1000-coins-10-times analogy, each of the 1000 coins is the same; i.e., each of the possible h's in H is treated the same, at least for the purpose of finding a crude bound. The probability distribution placed on the input space X affects the bin contents, and hence the sample contents, for any h in the model H.

Question: in this first-step framework, does a small overall bound of, say, 0.001 imply that a g/model is verified as learnable? i.e., is any g/H learnable if you have a very large sample size and a reasonable M? Any comments/corrections from anyone are appreciated. Thanks
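To make the coin analogy concrete, here is a small simulation sketch of my own (not from the lecture; it assumes NumPy): flip 1000 fair coins 10 times each, then select the coin with the fewest heads, the way learning selects the best-looking h.

Code:
import numpy as np

rng = np.random.default_rng(0)
num_coins, num_flips, num_runs = 1000, 10, 2000

zero_head_runs = 0
for _ in range(num_runs):
    # nu for each coin: fraction of heads in 10 flips of a fair coin (mu = 0.5).
    flips = rng.integers(0, 2, size=(num_coins, num_flips))
    nu = flips.mean(axis=1)
    # "Learning" picks the coin that looks best in-sample.
    if nu.min() == 0.0:
        zero_head_runs += 1

# For one fixed coin, P[nu = 0] = 2**-10, under 0.1%.  Across 1000 coins,
# 1 - (1 - 2**-10)**1000 is roughly 62%, so the selected coin's nu = 0 is
# a wildly biased estimate of mu = 0.5.  Hoeffding still holds per coin;
# the union bound (the factor M) is what restores a guarantee after selection.
print("fraction of runs where the best coin showed no heads:",
      zero_head_runs / num_runs)

So a small overall bound does let you verify the selected g, but only if M (or its later refinement) is accounted for; the per-coin bound alone says nothing about the coin you picked.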