LFD Book Forum  

Book Feedback - Learning From Data > Chapter 1 - The Learning Problem

  #10  
04-13-2013, 01:40 AM
grozhd
Junior Member

Join Date: Apr 2013
Posts: 4
Re: Is the Hoeffding Inequality really valid for each bin despite non-random sampling

Quote:
Originally Posted by yaser
It is a subtle point, so let me try to explain it in the terms you outlined. Let us take the sample ${\cal D}$ (what you call $\bar{x}_0$; I use ${\cal D}$ just to follow the book's notation). Now evaluate $\nu$ for all hypotheses $h$ in your model ${\cal H}$. We didn't start at one $h$ and move to another; we simply evaluated $\nu$ for every $h \in {\cal H}$. The question is: does the Hoeffding Inequality apply to each of these $h$'s by itself? The answer is clearly yes, since each of them could in principle have been the hypothesis you started with (which you called $h_1$).

Hoeffding states what the probabilities are before the sample is drawn. When you choose one of these hypotheses because of its small $\nu$, as in the scenario you point out, the probability that applies now is conditioned on the sample having a small $\nu$. We can try to get a conditional version of Hoeffding to deal with this situation, or we can try to get a version of Hoeffding that applies regardless of which $h$ we choose and how we choose it. The latter is what we did using the union bound.

Finally, taking the example you illustrated, any hypothesis you use has to be in ${\cal H}$ (which is decided before the sample is drawn). The one you constructed is not guaranteed to be in ${\cal H}$. Of course you can guarantee that it is in ${\cal H}$ by taking ${\cal H}$ to be the set of all possible hypotheses, but in that case $M$ is thoroughly infinite and the multiple-bin Hoeffding does not guarantee anything at all.
Thank you for your reply. So we can think about learning as follows: we draw a random sample from the bin and then evaluate all the $\nu_i$ on it; in this interpretation it is absolutely clear that the sample is truly random. Then we choose the best hypothesis, and the logic that justifies this choice is Hoeffding's Inequality together with the union bound. The process of the PLA, for example, just allows us to search for this "best hypothesis", because the hypothesis set is infinite and we can't literally carry out the exhaustive evaluation described above.
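
For reference, the two guarantees being contrasted in this exchange can be written out explicitly; these are the book's bounds, restated here for convenience. For a single hypothesis $h$ fixed before the sample is drawn,

P[\,|\nu - \mu| > \epsilon\,] \le 2e^{-2\epsilon^2 N},

while for the final hypothesis $g$ chosen (in any way) from a finite hypothesis set ${\cal H}$ with $|{\cal H}| = M$, the union bound gives

P[\,|E_{\rm in}(g) - E_{\rm out}(g)| > \epsilon\,] \le 2M e^{-2\epsilon^2 N}.

The second bound is what licenses picking the hypothesis with the smallest $\nu$, and it becomes vacuous when $M$ is infinite, which is why the hypothesis constructed after seeing the sample is not covered.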
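
To make the difference tangible, here is a small Python simulation in the spirit of the book's coin-flipping exercise (an illustrative sketch, not part of the original exchange; the values of M, N, eps, and trials are arbitrary choices). Each fair coin plays the role of a hypothesis, $\nu$ is its fraction of heads in $N$ flips, and we compare a coin fixed in advance with the coin selected for its smallest $\nu$.

import numpy as np

# Illustrative sketch: M fair "coins" stand in for M hypotheses, each flipped
# N times; nu is the fraction of heads per coin. Parameters are arbitrary.
rng = np.random.default_rng(0)
M, N, eps, trials = 1000, 10, 0.3, 10000

fixed_bad = 0    # how often |nu - 0.5| > eps for one coin fixed in advance
chosen_bad = 0   # how often |nu - 0.5| > eps for the coin with the smallest nu

for _ in range(trials):
    nu = rng.binomial(N, 0.5, size=M) / N   # nu for every "hypothesis" at once
    fixed_bad += abs(nu[0] - 0.5) > eps
    chosen_bad += abs(nu.min() - 0.5) > eps

bound = 2 * np.exp(-2 * eps**2 * N)          # single-hypothesis Hoeffding bound
print(f"fixed coin : {fixed_bad / trials:.3f}  (Hoeffding bound {bound:.3f})")
print(f"min-nu coin: {chosen_bad / trials:.3f}  (same bound does not apply)")

The coin fixed in advance stays well within the single-bin bound, while the coin selected for its small $\nu$ exceeds it in almost every trial; that gap is exactly what the factor $M$ in the union bound is there to absorb.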