02-22-2013, 05:36 PM
yaser (Caltech) | Join Date: Aug 2009 | Location: Pasadena, California, USA | Posts: 1,477
Re: Hoeffding inequality for multiple classifiers

Quote:
 Originally Posted by cls2k: I'm having some trouble understanding the case of applying Hoeffding to multiple classifiers (bins). Shouldn't the final picked hypothesis g* still be bound by Hoeffding's inequality, since it's just like any other hypothesis in the set? How does the process of picking the hypothesis based on the data affect the Hoeffding bound? What if I pick the worst hypothesis instead of the best one? Shouldn't Hoeffding's bound apply to that too?
The assumptions used to prove Hoeffding's inequality necessitate that the hypothesis h not depend on the sample, which is violated when h is the final hypothesis g, since g was chosen according to the sample. Without this assumption, the proof doesn't go through.

Quote:
 While I understand the mathematics behind the union bound, it seems unintuitive that the bound on g* should be a union of all the bounds on the h's in the set, since the final g* does not have anything to do with the other unpicked hypotheses.
It is just a bound that can be asserted without making assumptions about what depends on what, so it is valid even if a more careful analysis in a particular case yields a tighter bound.
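In symbols, the union bound gives the hypothesis-set version of Hoeffding (a sketch, using the course's notation with a finite set of M hypotheses and sample size N):

```latex
\mathbb{P}\bigl[\,|E_{\text{in}}(g) - E_{\text{out}}(g)| > \epsilon\,\bigr]
\;\le\; \sum_{m=1}^{M} \mathbb{P}\bigl[\,|E_{\text{in}}(h_m) - E_{\text{out}}(h_m)| > \epsilon\,\bigr]
\;\le\; 2M\,e^{-2\epsilon^{2}N}
```

The first inequality holds no matter which h_m the learning algorithm ends up picking, which is why no independence assumption about g is needed.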

Quote:
 I do understand the coin example: the chance of getting 10 heads in a row for one coin is very low, but it's actually high if you repeat the experiment 1000 times. However, I'm unsure how this relates to the learning scenario. Getting 10 heads on a sample would be equivalent to getting an Ein of 0, but it's mentioned again and again that this is a "bad" event. How does this have anything to do with the Hoeffding bound?
Equate getting a head with making an error, and that explains the 'bad event' part. The relation to learning is that the sample suggests the coin is not fair when in fact it is, which means generalization is poor.
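The arithmetic behind the coin example can be sketched in a few lines of Python (the numbers 1000 coins and 10 flips are taken from the question above):

```python
def prob_some_coin_all_heads(n_coins=1000, n_flips=10):
    """Probability that at least one fair coin out of n_coins
    comes up heads on all n_flips tosses (the 'bad event')."""
    p_one = 0.5 ** n_flips              # one coin: all heads, here 1/1024
    # Complement of "no coin gets all heads", assuming independent coins.
    return 1 - (1 - p_one) ** n_coins

print(prob_some_coin_all_heads())
```

For one coin the bad event has probability about 0.001, but across 1000 coins it happens with probability around 0.62, so seeing some hypothesis with a perfect sample tells you very little by itself.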
__________________
Where everyone thinks alike, no one thinks very much