#13
Re: Problem 1.10: Expected Off-Training-Set Error
Hoeffding applies to a single hypothesis h. To take a concrete case of the setting in the problem, suppose that f is obtained by flipping a fair coin for every data point x to obtain y. The problem proves that your expected off-training-set error is 0.5; no surprise, since f is random.
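As a sanity check (not from the original post), here is a minimal simulation of that coin-flip setting. The hypothesis `h` that always predicts +1 and the sample sizes are my own illustrative choices; the point is only that, averaged over random targets, any fixed hypothesis gets an off-training-set error of about 0.5:

```python
import random

random.seed(0)

def mean_off_training_error(n_trials=20000, n_points=5):
    """Average off-training-set error of a fixed hypothesis when the
    target f is a fair coin flip at every point."""
    errors = 0
    total = 0
    for _ in range(n_trials):
        for _ in range(n_points):
            y = random.choice([-1, +1])  # random target label
            h_x = +1                     # fixed hypothesis: always predict +1
            if y != h_x:
                errors += 1
            total += 1
    return errors / total

print(mean_off_training_error())  # close to 0.5
```

Any other fixed hypothesis (or any deterministic learning rule) gives the same 0.5 on average, since its prediction is independent of the coin flip.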
Hoeffding tells you that your in-sample error Ein (if N is large enough) will not deviate far from Eout with very high probability. This means that, with very high probability, Ein will be close to 0.5.

Problem 1.10 says that if you are in such a pathological learning situation with a random f, then no matter what you do in sample, your out-of-sample performance will be very bad. (This is sometimes called no free lunch, because you cannot expect to succeed unless you assume that f is not completely random.) Hoeffding (and VC) says that, provided your hypothesis set is not too large compared to N, you will know that you are in a bad situation, because your Ein will reflect Eout and be close to 0.5.

The nature of your hypothesis with respect to f determines how good your Eout is; if f is completely random, then Eout will be close to 0.5. Hoeffding's role is to ensure that Ein will tell you that you are in this tough situation.

To summarize: Problem 1.10 identifies one case in which you are in trouble and Eout will be 0.5. Hoeffding tells you when you will be able to know that you are in trouble.
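To make the "Ein will reflect Eout" claim concrete, here is a small check (my own sketch, not from the post) of the two-sided Hoeffding bound P(|Ein - Eout| > eps) <= 2 exp(-2 eps^2 N) for a single hypothesis. With a random target, Eout = 0.5 for the always-+1 hypothesis, so each in-sample error indicator is a fair coin flip; the values N = 1000 and eps = 0.05 are arbitrary choices for illustration:

```python
import math
import random

random.seed(1)

def hoeffding_check(N=1000, n_runs=2000, eps=0.05):
    """Empirical frequency of |Ein - 0.5| > eps versus the Hoeffding bound,
    for one fixed hypothesis against a random (coin-flip) target."""
    bad = 0
    for _ in range(n_runs):
        # Each point is an error with probability 1/2 (random target),
        # so Ein is the mean of N fair Bernoulli error indicators.
        ein = sum(random.choice([0, 1]) for _ in range(N)) / N
        if abs(ein - 0.5) > eps:
            bad += 1
    bound = 2 * math.exp(-2 * eps**2 * N)
    return bad / n_runs, bound

freq, bound = hoeffding_check()
print(freq, "<=", bound)  # empirical frequency sits below the bound
```

The observed frequency of large deviations stays below 2 exp(-2 eps^2 N); Ein is indeed pinned near 0.5, which is exactly how you would find out, in sample, that you are in the pathological situation.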
__________________
Have faith in probability 

