#1
Hi,
If I got it right: in a noiseless setting, for a fixed D, if all f are equally likely, then the expected off-training-set error of any hypothesis h is 0.5 (part (d) of Problem 1.10, page 37), and hence any two algorithms are the same in terms of expected off-training-set error (part (e) of the same problem).

My question is: does this not contradict generalization via Hoeffding's inequality? Specifically, the following point is bothering me. By Hoeffding, E_in approaches E_out as N grows sufficiently large (for any small epsilon, even with a large but finite number of hypotheses). That would imply that the expected E_out should be approximately the same as the expected E_in, and not a constant (0.5).

Can you please provide some insight on this? Perhaps my comparison is erroneous. Thanks.
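To make sure I set the problem up correctly, I wrote a small sanity-check script; the toy input space, training labels, and hypothesis in it are just illustrative choices, not values from the problem. It enumerates every target f consistent with D and averages the off-training-set error of one fixed h.

Code:
# Sanity check for Problem 1.10(d): average the off-training-set error of a
# fixed hypothesis h over all targets f that agree with the training data.
import itertools

X = list(range(6))                 # toy input space of 6 points
D = X[:4]                          # training points
y = {x: 1 for x in D}              # arbitrary (noiseless) training labels
off_points = X[4:]                 # the off-training-set points

h = {x: 1 for x in X}              # an arbitrary fixed hypothesis

errors = []
for values in itertools.product([0, 1], repeat=len(X)):   # all 2^6 targets
    f = dict(zip(X, values))
    if any(f[x] != y[x] for x in D):    # keep only targets consistent with D
        continue
    errors.append(sum(h[x] != f[x] for x in off_points) / len(off_points))

print(sum(errors) / len(errors))   # 0.5, no matter how h is chosen

It prints 0.5 whichever h I plug in, which matches part (d).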
#2
On face value, the statement "all f are equally likely" may look like a neutral way of saying that we know nothing about the target, but it is in fact a very strong assumption: it says that f is a completely random function, so its values off the training set are unrelated to its values on the training set, and no algorithm can learn it.

In terms of E_in and E_out, Hoeffding is not contradicted. For a hypothesis set small enough for the bound to be meaningful, E_in does track E_out; it is just that both end up near 0.5, since no hypothesis can beat chance on a random target outside the data.

This is why learning was decomposed into two separate questions in this chapter. In terms of these two questions, the one that "fails" in the random function approach is "can we make E_in small?" (while still generalizing), not "is E_out close to E_in?".

Let me finally comment that treating "all f are equally likely" as an expression of complete ignorance about f is itself a strong assumption rather than the absence of one; this point comes up again in the discussion of priors in Bayesian learning:

http://work.caltech.edu/library/182.html
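A quick numerical sanity check of this point, as an illustrative sketch: take an arbitrary fixed hypothesis and a target whose labels are independent fair coin flips (standing in for the "random function" case); the particular hypothesis and sample size below are arbitrary choices.

Code:
# Illustrative sketch: a "random target" is simulated by labelling each point
# with an independent fair coin flip; h is an arbitrary fixed hypothesis.
import random

random.seed(0)
N = 10_000                                   # sample size
h = lambda x: 1                              # arbitrary fixed hypothesis

sample = [(x, random.choice([0, 1])) for x in range(N)]
E_in = sum(h(x) != label for x, label in sample) / N
print(E_in)

E_in comes out near 0.5, matching E_out, so Hoeffding's guarantee is intact; what cannot be done is making E_in small.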
__________________
Where everyone thinks alike, no one thinks very much
#3
My takeaway point: "all f are equally likely" corresponds to trying to learn a randomly generated target function.

Thanks for the detailed explanation. The Bayesian Learning example highlights the ramifications of this assumption; a very useful point indeed.