Quote:
JJ: That is, it would seem that if for a given learning problem we have N\epsilon^2 such that \delta is sufficiently small, and if the learning algorithm run on this problem claims that its hypothesis is a good approximation to the target, then we should accept this claim.
Perhaps I am missing some subtlety, but this is exactly what is being said in Section 1.3.
Quote:
MMI: Either your g is good or you generated an unreliable data set
Hoeffding says that the probability of an unreliable data set is at most

\delta = 2e^{-2N\epsilon^2},

which is tiny for large N, and so you can "safely" assume the data set was reliable and trust what Ein says.
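To see how quickly this probability falls off, here is a minimal Python sketch (my own illustration, not from the book; the function name hoeffding_bound is mine) that evaluates 2e^{-2N\epsilon^2} for a few sample sizes at \epsilon = 0.1:

```python
import math

def hoeffding_bound(N, eps):
    """Upper bound on P[|Ein - Eout| > eps] for a single hypothesis."""
    return 2 * math.exp(-2 * N * eps ** 2)

# Probability bound on having generated an "unreliable" data set
for N in (100, 1000, 10000):
    print(f"N = {N:5d}: delta <= {hoeffding_bound(N, eps=0.1):.3e}")
```

Already at N = 1000 the bound is on the order of 10^-9, so trusting Ein is a very safe bet.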
Quote:
JJ: I believe that I can, in fact, show that there is a better theory that encompasses both traditional Bayesian decision theory and the defense of learning presented above.
That would be very interesting, though perhaps a little beyond the scope of this book. Our approach is to view the feasibility of learning in two steps:
1. Ensure you are generating a reliable data set with HIGH probability (possible given Hoeffding; see the simulation sketch after this list).
2. Proceed as though the data set is reliable, which is not guaranteed, but is a reasonable assumption given step 1.
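To make step 1 concrete, here is a small simulation sketch (again my own illustration, in the spirit of the bin model of Section 1.3: a coin with unknown bias mu plays the role of Eout, and the sample frequency nu plays the role of Ein). It estimates how often a generated data set turns out unreliable and compares that with the Hoeffding bound:

```python
import math
import random

def unreliable_fraction(mu=0.6, N=1000, eps=0.05, trials=10000, seed=0):
    """Estimate P[|nu - mu| > eps]: the chance of generating an unreliable data set."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        nu = sum(rng.random() < mu for _ in range(N)) / N  # sample frequency
        if abs(nu - mu) > eps:
            bad += 1
    return bad / trials

estimate = unreliable_fraction()
bound = 2 * math.exp(-2 * 1000 * 0.05 ** 2)
print(f"empirical P[unreliable] ~ {estimate:.4f}, Hoeffding bound = {bound:.4f}")
```

The empirical frequency of unreliable data sets should come out well below the bound, which is exactly why proceeding as though the data set is reliable (step 2) is a reasonable bet.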