Old 08-01-2016, 04:02 PM
magdon
Re: Section 1.3 argument for feasibility of learning is fundamentally flawed

JJ: That is, it would seem that if, for a given learning problem, N\epsilon^2 is large enough that \delta is sufficiently small, and if the learning algorithm run on this problem claims that its hypothesis is a good approximation to the target, then we should accept this claim.
Perhaps I am missing some subtlety, but this is exactly what is being said in Section 1.3.

MMI: Either your g is good or you generated an unreliable data set.
Hoeffding says that the probability of an unreliable data set is tiny (say 10^{-15}), so you can "safely" assume the data set was reliable and trust what E_in says.
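A probability on the order of 10^{-15} falls straight out of the Hoeffding bound 2e^{-2\epsilon^2 N} once N is large enough. A minimal sketch; the tolerance \epsilon = 0.05 and sample size N = 7100 are illustrative choices, not numbers from this discussion:

```python
import math

def hoeffding_bound(eps, N):
    """Hoeffding: P[|E_in - E_out| > eps] <= 2*exp(-2*eps^2*N)."""
    return 2 * math.exp(-2 * eps**2 * N)

# Illustrative numbers: tolerance eps = 0.05, sample size N = 7100.
delta = hoeffding_bound(0.05, 7100)
print(delta)  # below 1e-15: an "unreliable" data set is vanishingly unlikely
```

Note the bound depends only on \epsilon and N, not on the target or the distribution generating the data, which is what makes the "trust E_in" argument usable before seeing anything about the target.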

JJ: I believe that I can, in fact, show that there is a better theory that encompasses both traditional Bayesian decision theory and the defense of learning presented above.
That would be very interesting, though perhaps a little beyond the scope of this book. Our approach is to view the feasibility of learning in two steps:

1. Ensure you are generating a reliable data set with HIGH probability (possible, given Hoeffding).
2. Proceed as though the data set is reliable; this is not guaranteed, but it is a reasonable assumption given 1.
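The two-step view can be illustrated with a quick simulation: draw many data sets from a coin with a known "out-of-sample" bias, and count how often a data set is unreliable in the sense |E_in - E_out| > \epsilon. The parameters below (bias 0.3, N = 100, \epsilon = 0.1) are illustrative, not from the book:

```python
import math
import random

def unreliable_fraction(mu, N, eps, trials=20000):
    """Fraction of simulated data sets whose sample frequency E_in
    strays from the true frequency mu (= E_out) by more than eps."""
    bad = 0
    for _ in range(trials):
        e_in = sum(random.random() < mu for _ in range(N)) / N
        if abs(e_in - mu) > eps:
            bad += 1
    return bad / trials

# Illustrative numbers: true bias mu = 0.3, sample size N = 100, eps = 0.1.
empirical = unreliable_fraction(0.3, 100, 0.1)
bound = 2 * math.exp(-2 * 0.1**2 * 100)  # Hoeffding bound, about 0.27
print(empirical, "<=", bound)
```

The empirical fraction sits well below the Hoeffding bound, which is what step 1 relies on: the bound is loose but distribution-free, so it holds no matter what the target is.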
Have faith in probability