magdon
08-01-2016, 11:05 AM
Re: Section 1.3 argument for feasibility of learning is fundamentally flawed

You are correct. It is possible to be in an adversarial ML setting (a target f that is bad with respect to your hypothesis set H) in which Eout is bad, regardless of what Ein is.

What Hoeffding gives you for the feasibility of learning is this: when you output your g because Ein is small,

Either your g is good or you generated an unreliable data set

(that statement is true as a tautology). What Hoeffding adds is that the probability of generating an unreliable data set, one on which Ein strays far from Eout, is exponentially small in N. For the learning scenario in this thread, this means that most of the time Ein will not be low and you will say you failed. Only very rarely will you think you succeeded when in fact you failed.
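To make the "unreliable data sets are rare" claim concrete, here is a minimal Monte Carlo sketch (my own construction, not from the post): a single fixed hypothesis with true error mu, where a data set is "unreliable" when |Ein - Eout| exceeds a tolerance eps. The empirical frequency of such data sets is compared against the Hoeffding bound 2*exp(-2*eps^2*N).

```python
import math
import random

# Hypothetical parameters for illustration (not from the original post).
random.seed(0)
N = 100          # sample size
eps = 0.1        # tolerance for |Ein - Eout|
mu = 0.5         # true out-of-sample error Eout of the fixed hypothesis
trials = 20000   # number of independently generated data sets

bad = 0          # count of "unreliable" data sets
for _ in range(trials):
    # Ein = fraction of errors observed on a sample of size N
    ein = sum(random.random() < mu for _ in range(N)) / N
    if abs(ein - mu) > eps:
        bad += 1

empirical = bad / trials
bound = 2 * math.exp(-2 * eps**2 * N)  # Hoeffding: 2*e^(-2) ~ 0.271
print(empirical, "<=", bound)
```

The empirical frequency comes out well under the bound, as it must; Hoeffding is a worst-case guarantee, so the observed rate of unreliable samples is typically much smaller than 2*exp(-2*eps^2*N). (For a learned g chosen from a finite H, the book's Section 1.3 bound picks up an extra factor of M via the union bound.)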

In general, with high probability you will:

Either say you failed or produce a good g.
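The two-outcome statement above can also be checked numerically. A sketch under assumed parameters of my own choosing (not from the post): in the adversarial scenario the best g still has Eout = 0.5, and we "declare success" only when Ein is at most 0.1. Hoeffding implies the misleading outcome, low Ein despite bad Eout, is astronomically rare.

```python
import random

# Hypothetical adversarial scenario (my own numbers, for illustration):
# the learned g has true error eout = 0.5, i.e. it is genuinely bad.
random.seed(1)
N = 100          # sample size
eout = 0.5       # true out-of-sample error of g
accept = 0.1     # we declare success only if Ein <= accept
trials = 10000

false_success = 0
for _ in range(trials):
    ein = sum(random.random() < eout for _ in range(N)) / N
    if ein <= accept:
        false_success += 1   # Ein looked good on an unreliable sample

# With these numbers P[Ein <= 0.1] is about 1e-17 per data set,
# so no false successes are observed: prints 0.0.
print(false_success / trials)
```

In every trial you either say you failed (Ein stays near 0.5) or, in a vanishingly rare world, you are fooled; the simulation never hits the rare branch, which is exactly the "with high probability" clause.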
__________________
Have faith in probability