Thanks for the lectures. I am a stats PhD student at MD (new to the ideas of ML). A friend recommended your site.

My understanding of Lecture 2 is that you are setting up a general framework to answer the question "Is this model feasible?".

In the "tossing 1000 coins 10 times each" analogy, each of the 1000 coins is the same.

i.e., each of the possible h's in H is treated the same in some sense, at least for the goal of finding a crude bound.
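To check my intuition about the analogy, here is a quick simulation (my own sketch, assuming fair coins, with 10 heads out of 10 flips playing the role of a hypothesis that looks perfect on the sample by luck):

```python
import random

random.seed(0)

# P[a single fair coin shows 10 heads in 10 flips]
p_all_heads = 0.5 ** 10  # = 1/1024

# Exact: P[at least one of 1000 coins comes up all heads]
exact = 1 - (1 - p_all_heads) ** 1000  # about 0.62

# Monte Carlo check. One draw of random.random() < p_all_heads is
# equivalent in distribution to flipping one coin 10 times and
# asking whether it was all heads.
def some_coin_all_heads(num_coins=1000):
    return any(random.random() < p_all_heads for _ in range(num_coins))

trials = 2000
estimate = sum(some_coin_all_heads() for _ in range(trials)) / trials
print(exact, estimate)  # both around 0.62
```

So even though any one fixed coin is very unlikely to look perfect, with 1000 of them it happens most of the time, which is why the bound over the whole H has to pay the factor M.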

The probability distribution placed on the input space X affects the bin content, and hence the sample content, for any h in the model H.

__Question:__ In this first-step framework, does a small (overall) bound of, say, 0.001 imply that a g/model is verified as learnable? i.e., is any g/H learnable if you have a very large sample size and a reasonable M?
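To make my question concrete, here is a small sketch of the union-bound form of Hoeffding from the lecture, P[|E_in(g) - E_out(g)| > eps] <= 2 M exp(-2 eps^2 N), asking how large N must be before the bound drops below 0.001 (the function name and the particular numbers M = 1000, eps = 0.1 are mine):

```python
import math

def hoeffding_union_bound(M, N, eps):
    """Bound on P[|E_in(g) - E_out(g)| > eps] over M hypotheses."""
    return 2 * M * math.exp(-2 * eps ** 2 * N)

# With M = 1000 hypotheses and tolerance eps = 0.1, find the
# smallest N that pushes the bound below 0.001.
M, eps, target = 1000, 0.1, 0.001
N = 1
while hoeffding_union_bound(M, N, eps) > target:
    N += 1
print(N)  # 726
```

So for any finite M, a large enough N does drive the bound as small as you like, which is what I mean by "a very large sample size and reasonable M".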

Any comments/corrections from anyone are appreciated. Thanks!