#2, 01-26-2013, 09:11 PM
magdon (RPI; Troy, NY, USA; member since Aug 2009; 595 posts)
Default Re: Question on VC dimension

If d_{vc}(\cal H)<\infty, then with enough data, E_{in} \approx E_{out} for every hypothesis in your learning model. This is why the final hypothesis will generalize: it is one of the hypotheses in \cal H. The connection to learning comes when you pick the hypothesis with minimum E_{in}.
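To see the "E_{in} \approx E_{out} for every hypothesis" part numerically, here is a small sketch under assumptions that are mine, not from the post: a target f(x) = [x > 0.3] on [0, 1] and a hypothesis set of 21 threshold functions h_t(x) = [x > t] (positive rays, d_{vc} = 1). The worst-case gap max_h |E_{in}(h) - E_{out}(h)| over the whole set shrinks as N grows:

```python
import random

random.seed(0)

# Hypothetical setup (not from the post): target f on [0, 1] and a
# finite set of 21 threshold hypotheses h_t(x) = [x > t] (d_vc = 1).
def f(x):
    return 1 if x > 0.3 else 0

thresholds = [i / 20 for i in range(21)]

def err(t, xs):
    # Fraction of points where h_t disagrees with f.
    return sum((1 if x > t else 0) != f(x) for x in xs) / len(xs)

# Large independent sample to estimate E_out for each hypothesis.
test_pts = [random.random() for _ in range(100_000)]

# Worst-case E_in vs E_out gap over ALL hypotheses, for growing N.
gaps = {}
for N in (10, 100, 1000, 10_000):
    train = [random.random() for _ in range(N)]
    gaps[N] = max(abs(err(t, train) - err(t, test_pts)) for t in thresholds)
    print(N, round(gaps[N], 3))
```

The printed gap shrinks with N: with a finite-d_{vc} hypothesis set, every E_{in} tracks its E_{out} simultaneously once N is large enough.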

Because E_{in} \approx E_{out} for every hypothesis, when you look at E_{in} and pick the hypothesis with minimum E_{in}, that hypothesis will also have (approximately) minimum E_{out}. So you are able to learn (figure out) the best hypothesis in your model using the data.

Well, that is the first step in learning: can you pick out (i.e. learn) the best hypothesis available to you? That does not mean the hypothesis is a good one, but it is the first step.

The second step is to ask whether this hypothesis that was "learned" is good enough. You will know whether it is good or not by looking at its E_{in} (since E_{in} is close to E_{out}). If you chose a good \cal H, the answer to this second step will be yes. If your \cal H is bad, the answer will be no and you will declare that you failed; but at least you know that you failed.
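The "bad \cal H, but you know you failed" case can be sketched too, again under my own hypothetical setup: keep the same target f(x) = [x > 0.3], but shrink the hypothesis set to just the two constant functions. Learning still succeeds in step one (it finds the best constant), but the learned hypothesis's E_{in} stays large, and since E_{in} \approx E_{out}, that large E_{in} is exactly how you detect the failure.

```python
import random

random.seed(2)

# Hypothetical target (not from the post).
def f(x):
    return 1 if x > 0.3 else 0

# A deliberately bad hypothesis set: only the two constant functions.
constants = [0, 1]

def err(c, xs):
    return sum(c != f(x) for x in xs) / len(xs)

train = [random.random() for _ in range(5000)]

# Step one still works: we learn the best hypothesis in this (bad) H.
g = min(constants, key=lambda c: err(c, train))

# Step two: E_in of g stays near 0.3, so we KNOW the model failed.
ein_g = err(g, train)
print(g, round(ein_g, 2))
```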

Quote:
Originally Posted by Suhas Patil View Post
I am trying to understand this concept: "If the VC dimension is finite, the final hypothesis will generalize". But somehow I am not able to relate the VC dimension to learning. Can someone help?
Thank you for your attention.
__________________
Have faith in probability