LFD Book Forum  

LFD Book Forum > General > General Discussion of Machine Learning

  #1  
04-16-2016, 10:00 AM
lirongr
Junior Member
 
Join Date: Apr 2016
Posts: 2
The VC dimension, complexity, and hypothesis set

Dear Professor Abu-Mostafa,
We said that the larger the hypothesis set, the lower the out-of-sample error would be. My question is: how do we measure the size of the hypothesis set? In one of the lectures you said that the perceptron has an infinitely large hypothesis set (an infinite number of w's, if I understand correctly). Yet the perceptron is supposed to be a very simple model, so I would expect a large out-of-sample error.
So I may be confusing the number of hypotheses we can generate for a given model with its complexity, but how can we estimate the complexity? Is it by the VC dimension of the model? What is the relationship between the VC dimension, the complexity of the model, and the number of hypotheses we can generate? And how can we assess the complexity in cases where the VC dimension is not defined (e.g., regression)?
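To make my confusion concrete, I tried a small Python sketch (my own code, just to illustrate what I mean, with my own helper names pla_separable and growth). A 2-D perceptron has infinitely many hypotheses w, yet if I count dichotomies instead of hypotheses, 3 points in general position give all 2^3 = 8 labelings, while 4 points give at most 14 of 16 (the two XOR labelings are not linearly separable), which would suggest the relevant "size" is the growth function, and a VC dimension of 3:

```python
from itertools import product

def pla_separable(points, labels, max_updates=10000):
    """Run the perceptron learning algorithm on bias-augmented inputs.
    PLA converges if and only if the labeling is linearly separable,
    so hitting max_updates is taken to mean "not separable"."""
    xs = [(1.0, x, y) for (x, y) in points]  # prepend bias coordinate
    w = [0.0, 0.0, 0.0]
    for _ in range(max_updates):
        mis = [(x, s) for x, s in zip(xs, labels)
               if (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1) != s]
        if not mis:
            return True          # all points classified correctly
        x, s = mis[0]
        w = [wi + s * xi for wi, xi in zip(w, x)]  # PLA update step
    return False

def growth(points):
    """Count the dichotomies on `points` realizable by a 2-D perceptron."""
    return sum(pla_separable(points, labels)
               for labels in product((-1, 1), repeat=len(points)))

three = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]               # general position
four  = [(0.0, 0.0), (1.0, 1.0), (1.0, 0.0), (0.0, 1.0)]   # square (XOR layout)

print(growth(three))  # 8  = 2^3: three points are shattered
print(growth(four))   # 14 < 2^4: the two XOR labelings are not separable
```

So even though the number of hypotheses is infinite, the number of distinct behaviors on a finite sample is not, and that is what I suspect "complexity" should measure.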
Thank you very much for your time and help,
Liron



The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.