LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   General Discussion of Machine Learning (http://book.caltech.edu/bookforum/forumdisplay.php?f=105)
-   -   The VC dimension, complexity, and hypothesis set (http://book.caltech.edu/bookforum/showthread.php?t=4668)

lirongr 04-16-2016 10:00 AM

The VC dimension, complexity, and hypothesis set
Dear Professor Abu Mostafa,
We said that the larger the hypothesis set, the lower the out-of-sample error would be. My question is: how do we measure the size of the hypothesis set? In one of the lectures you said that the perceptron has an infinitely large set of hypotheses (an infinite number of w's, if I understand correctly). Yet the perceptron is supposed to be a very simple model, so I would expect a large out-of-sample error.
So I may be confusing the number of hypotheses we can generate for a given model with its complexity, but how can we estimate the complexity? Is it by the VC dimension of the model? What is the relationship between the VC dimension, the complexity of the model, and the number of hypotheses we can generate? And how can we assess the complexity in cases where the VC dimension is not defined (e.g., regression)?
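(A small illustration of the puzzle above, not from the book: although the perceptron has infinitely many weight vectors w, on a finite set of N points it can only realize finitely many distinct labelings, i.e. dichotomies — this "effective" count is the growth function the VC analysis uses. The sketch below brute-forces that count for a 2-D perceptron; it uses perceptron learning itself as a separability check, capping the epochs, so the non-separable verdict is a heuristic of this sketch.)

```python
import itertools
import numpy as np

def separable(X, y, epochs=1000):
    """Heuristic linear-separability check: perceptron learning converges
    iff the labeling is separable; we cap the epochs, so a False answer
    is a (here, safe-in-practice) heuristic."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend bias coordinate x0=1
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:    # misclassified (or on the boundary)
                w = w + yi * xi       # perceptron update rule
                updated = True
        if not updated:
            return True               # zero training error => separable
    return False

def num_dichotomies(X):
    """Count the labelings of the points X that a 2-D perceptron can realize."""
    return sum(separable(X, np.array(labels))
               for labels in itertools.product([-1, 1], repeat=len(X)))

three = np.array([[0., 0.], [1., 0.], [0., 1.]])            # general position
four  = np.array([[0., 0.], [1., 1.], [1., 0.], [0., 1.]])  # unit square

print(num_dichotomies(three))  # 8 = 2^3: three points are shattered
print(num_dichotomies(four))   # 14 of 16: the two XOR labelings fail
```

So even though there are infinitely many w's, the perceptron shatters 3 points but not 4 (its VC dimension in 2-D is 3, and d + 1 in general), which is why it counts as a simple model despite the infinite hypothesis set.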
Thank you very much for your time and help,

The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.