LFD Book Forum  

#1 | 01-30-2013, 11:36 AM
Anne Paulson (Senior Member)

VC dimension independent of probability distribution

In Lecture 7, we learn, I think, that if we have a finite VC dimension, then whatever error rate our chosen hypothesis g achieves on the training set will generalize to the whole input space X, subject to the bounds we know about. That is, with at least some probability that we can compute, the error rate on our training set will be close to the error rate on the whole input space.

And we further learn, I think, that this generalization holds independent of the probability distribution we used to choose our training set.

But now I'm confused. Are we assuming that we use the same probability distribution when computing the error rate on the whole input space? That is, do we check the error on every single point, but weight more heavily the points that were more likely to be in the training set, so that it's an expectation over the input space under that probability distribution, rather than just an error rate over the entire input space with a uniform distribution?

Otherwise it doesn't make sense to me. Seems like we could rig the training set to make our cockamamie hypothesis look good.
#2 | 01-30-2013, 12:09 PM
yaser (Caltech)

Re: VC dimension independent of probability distribution

Quote: Originally Posted by Anne Paulson
But now I'm confused. Are we assuming that we use the same probability distribution when computing the error rate on the whole input space? That is, do we check the error on every single point, but weight more heavily the points that were more likely to be in the training set, so that it's an expectation over the input space under that probability distribution, rather than just an error rate over the entire input space with a uniform distribution?

Otherwise it doesn't make sense to me. Seems like we could rig the training set to make our cockamamie hypothesis look good.

The source of randomization in the VC inequality is the choice of the training set {\cal D}. The assumption is that this set is generated according to some probability distribution P on {\cal X}, independently from one data point in {\cal D} to the next. The same probability distribution is used to compute E_{\rm out} by averaging over the whole input space {\cal X}.

Now, the statement that the VC inequality is independent of the probability distribution means that it holds for any probability distribution. Any training set you pick, according to any probability distribution (or even a rigged training set for that matter) will have at most as many dichotomies as the value of the growth function, since the growth function is defined as a maximum. Since that value is what is needed for the proof of the VC inequality, the inequality will always hold (more loosely for certain probability distributions, but will nonetheless hold).
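
To make the "maximum" point concrete, here is a small sketch (my own illustration, not from the lecture) using the hypothesis set of positive rays h(x) = sign(x - a), whose growth function is m_H(N) = N + 1. However the N points are chosen, even adversarially, the number of dichotomies realized never exceeds that maximum:

[code]
import numpy as np

def ray_dichotomies(points):
    """Distinct dichotomies that positive rays h(x) = sign(x - a) induce on points."""
    pts = np.sort(np.asarray(points, dtype=float))
    # One threshold below all points, one between each consecutive pair, one above all;
    # these candidates realize every dichotomy the rays can produce on these points.
    cuts = np.concatenate(([pts[0] - 1.0], (pts[:-1] + pts[1:]) / 2.0, [pts[-1] + 1.0]))
    return {tuple(np.where(pts > a, 1, -1)) for a in cuts}

rng = np.random.default_rng(0)
N = 10
samples = [rng.uniform(size=N),        # a "nice" distribution
           rng.exponential(size=N),    # a skewed distribution
           np.linspace(0.0, 1e-6, N)]  # a deliberately "rigged" cluster of points
for pts in samples:
    print(len(ray_dichotomies(pts)), "<= m_H(N) =", N + 1)  # never exceeded
[/code]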

Having said that, we are still not allowed to "rig" the choice of the training set {\cal D}, not because the number of dichotomies will be a problem -- it won't be, but because the basic premise of Hoeffding, on which the VC inequality is built, is that the points in {\cal D} are picked independently according to some probability distribution. You can rig the probability distribution if you want, but you still have to pick your data points independently from it, and use the same probability distribution to compute E_{\rm out}.
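
As a numerical illustration of that last point (the target, hypothesis, and distributions below are invented for the example): you can rig P to avoid the region where a bad hypothesis errs, but then E_{\rm out} computed under that same P is small as well, so E_{\rm in} still tracks E_{\rm out}. The hypothesis only "looks good" relative to P; nothing is claimed about any other distribution.

[code]
import numpy as np

rng = np.random.default_rng(1)

f = lambda x: np.where(x > 0.0, 1, -1)   # target function
g = lambda x: np.where(x > 2.0, 1, -1)   # "cockamamie" hypothesis: wrong on (0, 2]

# A "rigged" distribution P that almost never lands in (0, 2], where g errs.
sample_P = lambda n: rng.normal(loc=-5.0, scale=1.0, size=n)

D = sample_P(100)                        # training set, drawn i.i.d. from P
E_in = np.mean(g(D) != f(D))             # in-sample error of g

big = sample_P(1_000_000)                # Monte Carlo estimate of E_out under the SAME P
E_out_same_P = np.mean(g(big) != f(big))

other = rng.uniform(-10.0, 10.0, size=1_000_000)  # a DIFFERENT distribution
E_out_other = np.mean(g(other) != f(other))

print(E_in, E_out_same_P)  # both ~ 0: E_in tracks E_out under the same P
print(E_out_other)         # ~ 0.1: under another distribution, g is exposed
[/code]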
__________________
Where everyone thinks alike, no one thinks very much
#3 | 01-30-2013, 02:01 PM
Anne Paulson (Senior Member)

Re: VC dimension independent of probability distribution

"You can rig the probability distribution if you want, but you still have to pick your data points independently from it, and use the same probability distribution to compute E-out ."

Great, that's what I wanted to know. Thanks.