VC dimension independent of probability distribution
Anne Paulson, 01-30-2013, 12:36 PM

In Lecture 7, we learn, I think, that if our hypothesis set has finite VC dimension, then whatever error rate our chosen hypothesis g achieves on the training set will generalize to the whole input space X, subject to the bounds we know. That is, with at least a probability we can compute, the in-sample error rate will be close to the error rate on the whole input space.
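
If I have the statement right, the bound in question is the VC generalization bound (with m_H the growth function and N the number of training examples):

$$ \mathbb{P}\big[\, |E_{\text{in}}(g) - E_{\text{out}}(g)| > \epsilon \,\big] \;\le\; 4\, m_{\mathcal{H}}(2N)\, e^{-\epsilon^2 N / 8} $$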

And we further learn, I think, that this generalization holds independent of the probability distribution used to draw the training examples.

But now I'm confused. Are we assuming that we use the same probability distribution when computing the error rate on the whole input space? That is, we check the error on every single point, but the points that were more likely to appear in the training set get weighted more heavily, so the out-of-sample error is an expectation over X under that same distribution, rather than an error rate over the entire input space with a uniform distribution?
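
In symbols, the reading I'm asking about would be this (taking P as the distribution on X and f as the target function):

$$ E_{\text{out}}(g) \;=\; \mathbb{E}_{x \sim P}\big[\, [\![\, g(x) \neq f(x) \,]\!] \,\big] \;=\; \mathbb{P}_{x \sim P}\big[\, g(x) \neq f(x) \,\big] $$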

Otherwise it doesn't make sense to me. It seems like we could rig the probability distribution that generates the training set to make our cockamamie hypothesis look good.
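
To make my worry concrete, here's a toy sketch (a hypothetical setup I made up, nothing from the lecture): a hypothesis that is wrong on half the input space, with the sampling distribution rigged to concentrate on the half where it happens to be right. The in-sample error tracks the P-weighted error, not the uniform one:

```python
import numpy as np

# Hypothetical sketch: input space X = {0, 1, ..., 99}, and a "cockamamie"
# hypothesis g that agrees with the target f only on the first half of X.
rng = np.random.default_rng(0)
X = np.arange(100)
correct = X < 50                      # where g is right

# A rigged sampling distribution P putting 95% of its weight on the
# region where g happens to be correct.
P = np.where(correct, 0.95 / 50, 0.05 / 50)

# Draw a training set from P and measure the in-sample error of g.
train = rng.choice(X, size=1000, p=P)
E_in = np.mean(~correct[train])

# Two candidate notions of "error on the whole input space":
E_out_P = np.sum(P * ~correct)        # expectation weighted by the same P
E_out_uniform = np.mean(~correct)     # plain uniform average over X

print(f"E_in          = {E_in:.3f}")          # ~0.05, tracks E_out under P
print(f"E_out under P = {E_out_P:.3f}")       # 0.050
print(f"E_out uniform = {E_out_uniform:.3f}") # 0.500, not what E_in estimates
```

So if E_out were defined with uniform weighting, this kind of rigging really would break the bound above, which is why I suspect it must use the same P.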