VC dimension independent of probability distribution
In Lecture 7, I think we learn that if our hypothesis set has a finite VC dimension, then whatever error rate our chosen hypothesis g achieves on the training set will generalize to the whole input space X, subject to the bounds we derived. That is, with at least some probability that we can compute, the error rate on our training set will be close to the error rate on the whole input space.
And we further learn, I think, that this generalization holds independent of the probability distribution we used to choose our input set. But now I'm confused. Are we assuming that we use the same probability distribution when computing the error rate on the whole input space? That is, do we check the error on every single point, but weight more heavily the points that were more likely to end up in the training set, so that it is an expectation over points drawn from that probability distribution rather than an error rate over the entire input space under a uniform distribution? Otherwise it doesn't make sense to me. Seems like we could rig the training set to make our cockamamie hypothesis look good.
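For reference, I believe the bound in question is the VC generalization bound (writing P for the distribution the training points are drawn i.i.d. from, and using the book's notation; this is my paraphrase, not a quote from the lecture):

```latex
% In-sample vs. out-of-sample error, both tied to the same distribution P.
% E_in averages over the N training points x_1, ..., x_N drawn i.i.d. from P;
% E_out is the probability of error on a fresh point x drawn from the same P.
\[
  E_{\text{in}}(g) = \frac{1}{N}\sum_{n=1}^{N} \llbracket g(x_n) \neq f(x_n) \rrbracket ,
  \qquad
  E_{\text{out}}(g) = \mathbb{P}_{x \sim P}\!\left[ g(x) \neq f(x) \right].
\]
% VC generalization bound: with probability at least 1 - delta,
\[
  E_{\text{out}}(g) \;\le\; E_{\text{in}}(g) + \sqrt{\frac{8}{N}\,\ln\frac{4\, m_{\mathcal{H}}(2N)}{\delta}} ,
\]
% and this holds for any P, provided the same P both generates the data and defines E_out.
```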
Re: VC dimension independent of probability distribution
"You can rig the probability distribution if you want, but you still have to pick your data points independently from it, and use the same probability distribution to compute E-out ."
Great, that's what I wanted to know. Thanks.
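Following up with a quick sanity check of the "rigged distribution" worry, here is a small simulation sketch (Python; the target f, the hypothesis g, and the skewed Beta distribution are all made-up illustrations, not anything from the book). It shows E_in tracking E_out when both use the same distribution P, while the uniform-distribution error is a different quantity that the bound says nothing about.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function on [0, 1]: f(x) = 1 if x > 0.5 else 0 (illustrative choice).
f = lambda x: (x > 0.5).astype(int)

# A deliberately cockamamie hypothesis: g(x) = 0 everywhere,
# which only agrees with f for x below 0.5.
g = lambda x: np.zeros_like(x, dtype=int)

# "Rigged" distribution P: concentrate mass near 0, where g happens to agree with f.
def sample_P(n):
    return rng.beta(0.5, 5.0, size=n)   # most samples fall well below 0.5

N = 1000
x_train = sample_P(N)
E_in = np.mean(g(x_train) != f(x_train))

# E_out under the SAME distribution P (estimated from a large fresh sample).
x_fresh_P = sample_P(1_000_000)
E_out_same_P = np.mean(g(x_fresh_P) != f(x_fresh_P))

# Error over the input space under a UNIFORM distribution instead.
x_uniform = rng.uniform(0.0, 1.0, size=1_000_000)
E_out_uniform = np.mean(g(x_uniform) != f(x_uniform))

print(f"E_in            = {E_in:.3f}")          # small: g looks good on the rigged sample
print(f"E_out (same P)  = {E_out_same_P:.3f}")  # also small: the guarantee still holds
print(f"E_out (uniform) = {E_out_uniform:.3f}") # ~0.5: no guarantee for a different distribution
```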