#1
Is ensemble learning with voting an intersection or a union of VC dimensions?
#2
Ensemble learning (covered briefly in Lecture 18) reuses the same hypothesis set by combining the hypotheses in it, so in general the result is neither an intersection nor a union. Since the combination can involve just one hypothesis (replicating the original hypothesis set) or multiple hypotheses (possibly producing new hypotheses), the VC dimension of the resulting hypothesis set is at least as large as the original VC dimension, and can be larger.
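To make the "possibly new hypotheses" point concrete, here is a minimal sketch (my own illustration, not from the lecture) using 1-D decision stumps h(x) = s * sign(x - t) as the base hypothesis set: a majority vote of three stumps produces the labeling + - + on three collinear points, which no single stump can produce, so the ensemble contains genuinely new hypotheses.

```python
# Sketch only: 1-D decision stumps as an assumed base hypothesis set.
import numpy as np

def stump(s, t):
    # A decision stump on the real line: sign s in {-1, +1}, threshold t.
    return lambda x: s * np.sign(x - t)

points = np.array([1.0, 2.0, 3.0])
target = (+1, -1, +1)  # the "+ - +" labeling

# No single stump produces + - + : a stump's output changes sign at most
# once along the line. Enumerate all distinct stump behaviors on 3 points.
thresholds = [0.5, 1.5, 2.5, 3.5]
single = {tuple(int(v) for v in stump(s, t)(points))
          for s in (-1, +1) for t in thresholds}
print(target in single)   # False

# But a majority vote of three stumps does produce it.
h1 = stump(+1, 2.5)   # labels: - - +
h2 = stump(-1, 1.5)   # labels: + - -
h3 = stump(+1, 0.5)   # labels: + + +
votes = h1(points) + h2(points) + h3(points)
print(np.sign(votes)) # [ 1. -1.  1.]  -- a hypothesis no single stump has
```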
__________________
Where everyone thinks alike, no one thinks very much
#3
Following up on this question: if the final ensemble hypothesis puts weights on hypotheses from all the original individual hypothesis sets, does that mean its VC dimension is that of the union of all the individual hypothesis sets?
It seems that, in general, ensemble learning might run into the VC dimension / generalization problem (i.e., similar to 'snooping', where you try a model, see that it doesn't perform well, then try another model, and so on), but since it is used a lot in practice, I'm curious why it doesn't suffer from generalization problems. After doing a little research: is it because the individual hypotheses used in ensemble learning are generally simple and thus have a low VC dimension (and perform OK but not great by themselves), so that when the simple models are combined the VC dimension doesn't get too ridiculous? Thanks
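For what it's worth, here is a quick sanity check along those lines (my own sketch, with 1-D decision stumps as the assumed simple base model, which have VC dimension 2): exhaustively enumerating majority votes of three stumps shows that they shatter 4 points but not 5, so the VC dimension grows from 2 to 4. Bigger, but not "ridiculous", which matches your intuition about simple base hypotheses.

```python
# Sketch only: exhaustively enumerate majority-vote labelings of 1-D stumps.
from itertools import combinations_with_replacement
import numpy as np

def vote_labelings(points, n_voters=3):
    # All distinct labelings achievable by a majority vote of n_voters stumps.
    # Candidate thresholds: one below, one between each pair, one above.
    ts = ([points[0] - 1]
          + [(a + b) / 2 for a, b in zip(points, points[1:])]
          + [points[-1] + 1])
    stumps = [tuple(int(s * np.sign(x - t)) for x in points)
              for s in (-1, +1) for t in ts]
    out = set()
    for combo in combinations_with_replacement(stumps, n_voters):
        # Odd number of +/-1 votes, so the sum is never zero.
        out.add(tuple(int(np.sign(sum(col))) for col in zip(*combo)))
    return out

four = [1.0, 2.0, 3.0, 4.0]
five = [1.0, 2.0, 3.0, 4.0, 5.0]
print(len(vote_labelings(four)))  # 16 = 2^4: four points are shattered
print(len(vote_labelings(five)))  # < 32: five points are not shattered
```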
#4
Quote:
__________________
When one teaches, two learn.