Interpretations on Q8 and Q9?
Dear staff,
Would you please provide an interpretation of the results of Q8 and Q9?
Specifically, why does SVM outperform PLA more often as the training set grows?
Someone mentioned the VC dimension in a previous thread. However, I noticed that the estimated VC dimension of SVM (taken as the average number of support vectors) actually increases as the training set grows:
N     dVC (avg # of support vectors)   P[SVM outperforms PLA]
10    2.519                            0.544
100   2.878                            0.668
500   2.905                            0.685
Meanwhile, the dVC of PLA stays at 3 in all cases. So I suspect dVC may not be a very good lens for this question.
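For reference, here is the kind of experiment I ran to get the numbers above. This is only my sketch of the setup, since the exact homework spec may differ: the target is a random line in [-1, 1]^2, PLA is the standard perceptron update, and the "SVM" is scikit-learn's SVC with a linear kernel and a very large C to approximate hard margin. The helper names (random_target, pla, trial) are my own.

```python
# Hypothetical sketch of the Q8/Q9 experiment (my assumed setup, not the official one):
# target f is a random line in [-1,1]^2; train PLA and a near-hard-margin linear SVM
# on N labeled points, estimate out-of-sample error on a fresh test set, and count
# the SVM's support vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def random_target():
    """Random linear target: sign of the signed distance to a line through p and q."""
    p, q = rng.uniform(-1, 1, (2, 2))
    w = np.array([q[1] - p[1], p[0] - q[0]])  # normal vector to the line pq
    b = -w @ p
    return lambda X: np.sign(X @ w + b)

def pla(X, y):
    """Perceptron: cycle through misclassified points until all are correct.
    Terminates because the data are linearly separable by construction."""
    Z = np.hstack([np.ones((len(X), 1)), X])  # prepend bias coordinate
    w = np.zeros(Z.shape[1])
    while True:
        mis = np.sign(Z @ w) != y
        if not mis.any():
            return w
        i = np.flatnonzero(mis)[0]
        w += y[i] * Z[i]

def trial(N, n_test=1000):
    f = random_target()
    while True:  # resample until both classes appear in the training set
        X = rng.uniform(-1, 1, (N, 2))
        y = f(X)
        if len(set(y)) == 2:
            break
    Xt = rng.uniform(-1, 1, (n_test, 2))
    yt = f(Xt)
    w = pla(X, y)
    Zt = np.hstack([np.ones((n_test, 1)), Xt])
    e_pla = np.mean(np.sign(Zt @ w) != yt)
    svm = SVC(kernel="linear", C=1e6).fit(X, y)  # large C ~ hard margin
    e_svm = np.mean(svm.predict(Xt) != yt)
    return e_svm < e_pla, len(svm.support_vectors_)

wins, svs = zip(*(trial(10) for _ in range(50)))
print(f"P[SVM beats PLA] ~ {np.mean(wins):.2f}, avg #SV = {np.mean(svs):.2f}")
```

With more repetitions and N in {10, 100, 500} this produces numbers in the same ballpark as my table, which is why I used the average support-vector count as a proxy for SVM's effective dVC.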