#1
Dear staff,
Would you please offer an interpretation of the results of Q8 and Q9? Namely, why does SVM outperform PLA as the training set grows? Someone mentioned the VC dimension in one of the previous threads. However, I noticed that the VC dimension of SVM actually increases as the training set grows:

N     dVC (avg. # of support vectors)   probability of outperforming PLA
10    2.519                             0.544
100   2.878                             0.668
500   2.905                             0.685

whereas the dVC of PLA remains 3 in all cases. I think dVC may not be a very good approach to this question.
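For reference, here is a minimal sketch of the experiment I have in mind. The random-line target in [-1, 1]^2, PLA starting from zero weights, and approximating a hard-margin SVM with a very large C in scikit-learn are my assumptions about the setup, not the official specification:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def random_target():
    # Random line through two points in [-1, 1]^2; f(x) = sign(w . [1, x1, x2]).
    p, q = rng.uniform(-1, 1, (2, 2))
    return np.array([p[1] * q[0] - p[0] * q[1], q[1] - p[1], p[0] - q[0]])

def sample(w, n):
    X = rng.uniform(-1, 1, (n, 2))
    return X, np.sign(np.c_[np.ones(n), X] @ w)

def pla(X, y):
    # Perceptron learning from zero weights; converges on separable data.
    Xa = np.c_[np.ones(len(X)), X]
    w = np.zeros(3)
    while True:
        mis = np.flatnonzero(np.sign(Xa @ w) != y)
        if mis.size == 0:
            return w
        i = rng.choice(mis)
        w = w + y[i] * Xa[i]

def run(n_train, trials=200, n_test=10_000):
    svm_wins, sv_total = 0, 0
    for _ in range(trials):
        w_f = random_target()
        X, y = sample(w_f, n_train)
        while len(set(y)) < 2:          # resample degenerate one-class sets
            w_f = random_target()
            X, y = sample(w_f, n_train)
        Xt, yt = sample(w_f, n_test)
        w_p = pla(X, y)
        e_pla = np.mean(np.sign(np.c_[np.ones(n_test), Xt] @ w_p) != yt)
        svm = SVC(kernel="linear", C=1e10).fit(X, y)  # large C ~ hard margin
        e_svm = np.mean(svm.predict(Xt) != yt)
        svm_wins += e_svm < e_pla
        sv_total += svm.n_support_.sum()
    return svm_wins / trials, sv_total / trials

for n in (10, 100, 500):
    p_win, avg_sv = run(n)
    print(f"N={n}: P(SVM beats PLA)={p_win:.3f}, avg #SV={avg_sv:.3f}")
```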
#2
Just to clarify: is your concern that SVM continues to outperform PLA as the training set grows, or that it starts to outperform PLA as the training set grows?
__________________
Where everyone thinks alike, no one thinks very much
#3
My concern is that the probability of "SVM outperforms PLA" gets higher and higher as the training set grows. I find this a bit interesting because the VC dimension of SVM seems to approach that of PLA as the training set grows.
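One way to reconcile this, rather than counting dVC directly, is the support-vector bound on out-of-sample error. If I remember the lecture slide correctly (the exact denominator is from memory), it is

$$\mathbb{E}\big[E_{\text{out}}\big] \;\le\; \frac{\mathbb{E}\big[\#\,\text{support vectors}\big]}{N-1}$$

Plugging in the averages from my table above: 2.519/9 ≈ 0.28, 2.878/99 ≈ 0.029, and 2.905/499 ≈ 0.0058. So the bound tightens as N grows even though the support-vector count creeps upward, which would be consistent with SVM winning more often on larger training sets.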
Tags: pla, svm, vc dimension