Interpretations on Q8 and Q9?
Dear staff,
Would you please offer an interpretation of the results of Q8 and Q9? Namely, why does SVM outperform PLA more often as the training set grows? Someone mentioned the VC dimension in a previous thread. However, I noticed that the effective VC dimension of SVM (the number of support vectors) actually increases as the training set grows:

N     dVC (avg # of support vectors)   P(SVM outperforms PLA)
10    2.519                            0.544
100   2.878                            0.668
500   2.905                            0.685

whereas the dVC of PLA remains 3 in all cases. I think dVC may not be a very good approach to this question.
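In case it helps the discussion, here is a minimal sketch of how one might reproduce this kind of experiment. All function names are mine, and I am approximating the hard-margin SVM with scikit-learn's soft-margin SVC using a very large C, which is not necessarily what the homework intends:

```python
# Sketch of a Q8/Q9-style experiment (my own reconstruction, not official
# homework code): the target f is a random line in [-1, 1]^2; PLA and a
# (approximately) hard-margin linear SVM are trained on the same N points,
# and E_out is estimated on fresh test points.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)

def random_target():
    """Random line through two uniform points; returns weights (w0, w1, w2)."""
    p, q = rng.uniform(-1, 1, (2, 2))
    return np.array([q[0] * p[1] - p[0] * q[1], q[1] - p[1], p[0] - q[0]])

def labels(w, X):
    """Classify points X (rows) with the line w, using a prepended bias term."""
    return np.sign(np.c_[np.ones(len(X)), X] @ w)

def pla(X, y):
    """Perceptron learning algorithm on separable data; returns weights."""
    Xb = np.c_[np.ones(len(X)), X]
    w = np.zeros(3)
    while True:
        mis = np.flatnonzero(np.sign(Xb @ w) != y)
        if mis.size == 0:
            return w
        i = rng.choice(mis)          # update on a random misclassified point
        w = w + y[i] * Xb[i]

def one_run(N, n_test=2000):
    """Returns (E_out_pla, E_out_svm, #support vectors) for one random f."""
    wf = random_target()
    while True:                       # resample until both classes appear
        X = rng.uniform(-1, 1, (N, 2))
        y = labels(wf, X)
        if abs(y.sum()) < N:
            break
    w_pla = pla(X, y)
    svm = SVC(kernel="linear", C=1e6).fit(X, y)   # large C ~ hard margin
    Xt = rng.uniform(-1, 1, (n_test, 2))
    yt = labels(wf, Xt)
    e_pla = np.mean(labels(w_pla, Xt) != yt)
    e_svm = np.mean(svm.predict(Xt) != yt)
    return e_pla, e_svm, svm.n_support_.sum()

runs = [one_run(10) for _ in range(30)]
frac_svm_better = np.mean([s < p for p, s, _ in runs])
avg_sv = np.mean([nsv for _, _, nsv in runs])
print(f"SVM beats PLA in {frac_svm_better:.0%} of runs; avg #SV = {avg_sv:.2f}")
```

With more runs and larger N one can tabulate the average number of support vectors and the fraction of runs in which SVM wins, as in the table above; the exact numbers will of course differ with the random seed.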
Re: Interpretations on Q8 and Q9?
My concern is that the probability of "SVM outperforms PLA" gets higher and higher as the training set grows. I find this a bit interesting because the VC dimension of SVM seems to approach that of PLA as the training set grows.
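For what it's worth, a bound that seems to match this experiment better than the raw dVC of linear separators is the leave-one-out support-vector bound from the SVM lectures (stated here from memory, so please check the exact form in the book/slides):

```latex
\[
\mathbb{E}\left[E_{\text{out}}\right] \;\le\; \frac{\mathbb{E}\left[\#\,\text{support vectors}\right]}{N-1}
\]
```

Since the measured number of support vectors grows only from about 2.5 to 2.9 while N grows from 10 to 500, this ratio shrinks, which would be consistent with SVM's advantage increasing with N even though its effective dVC approaches that of PLA.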
The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.