Re: Question 13
Using x = [1 0; 0 1; 0 -1.00002; -1 0; 0 2.0001; 0 -2; -2 0], I still get one fewer support vector from qp than from libsvm (i.e., the same values I get without perturbing). This remains the case when perturbing only one support vector. The most notable change is that the second weight entry grows, although the first entry and b change as well.
Also, thanks to fgpancorbo for the code for getting w and b from libsvm. Useful for the future.
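For reference, here is roughly the kind of comparison I mean, redone in Python (cvxopt for the dual QP, scikit-learn's SVC as the libsvm interface). This is only a sketch: the labels y are my reading of the problem statement (not restated in this thread), and the 1e-5 cutoff on the alphas is arbitrary, which is exactly where a "one fewer support vector" discrepancy can come from.
Code:
# Sketch only: assumed data/labels, hard-margin SVM with kernel K(x,x') = (1 + x.x')^2
import numpy as np
from cvxopt import matrix, solvers
from sklearn.svm import SVC

X = np.array([[1, 0], [0, 1], [0, -1], [-1, 0], [0, 2], [0, -2], [-2, 0]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1, 1], dtype=float)   # assumed labels
K = (1.0 + X @ X.T) ** 2                               # kernel matrix

# Dual QP: minimize (1/2) a'Qa - 1'a  subject to  a >= 0 and y'a = 0
n = len(y)
Q = np.outer(y, y) * K + 1e-10 * np.eye(n)             # tiny ridge for numerical stability
sol = solvers.qp(matrix(Q), matrix(-np.ones(n)),
                 matrix(-np.eye(n)), matrix(np.zeros(n)),
                 matrix(y.reshape(1, -1)), matrix(0.0))
alpha = np.ravel(sol['x'])
print('QP support vectors:', np.sum(alpha > 1e-5))     # count depends on this cutoff

# libsvm via scikit-learn; large C approximates the hard margin
clf = SVC(C=1e6, kernel='poly', degree=2, gamma=1.0, coef0=1.0).fit(X, y)
print('libsvm support vectors:', len(clf.support_))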
Re: Question 13
Quote:
X6 and X7 map to the same point in z space, i.e., X6 = (0, -2) and X7 = (-2, 0) both map to (3, 5). I wonder if this has any effect on the computation.
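If it helps, assuming the transform being referred to is z1 = x2^2 - 2*x1 - 1, z2 = x1^2 - 2*x2 + 1 (my guess at which transform is meant), the arithmetic checks out: X6 = (0, -2) gives (4 - 0 - 1, 0 + 4 + 1) = (3, 5), and X7 = (-2, 0) gives (0 + 4 - 1, 4 - 0 + 1) = (3, 5).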
Re: Question 13
Using the MATLAB libsvm interface, I perturbed [0, -1] to [0, -0.94], and it reduced the number of support vectors by one. The w and b agree with what others have seen for libsvm.
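In case it is useful to anyone else, here is a rough sketch of one way to pull w and b out of a fitted libsvm model (shown with the scikit-learn interface rather than the MATLAB one). The explicit transform is just the standard expansion of (1 + x.x')^2, and the labels are my reading of the problem, so treat this as a sketch, not the code referenced above.
Code:
# Sketch: recover w and b in the explicit z-space of K(x,x') = (1 + x.x')^2
import numpy as np
from sklearn.svm import SVC

def phi(X):
    # z = (1, sqrt(2)*x1, sqrt(2)*x2, x1^2, x2^2, sqrt(2)*x1*x2)
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), np.sqrt(2)*x1, np.sqrt(2)*x2,
                            x1**2, x2**2, np.sqrt(2)*x1*x2])

X = np.array([[1, 0], [0, 1], [0, -0.94], [-1, 0], [0, 2], [0, -2], [-2, 0]])  # includes the perturbed point
y = np.array([-1, -1, -1, 1, 1, 1, 1])                                         # assumed labels

clf = SVC(C=1e6, kernel='poly', degree=2, gamma=1.0, coef0=1.0).fit(X, y)

# dual_coef_ stores alpha_i * y_i for the support vectors, so w = sum_i alpha_i y_i z_i
w = clf.dual_coef_ @ phi(clf.support_vectors_)
b = clf.intercept_
print('w =', w.ravel(), 'b =', b)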
Re: Question 13
I used libsvm and got this question right.
Re: Question 13
As a quick (read: minimal-effort) check of model equivalence, I compared the predictions on 1,000,000 randomly selected points within [-3, 3] × [-3, 3] using the support vectors from both Octave/QP and Python/libsvm. Only three points were classified differently, despite the difference in the number of support vectors returned by the two approaches. I'm confident an analytical comparison of the support vectors would prove the models equivalent, but it hardly seems necessary given the empirical result.
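For anyone who wants to repeat it, here is a minimal sketch of that check (my own reconstruction, with a smaller default sample; pass in any two fitted models' predict functions):
Code:
import numpy as np

def disagreement(predict_a, predict_b, n=100000, lo=-3.0, hi=3.0, seed=0):
    # Fraction of uniformly random points in [lo, hi]^2 that the two classifiers label differently
    rng = np.random.default_rng(seed)
    pts = rng.uniform(lo, hi, size=(n, 2))
    return np.mean(predict_a(pts) != predict_b(pts))

# e.g. disagreement(qp_model.predict, libsvm_model.predict)  # hypothetical model names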
Re: Question 13
I had quite some problems solving this one, and it forced me to play around with different approaches (qp, libsvm). One more approach to consider comes from Lecture 15, slide 5:
For the kernel used in Question 13, there is a corresponding transformation, given explicitly on that slide. So why not give it a try? I got some confidence in the result after reading the slide's title :).
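Here is a rough sketch of what I mean, assuming the transform on that slide is the usual expansion of (1 + x.x')^2, and using scikit-learn's linear SVC in place of qp/libsvm (data and labels again assumed from the problem):
Code:
# Sketch: work in z-space explicitly and run a linear hard-margin SVM there
import numpy as np
from sklearn.svm import SVC

def phi(X):
    # Explicit transform whose inner product gives (1 + x.x')^2
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), np.sqrt(2)*x1, np.sqrt(2)*x2,
                            x1**2, x2**2, np.sqrt(2)*x1*x2])

X = np.array([[1, 0], [0, 1], [0, -1], [-1, 0], [0, 2], [0, -2], [-2, 0]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1, 1])   # assumed labels
lin = SVC(C=1e6, kernel='linear').fit(phi(X), y)
print('support vectors (z-space, linear kernel):', len(lin.support_))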