Thread: Q9, SVM vs PLA
#5, 05-21-2013, 06:51 PM
catherine
Re: Q9, SVM vs PLA

Same thing here. I used ipop from the kernlab package in R. I checked Ein and b, and they behave as expected, and I'm getting the expected number of support vectors. I also plotted the results for one iteration, and they match the figures I'm getting. Still, the performance of my SVM model is only marginally better than that of a perceptron-based model, especially for N = 100.
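In case it helps anyone spot a bug, here's roughly the setup I mean, as a minimal sketch (the 1e6 upper bound standing in for "unbounded", the tiny ridge on the kernel matrix, and the 1e-5 support-vector cutoff are arbitrary choices on my part):

Code:
library(kernlab)

svm_ipop <- function(X, y) {
  # X: N x 2 matrix of training points, y: vector of +/-1 labels
  N <- nrow(X)
  Q <- (y %*% t(y)) * (X %*% t(X))      # H in ipop's notation: y_i y_j x_i . x_j
  Q <- Q + 1e-8 * diag(N)               # tiny ridge for numerical stability
  sol <- ipop(c = rep(-1, N),           # min -sum(alpha) + 1/2 alpha' Q alpha
              H = Q,
              A = t(y), b = 0, r = 0,   # equality constraint sum(alpha_i y_i) = 0
              l = rep(0, N),            # alpha_i >= 0
              u = rep(1e6, N))          # effectively unbounded (hard margin)
  alpha <- primal(sol)
  sv <- which(alpha > 1e-5)             # indices of the support vectors
  w <- colSums(alpha[sv] * y[sv] * X[sv, , drop = FALSE])
  b <- mean(y[sv] - X[sv, , drop = FALSE] %*% w)
  list(w = w, b = b, n_sv = length(sv))
}

ipop minimizes c'x + 1/2 x'Hx subject to b <= Ax <= b + r and l <= x <= u, so the dual maximization gets flipped into a minimization with c = -1.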

Here are the results I'm getting:
For N = 10: SVM fares better than PLA in 63.9% of the iterations; mean Eout(SVM) = 0.09050551, whereas mean Eout(PLA) = 0.1221962.
For N = 100: even though SVM fares better than PLA in 56.9% of the iterations, mean Eout(SVM) = 0.01877277, whereas mean Eout(PLA) = 0.01374174.

In a way these results (the fact that PLA catches up with SVM as the training set grows) match my expectations, though I'm a bit disappointed by the SVM's lack of flamboyance in this particular case. Is this because the data is completely random? They don't match the answer key, though, according to which the SVM's performance relative to PLA improves with the number of points in the training set.

Note: not sure whether this is relevant, but I'm using a test set of 1,000 data points.
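For concreteness, here's roughly the shape of my experiment loop, again as a minimal sketch (run_pla is a bare-bones perceptron starting from zero weights; resampling until both classes appear is just one way to handle degenerate draws):

Code:
run_pla <- function(X, y, max_iter = 100000) {
  Z <- cbind(1, X)                       # prepend bias coordinate x0 = 1
  w <- rep(0, ncol(Z))
  for (i in seq_len(max_iter)) {
    mis <- which(sign(Z %*% w) != y)     # misclassified points
    if (length(mis) == 0) break
    j <- mis[sample.int(length(mis), 1)] # pick a random misclassified point
    w <- w + y[j] * Z[j, ]
  }
  w
}

run_one <- function(N, N_test = 1000) {
  # random target line through two uniform points in [-1, 1]^2
  p <- matrix(runif(4, -1, 1), 2, 2)
  f <- function(X) sign((p[2, 1] - p[1, 1]) * (X[, 2] - p[1, 2]) -
                        (p[2, 2] - p[1, 2]) * (X[, 1] - p[1, 1]))
  repeat {                               # resample until both classes appear
    X <- matrix(runif(2 * N, -1, 1), N, 2); y <- f(X)
    if (length(unique(y)) == 2) break
  }
  Xt <- matrix(runif(2 * N_test, -1, 1), N_test, 2); yt <- f(Xt)

  svm <- svm_ipop(X, y)                  # from the sketch above
  w_pla <- run_pla(X, y)
  c(svm = mean(sign(Xt %*% svm$w + svm$b) != yt),
    pla = mean(sign(cbind(1, Xt) %*% w_pla) != yt))
}

res <- t(replicate(1000, run_one(N = 10)))
mean(res[, "svm"] < res[, "pla"])        # fraction of runs where SVM wins
colMeans(res)                            # mean Eout for each method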