How many support vectors is too many?
Anne Paulson (Senior Member), 03-03-2013, 08:21 AM

In the lectures, it's mentioned that often with an SVM you get the happy news that you have just a few support vectors, and you therefore know that your effective VC dimension is small and you can expect good generalization.
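(If I remember the lecture right, the relevant result is the leave-one-out bound,

E[E_out] <= E[# of support vectors] / (N - 1),

where N is the number of training examples. So it's really the number of support vectors relative to N that carries the good news.)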

But how many support vectors do you need before you're unhappy? Let's suppose you have a dataset with 7000 observations and 550 variables, an easy supposition for me because I do. Suppose you run an SVM and discover that with a linear kernel you have some 700 support vectors, and with a radial (RBF) kernel you have some 2000.
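Concretely, here's a sketch of what I'm running (assuming scikit-learn, with random placeholder data standing in for my real 7000 x 550 dataset so the sketch runs quickly):

[CODE]
# Sketch only: count support vectors for a linear and an RBF kernel.
# Assumes scikit-learn; the random data is a stand-in for my real dataset.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 50))  # placeholder for the real 7000 x 550 matrix
y = rng.integers(0, 2, size=2000)    # placeholder binary labels

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0).fit(X, y)
    # n_support_ holds the number of support vectors per class
    print(kernel, "support vectors:", clf.n_support_.sum())
[/CODE]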

That seems like a lot; nearly half the points are support vectors. But if you attack the problem with another machine learning method, like neural nets or multinomial regression, you will also have one or two thousand parameters, or maybe more, so you will have a big VC dimension there too. So should you be happy with the 2000 support vectors as long as cross-validation suggests good generalization? Should you be happy regardless of the number of support vectors, provided cross-validation shows good generalization? Or does the VC dimension not matter at all if the cross-validation news is good?
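By "good generalization in cross-validation" I mean something like this (same placeholder data, again assuming scikit-learn):

[CODE]
# Sketch only: estimate out-of-sample accuracy by 10-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 50))  # placeholder for the real 7000 x 550 matrix
y = rng.integers(0, 2, size=2000)    # placeholder binary labels

scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=10)
print("mean CV accuracy: %.3f" % scores.mean())
[/CODE]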