02-22-2013, 04:38 PM
htlin
Join Date: Aug 2009
Location: Taipei, Taiwan
Posts: 601
Re: Why not use soft margin SVM every time?

Originally Posted by:
Thank you very much, Professor. You are right, the QP solution would probably not return anything reasonable.
As I mentioned earlier, we don't know about the linear separability of the data, or at least, we can't know without looking, which would amount to snooping.
And it is in those cases that I feel that the technique of cross validation is invaluable, as it can help one choose among the different kinds of models.
In one of your earlier lectures, you had indicated that the linear models work surprisingly well in most real cases, and we even had a linear model (logistic regression) to handle noise. Do SVMs work well in noise, too? If they do, I wonder why anyone would use the traditional linear models when one can use the power of SVMs.
Linear (soft-margin) SVM is indeed a competitive model in the linear family. In my own experience, linear SVM and logistic regression perform comparably in practice. In fact, if you examine the SVM error function, as we do in Exercise 3.9 of the LFD book, you'll see its similarity to that of (regularized) logistic regression.
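As a small concrete illustration of that similarity (a sketch, not anything from the book's code): the SVM hinge error max(0, 1 - ys) and the base-2 logistic error log2(1 + exp(-ys)) are both convex upper bounds of the 0/1 classification error, and they track each other closely as functions of the signed margin ys. The grid of margin values below is arbitrary, chosen only for illustration.

```python
import numpy as np

# Signed margin values y * (w^T x): negative means a misclassification.
margins = np.linspace(-2.0, 2.0, 9)

# SVM hinge error: max(0, 1 - y s).
hinge = np.maximum(0.0, 1.0 - margins)

# Base-2 logistic (cross-entropy) error: log2(1 + exp(-y s)).
# The base-2 scaling makes it, like the hinge, an upper bound
# of the 0/1 error.
logistic = np.log2(1.0 + np.exp(-margins))

# 0/1 error: 1 on a mistake (margin <= 0), 0 otherwise.
zero_one = (margins <= 0).astype(float)

# Both convex surrogates upper-bound the 0/1 error everywhere.
assert np.all(hinge >= zero_one - 1e-12)
assert np.all(logistic >= zero_one - 1e-12)
```

At margin 0 both surrogates equal exactly 1, and both decay toward 0 for large positive margins, which is the qualitative similarity the exercise highlights.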

In general if I have something that needs soft outputs I'll use logistic regression, and in other cases I prefer SVM. But the preference is more personal than objective. :-)
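To make the "soft outputs" point concrete: given the same linear score w·x, logistic regression passes it through a sigmoid to get a value in (0, 1) that can be read as a probability, while an SVM's classification only keeps the sign. The weights and input below are made up purely for illustration, not learned from any data.

```python
import numpy as np

def sigmoid(s):
    """Logistic function mapping a raw score to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-s))

# Hypothetical weights and one input point (illustrative only).
w = np.array([0.8, -0.5])
x = np.array([1.0, 2.0])

score = float(w @ x)   # raw linear score; an SVM would just take its sign
prob = sigmoid(score)  # logistic regression reads this as P(y = +1 | x)

hard_label = 1 if score > 0 else -1  # all a plain SVM output gives you
```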

From the perspective of optimization, logistic regression and linear regression are arguably easier problems than linear SVM, by the way. But nowadays such a difference in optimization difficulty is usually not a big deal.
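For instance, linear regression needs no iterative solver at all: the least-squares weights come from a single pseudo-inverse, whereas the soft-margin SVM requires solving a constrained QP. A minimal sketch on synthetic data (the "true" weights and noise level are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 50 points in 3 dimensions with known true weights.
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(50, 3))
y = X @ w_true + 0.1 * rng.normal(size=50)

# Closed-form least-squares solution: w = X^+ y (pseudo-inverse).
# No QP, no iterations -- one linear-algebra call.
w_lin = np.linalg.pinv(X) @ y
```

With this little noise, `w_lin` recovers the true weights up to a small estimation error.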

Nonlinear SVM is another story. With the power of kernels, the danger of overfitting needs to be handled more carefully through parameter/kernel selection, and the optimization problem becomes much harder to solve. Those are part of the reasons that the linear family (including the linear SVM) can and should still be a first choice.
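One way to see why kernel parameter selection matters so much: with a Gaussian (RBF) kernel, the width parameter gamma directly controls how "local", and hence how overfit-prone, the resulting hypotheses are. A small hedged sketch (the three 1-D points and the two gamma values are arbitrary choices for illustration):

```python
import numpy as np

def rbf_kernel(X1, X2, gamma):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||x_i - z_j||^2)."""
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

X = np.array([[0.0], [1.0], [2.0]])  # three 1-D points, for illustration

# Small gamma: distinct points still "see" each other -> smoother hypotheses.
K_wide = rbf_kernel(X, X, gamma=0.1)

# Large gamma: off-diagonal kernel values collapse toward 0, so the model
# can fit (and overfit) each point almost independently -- exactly the kind
# of parameter that must be chosen by validation rather than by peeking.
K_narrow = rbf_kernel(X, X, gamma=10.0)
```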

Hope this helps.
When one teaches, two learn.