02-22-2013, 05:53 AM
kartikeya_t@yahoo.com
Why not use soft margin SVM every time?

Hello all,
I wanted to get your thoughts on something the professor said at the end of Lecture 15. He said that if the data set is not linearly separable, we can still try the hard-margin SVM (notwithstanding the fact that the quadratic programming package would probably complain in that case) and just check the results. I assume he meant that E_in would turn out to be rather dismal in that case.
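To make the "dismal E_in" concrete: a hard margin can be approximated by taking C very large in a soft-margin solver. A minimal sketch, assuming scikit-learn and a synthetic overlapping data set (both my choices for illustration, not from the lecture):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Overlapping classes with label noise, so no separating hyperplane exists.
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, class_sep=0.3, flip_y=0.1,
                           random_state=0)

# A huge C approximates the hard-margin limit (C -> infinity).
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)

E_in = np.mean(clf.predict(X) != y)  # in-sample error
print(f"E_in with (approximately) hard-margin SVM: {E_in:.2f}")
```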
He also said later that the value of C in the soft-margin SVM can be chosen by cross-validation.
My question comes from a long-held feeling of general discomfort about non-linear transforms (NLTs). If somebody decides to apply an NLT, they must have done so by data snooping; otherwise, how would they know that a linear model wouldn't fit? The idea of using validation to decide the number of terms in a candidate NLT seems to offer some respite here, by indicating, for example, that the linear model suffices (lowest cross-validation error on the linear model). A sketch of that comparison is below.
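Validation can arbitrate between a linear model and an NLT without snooping, since every candidate is scored only on held-out folds. A minimal sketch, assuming scikit-learn and synthetic data (my choices, not from the lecture); the RBF kernel stands in for a generic non-linear transform:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

candidates = {
    "linear kernel": SVC(kernel="linear", C=1.0),
    "RBF kernel (non-linear)": SVC(kernel="rbf", C=1.0),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold CV accuracy
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
# If the linear kernel scores as well as the non-linear one,
# that is evidence the linear model suffices.
```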
In the same vein, could we "always" use the soft-margin SVM and use cross-validation to pick C? That way, if the data set is in fact linearly separable, cross-validation would hopefully drive C very large (in the limit C -> infinity, the soft-margin problem reduces to the hard-margin one)?
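If that intuition is right, it should show up in a toy experiment. A minimal sketch, assuming scikit-learn (SVC, GridSearchCV) and a synthetic linearly separable data set, none of which come from the original post:

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Well-separated blobs: a linearly separable toy set.
X, y = make_blobs(n_samples=100, centers=2, cluster_std=0.5, random_state=1)

# Let cross-validation pick C from a wide grid; on separable data,
# large C values (approaching the hard-margin limit) should do well.
search = GridSearchCV(SVC(kernel="linear"),
                      param_grid={"C": [0.01, 0.1, 1, 10, 100, 1000]},
                      cv=5)
search.fit(X, y)
print("C chosen by cross-validation:", search.best_params_["C"])
```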
Thanks for your time.