#1
Hello all,
I wanted to get your thoughts on something the professor said at the end of Lecture 15. He said that if the data set is not linearly separable, we can still try the hard-margin SVM (notwithstanding the fact that the quadratic programming package would probably complain in that case) and just check the results. I assume he meant that E_in would turn out to be rather dismal in that case. He also said later that the value of C in the soft-margin SVM can be obtained from cross validation.

My question comes from a long-held feeling of general discomfort about non-linear transformations (NLTs). If somebody decides to use an NLT, she/he must have done so by data snooping; otherwise, how would she know that a linear model wouldn't fit? The idea of using validation to decide the number of terms in a candidate NLT seems to offer some respite here, by indicating, for example, that the linear model suffices (lowest cross-validation error on the linear model).

In the same vein, could we "always" use soft-margin SVMs and use cross validation to choose C? That way, if the data set is in fact linearly separable, the selected C would hopefully come out very large, effectively reducing the problem to the hard-margin one. Thanks for your time.
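For concreteness, here is a minimal sketch of the proposed procedure, assuming scikit-learn; the dataset is a synthetic stand-in for real data, and the grid of C values is illustrative only:

```python
# Soft-margin linear SVM with C chosen by cross validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the actual data set.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Small C tolerates margin violations; large C approaches the hard margin.
grid = GridSearchCV(
    SVC(kernel="linear"),
    param_grid={"C": [0.01, 0.1, 1, 10, 100, 1000]},
    cv=5,
)
grid.fit(X, y)
print("best C:", grid.best_params_["C"])
print("cross-validation accuracy:", grid.best_score_)
```

If the data happens to be separable, the search would tend to settle on a large C, which is the hard-margin limit.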
#2
The remark about using hard-margin SVM in non-separable cases was not a recommendation. It was to point out that if you use hard-margin SVM and the data turns out to be non-separable after all, this can be easily detected by checking the solution (if QP returns one) on the training data points. This relieves you from the need to determine linear separability before you apply hard-margin SVM. You can of course use soft-margin SVM in all cases.
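For illustration, a sketch of that check, assuming scikit-learn (which has no exact hard-margin mode, so a very large C is used as an approximation; the data set is a synthetic stand-in):

```python
# Detect non-separability by checking the "hard-margin" fit on the
# training points themselves.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

clf = SVC(kernel="linear", C=1e10)  # very large C approximates hard margin
clf.fit(X, y)

# Any misclassified training point means the data was not linearly
# separable, so the hard-margin solution is not actually valid.
E_in = np.mean(clf.predict(X) != y)
print("E_in on training data:", E_in)
print("looks separable:", E_in == 0.0)
```

A stricter check would also verify that every training point satisfies the margin condition y_n (w·x_n + b) ≥ 1, not merely correct classification.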
__________________
Where everyone thinks alike, no one thinks very much
#3
Thank you very much, Professor. You are right; the QP solver would probably not return anything reasonable.
As I mentioned earlier, we don't know about the linear separability of the data, or at least, we can't know without looking, which would amount to snooping. It is in those cases that I feel the technique of cross validation is invaluable, as it can help one choose among different kinds of models. In one of your earlier lectures, you indicated that linear models work surprisingly well in most real cases, and we even had a linear model (logistic regression) to handle noise. Do SVMs also work well in the presence of noise? If they do, I wonder why anyone would use the traditional linear models when one can use the power of SVMs.
#4
Quote:
In general, if I have something that needs soft outputs, I'll use logistic regression; in other cases I prefer SVM. But the preference is more personal than objective. :-)

From the perspective of optimization, logistic regression and linear regression are arguably easier problems than the linear SVM, by the way. But nowadays such a difference in optimization difficulty is usually not a big deal. Nonlinear SVM is another story: with the power of kernels, the overfitting problem needs to be handled more carefully through parameter/kernel selection, and the optimization problem becomes much harder to solve. Those are part of the reasons that the linear family (including the linear SVM) can and should still be a first choice. Hope this helps.
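As a small illustration of the "soft outputs" point, assuming scikit-learn (the data is again a synthetic stand-in):

```python
# Logistic regression produces probabilities directly; a linear SVM
# produces signed scores (margins), not probabilities.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=1)

logit = LogisticRegression().fit(X, y)
print(logit.predict_proba(X[:3]))    # per-point class probabilities

svm = SVC(kernel="linear").fit(X, y)
print(svm.decision_function(X[:3]))  # signed scores w.x + b
```

(scikit-learn's SVC can also emit probabilities via probability=True, but those come from an extra calibration step fitted on top of the margins.)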
__________________
When one teaches, two learn.
#5
Quote:
In practice it would be difficult to find true hard-margin SVM solvers, and indeed soft-margin works better than hard-margin most of the time (in some sense, soft-margin includes hard-margin as a "special case" when the data is separable). So the procedure you describe is indeed what most people do when using SVMs. Hope this helps.
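The "special case" remark can be checked numerically; a sketch, assuming scikit-learn and a deliberately separable toy set:

```python
# On separable data, growing C drives margin violations to zero,
# so the soft-margin solution converges to the hard-margin one.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated blobs: a linearly separable toy set.
X, y = make_blobs(n_samples=100, centers=2, cluster_std=0.5, random_state=0)
y_pm = 2 * y - 1  # relabel {0, 1} -> {-1, +1} for margin arithmetic

for C in [0.01, 1.0, 100.0, 1e4]:
    clf = SVC(kernel="linear", C=C).fit(X, y_pm)
    margins = y_pm * clf.decision_function(X)
    violations = int(np.sum(margins < 1 - 1e-6))
    print(f"C={C:g}: violations={violations}, "
          f"support vectors={len(clf.support_)}")
```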
__________________
When one teaches, two learn.
#6
Thank you very much for your input, Prof. Lin.