  #4  
Old 06-21-2013, 12:53 PM
yaser (Caltech)
 
Join Date: Aug 2009
Location: Pasadena, California, USA
Posts: 1,477
Re: *ANSWER* Q14 about linearly separable by SVM

Quote:
Originally Posted by skwong
(1) In one sense, hard-margin SVM is no different from a simpler algorithm like PLA for linearly separable data (the resulting hypothesis may differ, but they are the same in terms of generalization, Ein = 0, ...).
It is no different in having a linear hypothesis set, but it differs in the learning algorithm, which chooses from that set the particular hypothesis that maximizes the margin.
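
To make the distinction concrete, here is a minimal numerical sketch (assuming numpy and scikit-learn; SVC with a very large C stands in for the hard margin, and the toy target below is just an illustration). Both PLA and the SVM reach Ein = 0 on separable data, but the SVM picks the separator with the largest margin:

[CODE]
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Linearly separable toy data, labeled by a known target w_true.
X = rng.uniform(-1, 1, size=(100, 2))
w_true = np.array([0.5, 1.0, -0.3])          # [bias, w1, w2]
y = np.sign(w_true[0] + X @ w_true[1:])

# PLA: update on a misclassified point until Ein = 0.
Xb = np.hstack([np.ones((100, 1)), X])       # prepend bias coordinate
w = np.zeros(3)
while True:
    mis = np.where(np.sign(Xb @ w) != y)[0]
    if len(mis) == 0:
        break
    i = rng.choice(mis)
    w += y[i] * Xb[i]

# Hard-margin SVM, approximated by a very large C.
svm = SVC(kernel="linear", C=1e10).fit(X, y)

# Margin of a separating hyperplane: min_i y_i (w0 + w.x_i) / ||w||
def margin(w0, wv):
    return np.min(y * (w0 + X @ wv)) / np.linalg.norm(wv)

print("PLA margin:", margin(w[0], w[1:]))
print("SVM margin:", margin(svm.intercept_[0], svm.coef_[0]))
[/CODE]

By construction the SVM margin is the largest achievable, so the PLA margin never exceeds it, even though both hypotheses have Ein = 0.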

Quote:
(5) From what I have done in Q14, with hard-margin SVM + RBF kernel on 100 data points, it can always separate the data (Ein = 0), which matches my understanding.
Your observation is correct.
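
For reference, here is a quick check of that observation (a minimal sketch assuming numpy and scikit-learn, and assuming the final's setup: target f(x) = sign(x2 - x1 + 0.25 sin(pi x1)) and kernel exp(-1.5 ||x - x'||^2)):

[CODE]
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# 100 points in [-1, 1]^2 labeled by the (noiseless) target.
X = rng.uniform(-1, 1, size=(100, 2))
y = np.sign(X[:, 1] - X[:, 0] + 0.25 * np.sin(np.pi * X[:, 0]))

# Hard margin approximated by a very large C; gamma = 1.5 matches
# the exam's RBF kernel exp(-1.5 ||x - x'||^2).
svm = SVC(kernel="rbf", C=1e10, gamma=1.5).fit(X, y)
print("Ein =", np.mean(svm.predict(X) != y))   # expect 0.0
[/CODE]

Running this over many random data sets gives Ein = 0 essentially every time, since distinct points are always separable in the RBF feature space.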

Quote:
Then my question is: is the regular RBF form not normally used for supervised learning?

We learn a lot from the final exam about the regular RBF form, but its performance is normally not as good as SVM's, and we have no clue about the best K.
People do use regular RBF, but not often, and not as often as they once did. The best K (number of clusters) is a perpetual question in unsupervised learning, with many clever techniques but none conclusive.
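
For completeness, here is a minimal sketch of the regular RBF form (assuming numpy and scikit-learn; Lloyd's algorithm via KMeans picks the K centers, and the linear weights come from the pseudo-inverse):

[CODE]
import numpy as np
from sklearn.cluster import KMeans

def rbf_features(X, centers, gamma=1.5):
    # Phi[i, k] = exp(-gamma * ||x_i - mu_k||^2), plus a bias column.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.hstack([np.ones((len(X), 1)), np.exp(-gamma * d2)])

def rbf_regular_fit(X, y, K, gamma=1.5):
    # K centers from Lloyd's algorithm (unsupervised), then linear
    # weights by pseudo-inverse regression on the RBF features.
    centers = KMeans(n_clusters=K, n_init=10).fit(X).cluster_centers_
    w = np.linalg.pinv(rbf_features(X, centers, gamma)) @ y
    return centers, w

def rbf_regular_predict(X, centers, w, gamma=1.5):
    return np.sign(rbf_features(X, centers, gamma) @ w)
[/CODE]

Note that, unlike the kernel form, a small K gives no guarantee of Ein = 0; sweeping K and comparing out-of-sample error on fresh data is about as principled a choice as any.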
__________________
Where everyone thinks alike, no one thinks very much