LFD Book Forum: One-class SVMs

#1
03-14-2013, 09:16 AM
 melipone Senior Member Join Date: Jan 2013 Posts: 72
One-class SVMs

Might be off-topic but I'm not sure where it would go since there is no SVM chapter in the book.

I came across one-class SVMs where support vectors are found w/o class separation. How could that be? What is the hyperplane?
#2
03-14-2013, 01:52 PM
 htlin NTU Join Date: Aug 2009 Location: Taipei, Taiwan Posts: 601
Re: One-class SVMs

Quote:
 Originally Posted by melipone Might be off-topic but I'm not sure where it would go since there is no SVM chapter in the book. I came across one-class SVMs where support vectors are found w/o class separation. How could that be? What is the hyperplane?
There are two common one-class SVM formulations for separating outliers from normal examples without any labeling information. The two are equivalent when using certain kernels; they differ in how they express what an "outlier" is.

Perhaps the more intuitive formulation is to use the "smallest" hypersphere to bound the normal examples; examples falling outside the hypersphere are then considered outliers. So roughly, we minimize (the size of the hypersphere + the penalty for being outside the ball).
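As a rough sketch of that idea (following the support vector data description in the paper linked below, with $a$ the center of the ball, $R$ its radius, $\xi_i$ slack variables, and $C$ a trade-off parameter):

\[
\min_{R,\,a,\,\xi} \;\; R^2 + C \sum_{i} \xi_i
\quad \text{subject to} \quad
\| x_i - a \|^2 \le R^2 + \xi_i, \;\; \xi_i \ge 0.
\]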

http://dl.acm.org/citation.cfm?id=960109

The formulation can then be kernelized using the Lagrange dual, like the binary SVM discussed in class.

The more popular formulation nowadays considers the "normal" examples as those "far from the origin", and outliers as those close to the origin. In a sense, the observed examples are treated as belonging to the positive class, and the origin is treated as the representative of the negative class. The two classes are separated by a hyperplane. So roughly, we minimize (1 / the margin to the origin + the penalty for being on the wrong side of the hyperplane). The actual formulation proposed and implemented in solvers like LIBSVM is slightly more sophisticated than that.

http://dl.acm.org/citation.cfm?id=1119749
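For reference, the usual $\nu$-style primal (roughly as in the paper linked above, with $w$ the separating direction, $\rho$ the offset, $\nu \in (0,1]$ the parameter bounding the outlier fraction, $\phi$ the feature map, and $n$ the number of examples) looks like:

\[
\min_{w,\,\rho,\,\xi} \;\; \frac{1}{2}\|w\|^2 - \rho + \frac{1}{\nu n} \sum_{i=1}^{n} \xi_i
\quad \text{subject to} \quad
w \cdot \phi(x_i) \ge \rho - \xi_i, \;\; \xi_i \ge 0.
\]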

The formulation can also be kernelized.
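If you want to play with this formulation, scikit-learn's OneClassSVM (backed by LIBSVM, as mentioned above) implements it. A minimal sketch, where the data and the gamma/nu values are illustrative assumptions, not anything from this thread:

```python
# Sketch of the nu-style one-class SVM via scikit-learn's OneClassSVM.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
normal = 0.3 * rng.randn(200, 2)                          # tight cluster of "normal" points
outliers = rng.uniform(low=-4.0, high=4.0, size=(20, 2))  # scattered anomalies

# nu upper-bounds the fraction of training points treated as outliers
# and lower-bounds the fraction of support vectors.
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1)
clf.fit(normal)  # trained on the observed examples only -- no labels involved

pred_normal = clf.predict(normal)       # +1 = inlier, -1 = outlier
pred_outliers = clf.predict(outliers)

print("inlier rate on normal data:", (pred_normal == 1).mean())
print("flag rate on outliers:", (pred_outliers == -1).mean())
```

Note that fit() sees only the unlabeled "normal" sample; the +1/-1 split comes from which side of the learned hyperplane (in feature space) each point falls on.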

Hope this helps.
__________________
When one teaches, two learn.



