LFD Book Forum SVM to return probabilistic output
#1
08-19-2012, 11:51 AM
 rainbow Member Join Date: Jul 2012 Posts: 41
SVM to return probabilistic output

Instead of using the SVM for pure classification, is it possible to return probabilities in the form

P(y = +1 | x) = θ(f(x)), for some sigmoid θ of the SVM output f(x),

or by any other transform?
#2
08-19-2012, 03:45 PM
 htlin NTU Join Date: Aug 2009 Location: Taipei, Taiwan Posts: 601
Re: SVM to return probabilistic output

Yes, the usual one used for SVMs is proposed by Platt:

http://citeseerx.ist.psu.edu/viewdoc...10.1.1.41.1639

which is of the form

P(y = +1 | x) = 1 / (1 + exp(A f(x) + B)),

where f(x) is the SVM output, and estimates A and B by a logistic-regression-like optimization problem. An improved implementation for calculating A and B can be found in

Hsuan-Tien Lin, Chih-Jen Lin, and Ruby C. Weng. A Note on Platt's Probabilistic Outputs for Support Vector Machines. Machine Learning, 68(3), 267-276, 2007.

http://www.csie.ntu.edu.tw/~htlin/pa.../plattprob.pdf
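For reference, scikit-learn's SVC (which wraps LIBSVM, and LIBSVM in turn uses the improved implementation above) exposes this through probability=True. A minimal sketch on hypothetical toy data:

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class data: two Gaussian blobs
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) + 2.0, rng.randn(20, 2) - 2.0])
y = np.array([+1] * 20 + [-1] * 20)

# probability=True fits Platt's sigmoid on the SVM outputs
# (via internal cross-validation), making predict_proba available
clf = SVC(kernel="rbf", probability=True, random_state=0)
clf.fit(X, y)

probs = clf.predict_proba(X[:3])  # each row sums to 1; columns follow clf.classes_
```

Note that the sigmoid is fit on held-out decision values, so predict_proba can occasionally disagree with the hard predict labels near the boundary.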

Hope this helps.
__________________
When one teaches, two learn.
#3
08-20-2012, 02:49 PM
 rainbow Member Join Date: Jul 2012 Posts: 41
Re: SVM to return probabilistic output

Thanks!
#4
08-22-2012, 10:11 AM
 patrickjtierney Member Join Date: Jul 2012 Location: Toronto, Canada Posts: 33
Re: SVM to return probabilistic output

Yes. Thank you. Very interesting. I read both papers (well, skimmed some parts) and basically followed them, but I do have a general question.

I can understand A as a saturation factor or gain, but at first glance B is a little confusing. If B is non-zero, then the probability at the decision boundary will not be 1/2.

Is the reason for needing non-zero B that the mapping from Y->T no longer just maps +1 to 1, and -1 to 0, but rather to two values in (0,1) based on the relative number of +1s to -1s?
#5
08-22-2012, 09:21 PM
 samirbajaj Member Join Date: Jul 2012 Location: Silicon Valley Posts: 48
Re: SVM to return probabilistic output

And just out of curiosity - as an extension to the original question:

Can SVMs be used for regression? If so, do they perform better than the regression methods we have learned about in the course?

Thanks.

-Samir
#6
08-23-2012, 03:28 AM
 htlin NTU Join Date: Aug 2009 Location: Taipei, Taiwan Posts: 601
Re: SVM to return probabilistic output

Quote:
 Originally Posted by samirbajaj And just out of curiosity - as an extension to the original question: Can SVMs be used for regression? If so, do they perform better than the regression methods we have learned about in the course? Thanks. -Samir
Yes, there are several extensions of SVM for regression. One, proposed by the original SVM author, is commonly named ε-support vector regression. ε-SVR can be found in common SVM packages such as LIBSVM and shares many interesting properties with the classification formulation. Another extends linear regression and is commonly named least-squares SVM.
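As a concrete illustration (my own sketch, not from the thread), ε-SVR is available in scikit-learn, which wraps LIBSVM; epsilon controls the width of the insensitive tube around the target:

```python
import numpy as np
from sklearn.svm import SVR

# Noisy sine curve as a 1-D regression problem
rng = np.random.RandomState(1)
X = np.sort(6.0 * rng.rand(40, 1), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.randn(40)

# epsilon sets the "tube": residuals smaller than epsilon incur no loss,
# so only points outside the tube become support vectors
reg = SVR(kernel="rbf", C=10.0, epsilon=0.1)
reg.fit(X, y)
pred = reg.predict(X)
```

Whether it beats the linear/logistic regression methods from the course depends on the data; the kernel gives it more flexibility at the cost of more hyperparameters (C, epsilon, kernel width) to validate.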

Hope this helps.
__________________
When one teaches, two learn.
#7
08-23-2012, 03:25 AM
 htlin NTU Join Date: Aug 2009 Location: Taipei, Taiwan Posts: 601
Re: SVM to return probabilistic output

Quote:
 Originally Posted by patrickjtierney Yes. Thank you. Very interesting. I read both papers (well, skimmed some parts) and basically followed but I do have a general question. I can understand A as a saturation factor or gain, but at first glance B is a little confusing. If B is non-zero, then the probability at the decision boundary will not be 1/2. Is the reason for needing non-zero B that the mapping from Y->T no longer just maps +1 to 1, and -1 to 0, but rather to two values in (0,1) based on the relative number of +1s to -1s?
You are very right. My personal interpretation is that B provides an opportunity to calibrate the boundary of the SVM for probability estimates. Recall that SVM is rooted in the large-margin principle, and hence its hyperplane is "right in the middle of the two classes." Arguably, for probability estimates, a good hyperplane (where P(y = +1 | x) = 1/2) should sit somewhat away from the majority class. So there may be a need to "shift" the hyperplane by B.
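A tiny numeric sketch of this shift (my own illustration; the A and B values are hypothetical):

```python
import math

def platt_prob(f, A, B):
    # Platt's sigmoid: P(y = +1 | x) = 1 / (1 + exp(A*f + B))
    return 1.0 / (1.0 + math.exp(A * f + B))

# At the SVM decision boundary, the output f(x) is 0
p_centered = platt_prob(0.0, A=-2.0, B=0.0)  # exactly 0.5
p_shifted = platt_prob(0.0, A=-2.0, B=0.8)   # below 0.5: the P = 1/2 surface
                                             # no longer coincides with f(x) = 0
```

With B ≠ 0, the point where the estimated probability crosses 1/2 moves to f(x) = -B/A rather than f(x) = 0, which is exactly the "shift" described above.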

Hope this helps.
__________________
When one teaches, two learn.
