#1
[question not preserved; from the reply below, it asked for a standard way to obtain probability estimates from SVM outputs]

#2
Yes, the usual one used for SVMs is proposed by Platt:

http://citeseerx.ist.psu.edu/viewdoc...10.1.1.41.1639

It is of the form

\( P(y=+1 \mid f) = \frac{1}{1 + e^{Af + B}} \)

where \(f\) is the SVM's decision value, and it estimates the parameters \(A\) and \(B\) by maximum likelihood on a calibration set. A more numerically robust way to do the fitting is given in:

Hsuan-Tien Lin, Chih-Jen Lin, and Ruby C. Weng. A Note on Platt's Probabilistic Outputs for Support Vector Machines. Machine Learning, 68(3), 267-276, 2007. http://www.csie.ntu.edu.tw/~htlin/pa.../plattprob.pdf

Hope this helps.
__________________
When one teaches, two learn.
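
For anyone who wants to try this, here is a minimal sketch of the fit in Python, assuming NumPy and SciPy are available. The function name `fit_platt` and the sample numbers at the end are made up for illustration and are not taken from either paper; the papers also describe a more careful Newton-style solver.

```python
import numpy as np
from scipy.optimize import minimize

def fit_platt(f, y):
    """Fit A, B in P(y=+1 | f) = 1 / (1 + exp(A*f + B)) by maximum likelihood.

    f : array of SVM decision values on a calibration set
    y : array of labels in {-1, +1}
    """
    n_pos = np.sum(y == +1)
    n_neg = np.sum(y == -1)
    # Regularized targets from Platt's paper (not exactly 0 and 1):
    t = np.where(y == +1, (n_pos + 1.0) / (n_pos + 2.0), 1.0 / (n_neg + 2.0))

    def nll(params):
        A, B = params
        z = A * f + B
        # With p = 1/(1+e^z): -log p = logaddexp(0, z) and
        # -log(1-p) = logaddexp(0, z) - z, both numerically stable.
        lse = np.logaddexp(0.0, z)
        return np.sum(t * lse + (1.0 - t) * (lse - z))

    # Platt's suggested starting point for B; A starts at 0.
    x0 = [0.0, np.log((n_neg + 1.0) / (n_pos + 1.0))]
    return minimize(nll, x0, method="BFGS").x  # returns (A, B)

# Made-up decision values and labels, just to exercise the fit:
f = np.array([-2.1, -1.3, -0.4, 0.1, 0.8, 1.9, 2.5])
y = np.array([-1, -1, -1, +1, +1, +1, +1])
A, B = fit_platt(f, y)
probs = 1.0 / (1.0 + np.exp(A * f + B))
```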
#3
Thanks!
#4
Yes, thank you. Very interesting. I read both papers (well, skimmed some parts) and basically followed, but I do have a general question.

I can understand \(A\) as a saturation factor or gain, but at first glance \(B\) is a little confusing. If \(B\) is non-zero, then the probability at the decision boundary will not be 1/2. Is the reason for needing a non-zero \(B\) that the mapping from the labels \(y\) to the targets \(t\) no longer sends +1 to 1 and -1 to 0, but rather to two values in (0,1) based on the relative number of +1s to -1s?
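(Spelling out the first point: at the decision boundary \(f = 0\), the model gives \( P(y=+1 \mid f=0) = \frac{1}{1+e^{B}} \), which equals 1/2 exactly when \(B = 0\).)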
#5
And just out of curiosity, as an extension to the original question: can SVMs be used for regression? If so, do they perform better than the regression methods we have learned about in the course?

Thanks.
-Samir
#6
Quote: (the question about \(B\) in post #4)

Yes. In Platt's method the targets are not exactly 0 and 1. A positive example gets the target

\( t_{+} = \frac{N_{+} + 1}{N_{+} + 2} \)

and a negative example gets

\( t_{-} = \frac{1}{N_{-} + 2} \)

where \(N_{+}\) and \(N_{-}\) are the numbers of positive and negative examples. With these targets, the maximum-likelihood sigmoid generally does not pass through 1/2 at \(f = 0\), so \(B\) has to be a free parameter.

Hope this helps.
__________________
When one teaches, two learn.
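
As a quick numeric check of how asymmetric these targets get (the counts 30/70 below are made up, not from the papers):

```python
# Hypothetical class counts, just to see how the targets depend on balance.
n_pos, n_neg = 30, 70
t_pos = (n_pos + 1) / (n_pos + 2)   # 31/32 ~ 0.969, target for +1 examples
t_neg = 1 / (n_neg + 2)             # 1/72  ~ 0.014, target for -1 examples
# With unequal counts, matching these targets generally requires B != 0,
# so the fitted sigmoid need not pass through 1/2 at f = 0.
print(t_pos, t_neg)
```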
#7
Quote: (the regression question in post #5)

Yes, there is a regression version, usually called support vector regression (SVR). Instead of squared error it uses the \(\varepsilon\)-insensitive error measure

\( e(y, f(\mathbf{x})) = \max(0, \; |y - f(\mathbf{x})| - \varepsilon) \)

so points within \(\varepsilon\) of the fit contribute no error, and the solution is again expressed in terms of a (usually sparse) set of support vectors. Whether it beats the regression methods from the course depends on the data; the main attractions are the kernel trick and the sparse solution.

Hope this helps.
__________________
When one teaches, two learn.
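
A minimal sketch of SVR in Python, assuming scikit-learn is installed; the sine data and the hyperparameter values are made up for illustration:

```python
# A toy support vector regression fit; data and hyperparameters are arbitrary.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# epsilon sets the half-width of the no-penalty tube around the fit;
# C trades off flatness of the model against violations outside the tube.
model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(X, y)
print("fraction of points kept as support vectors:",
      len(model.support_) / len(X))
```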