Quote:
Originally Posted by htlin
Numerical optimization is a difficult problem, so it is difficult to define the principled way. In specific packages like LIBSVM, a careful implementation is used to stably mark at-bound alphas. In general packages for convex programming, this may not be the case.
The SVM tutorial by Burges indicates that, in practice, you get the threshold b by averaging the b values estimated from the individual margin support vectors. This also seems to be what is done in the Matlab code at
http://users.ecs.soton.ac.uk/srg/pub...ns/pdf/SVM.pdf.
I found that even in some of the HW cases (the 1-versus-5 classification, Q = 5 cases) I did not end up with a set of thresholds that agree closely; the b estimates from different margin SVs differ significantly. I was wondering whether that is expected or indicative of a bug in my implementation. On the other hand, if I used the averaging approach above (with some heuristic cutoffs a0 and b0 to decide which alphas count as margin SVs), I would probably not have noticed this discrepancy in the b values in the first place. Is there a way to get around this?
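For concreteness, here is roughly what I mean by the averaging step (a minimal numpy sketch, not my actual code; alphas, X, y, the kernel K, and the box constraint C are assumed to come from the QP solver, and tol plays the role of the heuristic cutoff mentioned above):

[code]
import numpy as np

def estimate_b(alphas, X, y, K, C, tol=1e-5):
    """Return (mean, std) of the per-margin-SV threshold estimates."""
    sv = alphas > tol                 # all support vectors (heuristic cutoff)
    margin = sv & (alphas < C - tol)  # free / margin SVs: 0 < alpha < C

    # One estimate per margin SV: b_i = y_i - sum_j alpha_j y_j K(x_j, x_i)
    bs = []
    sv_idx = np.where(sv)[0]
    for i in np.where(margin)[0]:
        k_col = np.array([K(X[j], X[i]) for j in sv_idx])
        bs.append(y[i] - np.sum(alphas[sv] * y[sv] * k_col))
    bs = np.array(bs)

    # Averaging as in Burges; the std shows how much the estimates disagree
    return bs.mean(), bs.std()
[/code]

Reporting the standard deviation alongside the mean is what makes the disagreement between the individual b estimates visible, which is exactly the discrepancy I am asking about.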