  #3  
Old 08-29-2012, 08:18 PM
htlin
NTU
 
Join Date: Aug 2009
Location: Taipei, Taiwan
Posts: 601
Default Re: Calculating w from just support vectors--numerically risky?

Quote:
Originally Posted by tzs29970
In a numerically perfect world, \alpha_n would be exactly 0 except at the support vectors, and so w=\sum_{n=1}^N \alpha_n y_n x_n would give the same result as w=\sum_{x_n\mathop{is}SV} \alpha_n y_n x_n.

On real computers, of course, we have to deal with the fact that our calculations have limited precision, and so \alpha_n is usually non-zero nearly everywhere.

I found that if I identified the support vectors before calculating w, by looking for \alpha_n>\epsilon for some small \epsilon, and then calculated w just from those support vectors, I did not get a consistent b. If there were 3 support vectors, sometimes I'd get the same b from all 3, but maybe half the time I'd get one b from two of them, and the third would give a b that was significantly off.

If, however, I used all the vectors to calculate w, rather than just the support vectors, then I'd get the same b from all the support vectors.

My speculation is that just as the \alpha_n values that are supposed to be 0 are off slightly due to floating point precision issues, so too are those that are supposed to be non-zero, and that when you use ALL of the \alpha's to calculate w the errors are balancing out. When you exclude the ones that were "supposed" to be 0, you increase the error in w. This makes intuitive sense because the QP solver was using all the \alpha's to try to achieve minimization, and so any error should be spread among all of them. If we only have 3 support vectors, and so only use 3 \alpha's, the error will be high because 3 is so small we get high variance. By using all the \alpha's, the variance will be lower, and so the error is closer to the mean error, which should be zero.

For those who had errors on problems 8-10: if you used just the support vectors and calculated b from a single support vector, it might be worth adding a check to see whether different support vectors give you different values of b.
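A quick way to run this check on a toy problem is sketched below. The data, the \epsilon threshold, and the injected noise level are all made up for illustration; the point is only the mechanics of comparing w built from all \alpha's against w built from thresholded support vectors, and recovering b from each identified support vector.

```python
import numpy as np

# Hypothetical 1-D-ish toy problem embedded in 2-D.
# x1 and x2 are the true support vectors; x3 lies beyond the margin,
# so its exact dual variable is 0. The exact solution is w=(1,0), b=-1.
X = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])
y = np.array([-1.0, 1.0, 1.0])
alpha_exact = np.array([0.5, 0.5, 0.0])

# Simulate solver round-off: every alpha_n comes back slightly perturbed.
rng = np.random.default_rng(0)
alpha = alpha_exact + 1e-9 * rng.standard_normal(3)

# w computed from ALL alphas vs. from thresholded "support vectors" only.
w_all = (alpha * y) @ X
sv = alpha > 1e-6                      # the alpha_n > epsilon test from the post
w_sv = (alpha[sv] * y[sv]) @ X[sv]

# Recover b = y_s - w . x_s from each identified support vector.
b_all = y[sv] - X[sv] @ w_all
b_sv = y[sv] - X[sv] @ w_sv

# If the b values disagree across support vectors, something is off.
print("spread of b (w from all alphas):", b_all.max() - b_all.min())
print("spread of b (w from SVs only) :", b_sv.max() - b_sv.min())
```

On a toy problem this clean, both spreads come out tiny; the interesting case is running the same check on the \alpha's returned by your own QP solver, where a large spread flags the inconsistency the poster describes.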
Handling the numerical difficulty is indeed a non-trivial task. There are more issues involved than just thresholding with \alpha_n < \epsilon. For instance, most numerical QP solvers cannot give you the *exact* optimal \alpha, only something near the optimum. In that sense, one cannot expect b to be as perfect as the math tells us.

In more specialized SVM solvers (such as LIBSVM), support vectors are clearly identified (so there is no \alpha_n < \epsilon issue). Then, roughly speaking, b is calculated as the average of the possible b values that come from every non-zero \alpha_n. This step reduces the effect of the numerical difficulty, and that is why specialized solvers are often preferred over generic QP solvers. Hope this helps.
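The averaging step above can be sketched as follows. This is only an illustration of the idea on a made-up toy problem, not LIBSVM's actual code; in practice the average is taken over the free (unbounded) support vectors returned by the solver.

```python
import numpy as np

# Hypothetical toy problem: exact dual solution for these three points
# is alpha = (0.5, 0.5, 0), giving w = (1, 0) and b = -1.
X = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])
y = np.array([-1.0, 1.0, 1.0])
alpha = np.array([0.5, 0.5, 0.0])

w = (alpha * y) @ X
sv = alpha > 0

# Instead of trusting a single support vector, average b = y_s - w . x_s
# over all support vectors to damp the numerical error in any one of them.
b = np.mean(y[sv] - X[sv] @ w)
print("w =", w, " b =", b)
```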
__________________
When one teaches, two learn.