01-26-2013, 11:56 PM
yaser
Caltech
Re: Exercises and Problems

Quote:
Originally Posted by srport@pacbell.net
Question about the pocket algorithm described on p. 80 and illustrated on p. 83.

For data that is not linearly separable, I originally thought the pocket algorithm would run exactly like the PLA and simply report back the best-performing w vector found over the entire run. However, the description on p. 80 says that the w vector is not updated if E_{\rm in} gets worse.

Suppose that, after a certain number of iterations, the set of misclassified points is such that every update based on a misclassified point produces a worse-performing w vector. In this case, the algorithm will never be able to find a better w vector.

My experience watching the PLA as it iterates is that it seems to jump about and, with linearly separable data, eventually converges, seemingly at random. I've seen proofs on the internet that it will converge, but I do not really understand them. My point is that each iteration of the PLA does not necessarily improve g(x); rather, this jumping about is part of the process of reaching the best g(x).

Can you please comment?
Hi,

There are two {\bf w} vectors that the pocket algorithm keeps track of. The "pocket weight vector" \hat{\bf w} is the one that is not updated if E_{\rm in} gets worse. However, the other weight vector {\bf w}(t), which is identical to its counterpart in PLA, keeps getting updated with every iteration using a misclassified point. The algorithm then reports \hat{\bf w} as its final solution.
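
For concreteness, here is a minimal sketch of that two-vector bookkeeping in Python. The function name pocket_algorithm, the max_iter parameter, and the data conventions (each row of X already containing a bias coordinate, labels y in {-1, +1}) are illustrative assumptions on my part, not something taken from the book.

import numpy as np

def pocket_algorithm(X, y, max_iter=1000):
    # X: N x d array (bias coordinate included), y: labels in {-1, +1}
    N, d = X.shape
    w = np.zeros(d)        # w(t): updated on every iteration, exactly as in PLA
    w_hat = w.copy()       # pocket vector \hat{w}: replaced only when E_in improves

    def e_in(weights):
        # fraction of misclassified points
        return np.mean(np.sign(X @ weights) != y)

    best_e = e_in(w_hat)
    for _ in range(max_iter):
        misclassified = np.where(np.sign(X @ w) != y)[0]
        if misclassified.size == 0:          # separable case: nothing left to fix
            return w.copy()
        i = np.random.choice(misclassified)
        w = w + y[i] * X[i]                  # PLA update; E_in may well get worse
        current_e = e_in(w)
        if current_e < best_e:               # pocket update only on improvement
            w_hat, best_e = w.copy(), current_e
    return w_hat                             # report the pocket vector, not w(t)

Note that {\bf w}(t) is free to get worse from one update to the next, exactly as in PLA; only the pocket copy \hat{\bf w} improves monotonically in E_{\rm in}, which is why the reported solution never degrades even when the data is not linearly separable.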
__________________
Where everyone thinks alike, no one thinks very much