04-08-2012, 08:32 AM
tcristo
Join Date: Apr 2012
Posts: 23
Default Re: Impact of Alpha on PLA Converging

Originally Posted by htlin:
If you take a deeper look at the steps of the PLA algorithm, you'll find that setting the learning rate to any positive value gives you equivalent results (subject to the same random sequence and equivalent starting weights, of course). For instance, if you start with the zero vector, the final weights that you get for learning rate 1 are simply twice the final weights that you get for learning rate 0.5.
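The scaling argument above can be checked directly: the PLA update is w ← w + η·y·x, so if w starts at the zero vector, every intermediate weight vector is simply scaled by η. That leaves sign(wᵀx) unchanged, so the sequence of mistakes (and the number of updates) is identical for any positive learning rate. A minimal sketch of that check (the toy data, function name, and the deterministic pick-the-first-mistake rule are all assumptions for illustration, not from the course):

```python
import numpy as np

def pla(X, y, eta, max_iter=10_000):
    """Perceptron Learning Algorithm: repeatedly correct the first
    misclassified point until no mistakes remain."""
    w = np.zeros(X.shape[1])   # start from the zero vector, as in the quote
    updates = 0
    for _ in range(max_iter):
        mistakes = np.where(np.sign(X @ w) != y)[0]
        if len(mistakes) == 0:
            return w, updates
        i = mistakes[0]        # deterministic pick -> same sequence for any eta
        w = w + eta * y[i] * X[i]
        updates += 1
    return w, updates

# hypothetical linearly separable toy data
rng = np.random.default_rng(0)
X = np.hstack([np.ones((50, 1)), rng.normal(size=(50, 2))])  # bias column
y = np.sign(X @ np.array([0.1, 1.0, -1.0]))

w1, n1 = pla(X, y, eta=1.0)
w_half, n_half = pla(X, y, eta=0.5)
print(n1 == n_half)                  # same number of updates
print(np.allclose(w1, 2 * w_half))   # final weights scale with eta
```

Both prints come out True here: halving the learning rate halves the weights but does not change which points get corrected or how many updates are needed.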
I agree. However, I wouldn't think that would necessarily halve the number of iterations required to converge, or yield the "best" answer.

I would expect that if your learning rate is too large, it would be possible to "overshoot" the convergence values and therefore require some back-and-forth before the weights settle. Depending on the extent of that oscillation, it may or may not take more iterations than a smaller value would.

I guess you could say something similar about too small a learning rate. It could slowly inch toward one possible set of convergence weights and get stuck in a "local minimum" of sorts without ever finding the "global minimum".