04-16-2012, 03:32 PM
tcristo
Join Date: Apr 2012
Posts: 23
Re: LRA -> PLA Effect of Alpha

Originally Posted by htlin
For PLA, I cannot recall any. For some more general models like Neural Networks, there are efforts (in terms of optimization) to adaptively change the \alpha value. BTW, I think the homework problem asks you to take no \alpha (or a naive choice of 1). Hope this helps.
I originally had my \alpha set to one. I was surprised that running the LRA first to preset the weights, and then running the PLA, didn't decrease the number of iterations more than it did. I am getting roughly a 50% reduction, where I had expected an order of magnitude. Viewed graphically, the LRA appears to do 98+% of the work most of the time.

The size of \alpha doesn't always seem to matter, but there are specific cases where an appropriately chosen \alpha drops the number of iterations by an additional 50%-75%.
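For anyone who wants to reproduce the comparison, here is a minimal sketch of the experiment I'm describing. The data generation, function names, seed, and the particular \alpha values are all my own choices, not anything prescribed by the course. One thing worth noting: since PLA's predictions sign(w·x) are scale invariant, \alpha only rescales w when you start from w = 0, so it can only matter when the starting weights are nonzero (as with the LRA init), because then \alpha sets the update size relative to the initial weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data with a bias column (assumed setup)
n, d = 100, 2
X = np.c_[np.ones(n), rng.uniform(-1, 1, (n, d))]
y = np.sign(X @ rng.standard_normal(d + 1))

def pla(X, y, w0, alpha=1.0, max_iters=100_000):
    """Classic PLA: update on one misclassified point until none remain."""
    w = w0.astype(float).copy()
    for t in range(max_iters):
        mis = np.flatnonzero(np.sign(X @ w) != y)
        if mis.size == 0:
            return w, t          # converged after t updates
        i = rng.choice(mis)      # pick a random misclassified point
        w += alpha * y[i] * X[i]
    return w, max_iters

# PLA from scratch vs. PLA seeded with the linear-regression weights
w_lin = np.linalg.pinv(X) @ y    # LRA: least squares via the pseudo-inverse
_, t_zero = pla(X, y, np.zeros(X.shape[1]))
w_lra, t_lra = pla(X, y, w_lin)
print(f"iterations from zero init: {t_zero}, from LRA init: {t_lra}")

# With a nonzero starting point, alpha sets the update size relative
# to the initial weights, so the iteration count can change with it
for a in (0.01, 1.0, 100.0):
    _, t = pla(X, y, w_lin, alpha=a)
    print(f"alpha={a}: {t} iterations from LRA init")
```

The exact counts depend on the random data, but the zero-init vs. LRA-init gap and the \alpha sweep should show the same qualitative pattern.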

I am going to chew on this for a little while and see if I can figure out the relationship.