Quote:
Originally Posted by htlin
For PLA, I cannot recall any. For some more general models like Neural Networks, there are efforts (in terms of optimization) for adaptively changing the α value. BTW, I think the homework problem asks you to take no α (or a naive choice of α). Hope this helps.
I originally had my α set to one. I was surprised that running the LRA first to preset the weights and then running the PLA didn't significantly decrease the number of iterations required: I am getting a 50% reduction or thereabouts, where I expected an order of magnitude. When you view it graphically, the LRA seems to do 98+% of the work most of the time.
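For what it's worth, here is a minimal sketch of the experiment I'm describing: warm-start the PLA with the linear regression (pseudo-inverse) weights and compare iteration counts against a zero start. The data generator, the margin filter, and the α default are my own choices, not anything specified by the homework.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_separable(n=100, d=2):
    # Random target weights w*; label points by sign(x . w*).
    # Drop near-boundary points so the data has a clear margin
    # (my own choice, so the PLA is guaranteed to stop quickly).
    w_star = rng.standard_normal(d + 1)
    X = np.c_[np.ones(n), rng.uniform(-1, 1, (n, d))]
    m = X @ w_star
    keep = np.abs(m) > 0.2
    return X[keep], np.sign(m[keep])

def lra(X, y):
    # Linear regression via pseudo-inverse, used only to preset the weights.
    return np.linalg.pinv(X) @ y

def pla(X, y, w=None, alpha=1.0, max_iters=10000):
    # Perceptron learning: pick a misclassified point, nudge w toward it.
    if w is None:
        w = np.zeros(X.shape[1])
    for it in range(max_iters):
        mis = np.flatnonzero(np.sign(X @ w) != y)
        if mis.size == 0:
            return w, it          # converged: no misclassified points
        i = rng.choice(mis)
        w = w + alpha * y[i] * X[i]
    return w, max_iters

X, y = make_separable()
w_cold, iters_cold = pla(X, y)                # start from zero weights
w_warm, iters_warm = pla(X, y, w=lra(X, y))   # start from the LRA solution
print(iters_cold, iters_warm)
```

On separable data both runs converge; how much the warm start helps varies run to run, which matches the roughly-50% reduction I'm seeing rather than a guaranteed order of magnitude.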
The size of α doesn't always seem to matter, but there are specific cases where an appropriately chosen α drops the number of iterations by an additional 50%-75%.
I am going to chew on this for a little while and see if I can figure out the relationship.