Quote:
Originally Posted by htlin
Hinted in my reply is that for PLA in particular, using any positive alpha gives you the same (equivalent) answer with exactly the same number of iterations. So convergencewise, alpha doesn't affect PLA at all. Not necessarily true for other algorithms, of course.

Intuitively this didn't make sense to me at first. However, after running the PLA with different alphas on the same training set, I can clearly see that what you are saying is correct. Interestingly, the ratio of the x weight to the y weight, and hence the slope of the decision boundary, is exactly the same regardless of the learning rate alpha.
After thinking about why this is the case, I can almost understand it: since the weights start at zero, every update is proportional to alpha, so changing alpha just rescales the entire weight vector. Rescaling w by a positive factor never changes sign(w·x), so the same points are misclassified at every step and the algorithm makes the exact same sequence of updates.
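Here is a minimal sketch I used to check this (my own toy implementation and made-up data, not the course code): it runs PLA from w = 0 with two different alphas, always updating on the first misclassified point, and confirms the iteration counts match and the final weights differ only by the ratio of the alphas.

```python
import numpy as np

def pla(X, y, alpha, max_iters=10_000):
    """Perceptron Learning Algorithm: repeatedly pick the first
    misclassified point i and update w <- w + alpha * y[i] * X[i].
    Starts from w = 0, so every weight is proportional to alpha."""
    w = np.zeros(X.shape[1])
    for it in range(max_iters):
        # sign(0) == 0 != +-1, so the zero vector misclassifies everything
        mis = np.where(np.sign(X @ w) != y)[0]
        if len(mis) == 0:
            return w, it          # converged: no misclassified points left
        i = mis[0]
        w = w + alpha * y[i] * X[i]
    return w, max_iters

# Toy linearly separable data (hypothetical), with a bias column of 1s
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(20, 2))
labels = np.sign(pts[:, 0] + 0.5 * pts[:, 1] + 0.1)
X = np.hstack([np.ones((20, 1)), pts])

w1, n1 = pla(X, labels, alpha=1.0)
w2, n2 = pla(X, labels, alpha=0.01)
print(n1 == n2)                      # same number of iterations
print(np.allclose(w2, 0.01 * w1))    # weights just scale by alpha
```

Because the misclassified set at each step depends only on sign(w·x), both runs pick the same point every time, and by induction the alpha=0.01 weights are exactly 0.01 times the alpha=1.0 weights, which is why the slope (a ratio of weights) is identical.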
Thanks for following up on my original question!