LRA -> PLA Effect of Alpha
I noticed that when running Linear Regression on a training data set and then running the PLA on the same data, starting from the Linear Regression weights, the learning rate (alpha) of the PLA seems to significantly affect the rate of convergence. I am assuming that the optimal size of alpha is directly related to the size of the errors left by the Linear Regression fit.
Is there a way to model this mathematically so that the alpha parameter can be calculated automatically for each training set?
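To make the setup concrete, here is a minimal sketch of what I mean (NumPy; the toy data, the function names, and the way alpha enters the update are just my illustration, not the book's code):

```python
import numpy as np

def linear_regression_weights(X, y):
    # One-shot linear regression fit: w = pseudo-inverse(X) @ y.
    return np.linalg.pinv(X) @ y

def pla(X, y, w_init, alpha=1.0, max_updates=100000, seed=0):
    # Perceptron learning from a given starting point; each update is
    # scaled by the learning rate alpha. Returns (weights, update count).
    rng = np.random.default_rng(seed)
    w = w_init.astype(float).copy()
    for t in range(max_updates):
        mis = np.where(np.sign(X @ w) != y)[0]
        if mis.size == 0:
            return w, t          # converged after t updates
        i = rng.choice(mis)
        w += alpha * y[i] * X[i]
    return w, max_updates

# Toy linearly separable data with a bias column prepended (illustrative only).
rng = np.random.default_rng(1)
X = np.c_[np.ones(100), rng.uniform(-1, 1, size=(100, 2))]
y = np.sign(X @ np.array([0.05, 1.0, -1.0]))

w_lr = linear_regression_weights(X, y)
for alpha in (0.01, 0.1, 1.0, 10.0):
    _, n_updates = pla(X, y, w_lr, alpha=alpha)
    print(f"alpha = {alpha:>5}: PLA converged after {n_updates} updates")
```

Depending on the data, the update count can vary noticeably across these alpha values, which is the behavior I am asking about.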
Re: LRA -> PLA Effect of Alpha
Quote:
The size of alpha doesn't always seem to matter, but there are specific cases where an appropriately assigned $\alpha$ clearly makes a difference. I am going to chew on this for a little while and see if I can figure out the relationship.
Re: LRA -> PLA Effect of Alpha
No one ever said the PLA was a *good* algorithm. :p It's only guaranteed to converge eventually. I'm sure we'll get to better optimization algorithms later in the lectures.
Re: LRA -> PLA Effect of Alpha
Quote:
As the problem is done with $\alpha$ ...
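One plausible way to pin down the relationship (this sketch assumes the update rule $w \leftarrow w + \alpha\, y_n x_n$ and the Linear Regression weights $w_{\mathrm{LR}}$ as the starting point): the PLA prediction $\operatorname{sign}(w^\top x)$ is unchanged when $w$ is multiplied by a positive constant, so the run

$$w_0 = w_{\mathrm{LR}}, \qquad w_{t+1} = w_t + \alpha\, y_n x_n$$

visits exactly the same sequence of misclassified points as the rescaled run

$$\tilde w_0 = \frac{w_{\mathrm{LR}}}{\|w_{\mathrm{LR}}\|}, \qquad \tilde w_{t+1} = \tilde w_t + \frac{\alpha}{\|w_{\mathrm{LR}}\|}\, y_n x_n.$$

In other words, only the ratio $\alpha / \|w_{\mathrm{LR}}\|$ should matter, not $\alpha$ alone: a large ratio quickly overwrites the regression start, while a small ratio makes each correction a tiny perturbation of it.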