- **Chapter 3 - The Linear Model**
(*http://book.caltech.edu/bookforum/forumdisplay.php?f=110*)

- - **LRA -> PLA Effect of Alpha**
(*http://book.caltech.edu/bookforum/showthread.php?t=353*)

I noticed that when running Linear Regression on a training data set and then running the PLA on the same data, initialized with the LRA weights, the learning rate (alpha) of the PLA seems to significantly affect the rate of convergence. I am assuming that the optimal size of alpha is directly related to the size of the errors remaining after the Linear Regression fit.
Is there a way to model this mathematically, such that the alpha parameter can be calculated automatically for each training set?
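The thread doesn't include code, so here is a minimal sketch of the setup being discussed: linear regression via the pseudo-inverse produces initial weights, and the PLA then refines them with updates scaled by a learning rate alpha. The data set, function names, and alpha values below are all hypothetical, purely for illustration.

```python
import numpy as np

def lra_weights(X, y):
    # Linear regression "one-step" fit via the pseudo-inverse of X.
    return np.linalg.pinv(X) @ y

def pla(X, y, w, alpha=1.0, max_iter=10000):
    # Perceptron Learning Algorithm: on a misclassified point (x_n, y_n),
    # update w <- w + alpha * y_n * x_n; stop when all points are correct.
    for it in range(max_iter):
        mis = np.where(np.sign(X @ w) != y)[0]
        if len(mis) == 0:
            return w, it            # converged after `it` updates
        n = mis[0]                  # pick the first misclassified point
        w = w + alpha * y[n] * X[n]
    return w, max_iter

# Hypothetical linearly separable toy data (bias column of ones prepended).
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(100, 2))
y = np.sign(pts[:, 0] + pts[:, 1])  # target boundary: x1 + x2 = 0
y[y == 0] = 1
X = np.c_[np.ones(len(pts)), pts]

w0 = lra_weights(X, y)
for alpha in (0.01, 0.1, 1.0, 10.0):
    _, iters = pla(X, y, w0.copy(), alpha=alpha)
    print(f"alpha={alpha}: {iters} PLA updates")
```

Note that alpha matters here only because the LRA starting weights have a fixed scale; starting from w = 0, rescaling alpha merely rescales w and changes nothing.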

Re: LRA -> PLA Effect of Alpha
The size of alpha doesn't always seem to matter, but there are specific cases where an appropriately chosen alpha drops the number of iterations by an additional 50%-75%. I am going to chew on this for a little while and see if I can figure out the relationship.

Re: LRA -> PLA Effect of Alpha
No one ever said the PLA was a *good* algorithm. :p It's only guaranteed to converge eventually. I'm sure later in the lectures we'll get to better optimization algorithms.

Re: LRA -> PLA Effect of Alpha
Once the run finishes, then, as you note, the effect is small. What it seems is that if the LRA solution already classifies all the points correctly, no PLA cycles are used at all; otherwise, about as many are used as before. The 50% figure corresponds to the cases where no PLA cycles are used.
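The point above, that the big savings come from runs where the regression fit is already a perfect classifier, can be checked empirically. The experiment below is a hypothetical sketch (random target lines and data set sizes are my own choices, not from the thread): it estimates how often the pseudo-inverse fit alone separates a small random data set, i.e. how often zero PLA updates would be needed.

```python
import numpy as np

def lra_separates(n_points=10, rng=None):
    # One trial: draw a random linear target and a random data set,
    # fit linear regression, and check whether the fitted weights
    # already classify every point correctly (zero PLA updates needed).
    rng = rng if rng is not None else np.random.default_rng()
    a, b = rng.uniform(-1, 1, 2)                 # target line x2 = a*x1 + b
    pts = rng.uniform(-1, 1, (n_points, 2))
    y = np.sign(pts[:, 1] - a * pts[:, 0] - b)   # labels from the target
    X = np.c_[np.ones(n_points), pts]
    w = np.linalg.pinv(X) @ y                    # linear regression fit
    return bool(np.all(np.sign(X @ w) == y))

rng = np.random.default_rng(1)
trials = 500
hits = sum(lra_separates(rng=rng) for _ in range(trials))
print(f"fraction of trials needing zero PLA updates: {hits / trials}")
```

If that fraction is high, most of the iteration count saved by the LRA warm start comes from these zero-update runs, consistent with the observation quoted above.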



The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.