LFD Book Forum LRA -> PLA Effect of Alpha

#1
04-16-2012, 11:58 AM
 tcristo Member Join Date: Apr 2012 Posts: 23
LRA -> PLA Effect of Alpha

I noticed that when running Linear Regression on a training data set and then running the PLA on the same data starting from the LRA weights, the learning rate (alpha) of the PLA seems to significantly affect the rate of convergence. I am assuming that the optimal size of alpha is directly related to the size of the residual classification errors left by the Linear Regression.

Is there a way to model this mathematically such that the Alpha parameter can automatically be calculated for each training set?
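For concreteness, here is a minimal numpy sketch of the setup described above (linear regression via the pseudo-inverse, then PLA updates scaled by alpha). Function and variable names are my own, not from the homework:

```python
import numpy as np

def lra_then_pla(X, y, alpha=1.0, max_iters=10000):
    """Fit linear regression weights, then refine them with the PLA.

    X: (N, d) inputs without a bias column; y: (N,) labels in {-1, +1}.
    alpha is the PLA learning rate being discussed in this thread.
    Returns (weights, number of PLA update rounds used).
    """
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend bias term
    w = np.linalg.pinv(Xb) @ y                     # LRA: w = X^+ y
    for it in range(max_iters):
        wrong = np.where(np.sign(Xb @ w) != y)[0]  # misclassified points
        if wrong.size == 0:
            return w, it                           # converged
        i = np.random.choice(wrong)                # pick one at random
        w = w + alpha * y[i] * Xb[i]               # PLA step, scaled by alpha
    return w, max_iters
```

Comparing the returned iteration count for different alpha values (and against a zero-weight start) reproduces the effect described above.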
#2
04-16-2012, 02:50 PM
 htlin NTU Join Date: Aug 2009 Location: Taipei, Taiwan Posts: 610
Re: LRA -> PLA Effect of Alpha

Quote:
 Originally Posted by tcristo I noticed that when running Linear Regression on a training data set and then running the PLA on the same data starting from the LRA weights, the learning rate (alpha) of the PLA seems to significantly affect the rate of convergence. I am assuming that the optimal size of alpha is directly related to the size of the residual classification errors left by the Linear Regression. Is there a way to model this mathematically such that the Alpha parameter can automatically be calculated for each training set?
For PLA, I cannot recall any. For some more general models like Neural Networks, there are efforts (in terms of optimization) to adaptively change the value. BTW, I think the homework problem asks you to take no alpha (or a naive choice of alpha). Hope this helps.
__________________
When one teaches, two learn.
#3
04-16-2012, 03:32 PM
 tcristo Member Join Date: Apr 2012 Posts: 23
Re: LRA -> PLA Effect of Alpha

Quote:
 Originally Posted by htlin For PLA, I cannot recall any. For some more general models like Neural Networks, there are efforts (in terms of optimization) to adaptively change the value. BTW, I think the homework problem asks you to take no alpha (or a naive choice of alpha). Hope this helps.
I originally had my alpha set at one. I was surprised that running the LRA first to preset the weights and then running the PLA didn't significantly decrease the number of iterations required. I am getting roughly a 50% reduction, but expected an order of magnitude. When you view it graphically, the LRA seems to do 98+% of the work most of the time.

The size of alpha doesn't always seem to matter, but there are specific cases where an appropriately assigned alpha drops the number of iterations by an additional 50%-75%.

I am going to chew on this for a little while and see if I can figure out the relationship.
#4
04-16-2012, 08:45 PM
 jsarrett Member Join Date: Apr 2012 Location: Sunland, CA Posts: 13
Re: LRA -> PLA Effect of Alpha

No one ever said the PLA was a *good* algorithm. It's only guaranteed to converge eventually. I'm sure later in the lecture we'll get to better optimization algorithms.
#5
02-12-2013, 03:27 AM
 gah44 Invited Guest Join Date: Jul 2012 Location: Seattle, WA Posts: 153
Re: LRA -> PLA Effect of Alpha

Quote:
 Originally Posted by tcristo I originally had my alpha set at one. I was surprised that running the LRA first to preset the weights and then running the PLA didn't significantly decrease the number of iterations required. I am getting roughly a 50% reduction, but expected an order of magnitude. When you view it graphically, the LRA seems to do 98+% of the work most of the time. (snip)

As the problem is set up, then, as you note, the effect is small. What seems to happen is that if the LRA solution correctly classifies all the points, then no cycles of PLA are needed; otherwise, just about as many as before. The 50% reduction comes from the cases where no cycles of PLA are needed.
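That observation is easy to test directly: start PLA from the regression weights and count the updates; the count is zero exactly when the LRA weights already classify every point. A rough numpy sketch (names are mine, not from the course):

```python
import numpy as np

def pla_updates_from(Xb, y, w, alpha=1.0, max_iters=10000):
    """Count the PLA updates needed starting from initial weights w.

    Xb: (N, d+1) inputs including the bias column; y: labels in {-1, +1}.
    Returns 0 when w already classifies every point correctly, which is
    the case described above where LRA alone finishes the job.
    """
    w = w.copy()
    for updates in range(max_iters):
        wrong = np.where(np.sign(Xb @ w) != y)[0]
        if wrong.size == 0:
            return updates                 # w separates the data
        i = wrong[0]                       # deterministic pick for clarity
        w += alpha * y[i] * Xb[i]          # standard PLA update
    return max_iters
```

Averaging this count over many random data sets, split by whether the LRA weights separate the data, would show how much of the reported 50% comes from the zero-update runs.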


