Old 04-16-2012, 11:58 AM
tcristo
LRA -> PLA Effect of Alpha

I noticed that when I run Linear Regression on a training data set and then run the PLA on the same data, starting from the LRA weights, the learning rate (alpha) of the PLA seems to significantly affect the rate of convergence. I am assuming that the optimal size of alpha is directly related to the size of the residual classification errors left over from the Linear Regression fit.

Is there a way to model this mathematically so that the alpha parameter can be calculated automatically for each training set?
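For anyone who wants to reproduce the effect, here is a minimal sketch of the experiment I have in mind. The data setup, function names, and alpha values are all my own illustration, not from the course materials: generate separable 2-D data, fit linear regression weights with the pseudo-inverse, then run the PLA starting from those weights and count updates for several alphas.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N points in [-1, 1]^2 with a bias coordinate,
# labeled by a random target line (so the data is linearly separable).
N = 100
X = np.c_[np.ones(N), rng.uniform(-1, 1, (N, 2))]
w_target = rng.uniform(-1, 1, 3)
y = np.sign(X @ w_target)
y[y == 0] = 1  # guard against points exactly on the target line

# Linear regression weights via the pseudo-inverse.
w_lin = np.linalg.pinv(X) @ y

def pla_iterations(w0, alpha, max_iter=10000):
    """Run the PLA from initial weights w0 with learning rate alpha;
    return the number of updates until no point is misclassified."""
    w = w0.copy()
    for t in range(max_iter):
        preds = np.sign(X @ w)
        wrong = np.flatnonzero(preds != y)
        if wrong.size == 0:
            return t
        i = rng.choice(wrong)           # pick a random misclassified point
        w += alpha * y[i] * X[i]        # standard PLA update, scaled by alpha
    return max_iter

for alpha in (0.01, 0.1, 1.0, 10.0):
    print(f"alpha={alpha}: {pla_iterations(w_lin, alpha)} iterations")
```

The intuition behind the alpha dependence: starting from a nonzero w_lin, a very small alpha makes each correction tiny relative to the magnitude of the regression weights, while a very large alpha swamps the good starting solution on the first update. That suggests the best alpha should scale with how far w_lin is from a separating solution, which is essentially the question being asked.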