 LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Homework 1 (http://book.caltech.edu/bookforum/forumdisplay.php?f=130)
-   -   Impact of Alpha on PLA Converging (http://book.caltech.edu/bookforum/showthread.php?t=286)

 tcristo 04-07-2012 07:02 PM

Impact of Alpha on PLA Converging

Two of the questions (7 & 9) ask how many iterations it takes for the PLA to converge. I would expect this to be a function of both the sample size N that was mandated and the alpha (learning rate) that is selected. Is this not correct?

 yaser 04-07-2012 07:18 PM

Re: Impact of Alpha on PLA Converging

Quote:
 Originally Posted by tcristo (Post 1021) Two of the questions (7 & 9) ask how many iterations it takes for the PLA to converge. I would expect this to be a function of both the sample size N that was mandated and the alpha (learning rate) that is selected. Is this not correct?
The PLA rule we use does not have a learning rate (or has a learning rate of 1 if you will). The size N indeed affects the number of iterations, and the homework questions specify particular values for N.
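For reference, the PLA update from the lectures, applied to a misclassified point $(\mathbf{x}(t), y(t))$, is $\mathbf{w}(t+1) = \mathbf{w}(t) + y(t)\,\mathbf{x}(t)$; a learning rate $\alpha$ would simply scale the correction term $y(t)\,\mathbf{x}(t)$ by $\alpha$, which is why $\alpha = 1$ is implicit.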

 tcristo 04-07-2012 07:32 PM

Re: Impact of Alpha on PLA Converging

Quote:
 Originally Posted by yaser (Post 1022) The PLA rule we use does not have a learning rate (or has a learning rate of 1 if you will). The size N indeed affects the number of iterations, and the homework questions specify particular values for N.
Thanks! I didn't realize that the learning rate wasn't present in the model you had discussed during the first lecture. I had previously run all my data at .5 so it will be interesting to see what the difference is when I set it to 1.

 htlin 04-07-2012 11:25 PM

Re: Impact of Alpha on PLA Converging

If you take a deeper look at the steps of the PLA algorithm, you'll find that setting the learning rate to any positive value gives you equivalent results (subject to the same random sequence and equivalent starting weights, of course). For instance, if you start with the zero vector, the final weights that you get for learning rate 1 are simply twice the final weights that you get for learning rate 0.5. ;)
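To spell the argument out: starting from $\mathbf{w}_\alpha(0) = \mathbf{0}$, the update on a misclassified point is $\mathbf{w}_\alpha(t+1) = \mathbf{w}_\alpha(t) + \alpha\, y(t)\,\mathbf{x}(t)$. Since $\mathrm{sign}(\alpha\,\mathbf{w}_1(t) \cdot \mathbf{x}) = \mathrm{sign}(\mathbf{w}_1(t) \cdot \mathbf{x})$ for any $\alpha > 0$, a run with rate $\alpha$ flags exactly the same misclassified points as a run with rate 1, and by induction $\mathbf{w}_\alpha(t) = \alpha\,\mathbf{w}_1(t)$ at every step.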

 tcristo 04-08-2012 07:32 AM

Re: Impact of Alpha on PLA Converging

Quote:
 Originally Posted by htlin (Post 1026) If you take a deeper look at the steps of the PLA algorithm, you'll find that setting the learning rate to any positive value gives you equivalent results (subject to the same random sequence and equivalent starting weights, of course). For instance, if you start with the zero vector, the final weights that you get for learning rate 1 are simply twice the final weights that you get for learning rate 0.5. ;)
I agree. However, I wouldn't think that would necessarily result in halving the number of iterations required to converge, or in the "best" answer.

I would expect that if your learning rate is too large it would be possible to "overshoot" the convergence values and therefore require some back and forth before they settle. Depending upon the extent of that oscillation it may or may not require more iterations than a smaller value.

I guess you could say something similar about too small a learning rate. It could slowly inch up to one possible set of convergence weight values and get stuck in a "local minimum" of sorts without truly finding the "global minimum".

 htlin 04-08-2012 08:45 AM

Re: Impact of Alpha on PLA Converging

Quote:
 Originally Posted by tcristo (Post 1037) I agree. However, I wouldn't think that would necessarily result in halving the number of iterations required to converge, or in the "best" answer.
Hinted in my reply is that for PLA in particular, using any positive alpha gives you the same (equivalent) answer with exactly the same number of iterations. So convergence-wise, alpha doesn't affect PLA at all. ;) Not necessarily true for other algorithms, of course.

 tcristo 04-08-2012 10:58 AM

Re: Impact of Alpha on PLA Converging

Quote:
 Originally Posted by htlin (Post 1043) Hinted in my reply is that for PLA in particular, using any positive alpha gives you the same (equivalent) answer with exactly the same number of iterations. So convergence-wise, alpha doesn't affect PLA at all. ;) Not necessarily true for other algorithms, of course.
Intuitively this didn't make any sense to me. However, when running the PLA at different alphas on the same training set, I can clearly see that what you are saying is correct. Interestingly, the ratio of the x weight to the y weight is exactly the same, and so is the slope, regardless of the learning rate alpha.
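For anyone who wants to reproduce the check, here is a minimal sketch (the target line w_true, the sample size, the helper name run_pla, and the deterministic choice of misclassified point are all illustrative assumptions, not anything mandated by the homework):

Code:
import numpy as np

def run_pla(X, y, alpha):
    # Start from the zero weight vector, as in the lectures.
    w = np.zeros(X.shape[1])
    iterations = 0
    while True:
        misclassified = np.where(np.sign(X @ w) != y)[0]
        if misclassified.size == 0:
            return w, iterations
        i = misclassified[0]          # deterministic pick, so runs are comparable
        w = w + alpha * y[i] * X[i]   # PLA update, scaled by alpha
        iterations += 1

# Illustrative data: 100 points in [-1, 1]^2 with a bias coordinate,
# labeled by a made-up target line w_true (so the data is separable).
rng = np.random.default_rng(0)
X = np.hstack([np.ones((100, 1)), rng.uniform(-1, 1, size=(100, 2))])
w_true = np.array([0.2, -1.0, 0.5])
y = np.sign(X @ w_true)

w1, n1 = run_pla(X, y, alpha=1.0)
w05, n05 = run_pla(X, y, alpha=0.5)
print(n1 == n05)                 # True: identical iteration counts
print(np.allclose(w1, 2 * w05))  # True: weights simply scale with alpha

Because the same point is picked at each step, the two runs trace the same trajectory up to scale, which is why the iteration counts match exactly.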

After thinking about why this is the case I can almost understand it :)

Thanks for following up on my original question!

 lacamotif 04-08-2012 11:52 AM

Re: Impact of Alpha on PLA Converging

What range should we expect the weights to fall in? Should it be less than 1, less than 10, or greater?

Any help appreciated - thanks.

 yaser 04-08-2012 12:07 PM

Re: Impact of Alpha on PLA Converging

Quote:
 Originally Posted by lacamotif (Post 1051) What range should we expect the weights to fall in? Should it be less than 1, less than 10, or greater? Any help appreciated - thanks.
The size of the different components of the weight vector w can vary significantly based on the data set and the number of iterations.

 kurts 04-08-2012 11:30 PM

Re: Impact of Alpha on PLA Converging

In my simulation, I plotted the random data points and the "true" f(x) line, and it helped me intuitively see that with many points (e.g. N=100) there is much less "wiggle room" for two lines to fit between the same set of "boundary" points (i.e., the points that are "closest" to the f(x) line). With fewer points (say, N=10), there could be a huge variation in the slope and intercept of two lines that both "fit" the data.
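To put a rough number on that "wiggle room", here is a minimal sketch (the random target f, the number of runs, the test-set size, and the helper names pla and avg_disagreement are all made-up illustrative choices) that estimates how often the learned line g disagrees with the target f for N=10 versus N=100:

Code:
import numpy as np

rng = np.random.default_rng(1)

def pla(X, y):
    # Standard PLA from the zero vector, random misclassified pick.
    w = np.zeros(X.shape[1])
    while True:
        bad = np.where(np.sign(X @ w) != y)[0]
        if bad.size == 0:
            return w
        i = rng.choice(bad)
        w = w + y[i] * X[i]

def avg_disagreement(n_points, runs=200, n_test=10000):
    # Average P[f(x) != g(x)] over many random targets and data sets.
    total = 0.0
    for _ in range(runs):
        w_f = rng.normal(size=3)   # made-up random target line f
        X = np.hstack([np.ones((n_points, 1)), rng.uniform(-1, 1, (n_points, 2))])
        g = pla(X, np.sign(X @ w_f))
        T = np.hstack([np.ones((n_test, 1)), rng.uniform(-1, 1, (n_test, 2))])
        total += np.mean(np.sign(T @ w_f) != np.sign(T @ g))
    return total / runs

# The disagreement shrinks noticeably as N grows: less "wiggle room".
print(avg_disagreement(10), avg_disagreement(100))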

I would think that you could get a better starting point than w = 0 by examining the data at the "boundary" points, where the result y changes from -1 to +1 and somehow use that information?
