#28, 04-09-2012, 09:27 PM
tcristo (Member, joined Apr 2012, 23 posts)
Re: Perceptron Learning Algorithm

Quote:
Originally Posted by eghri
However, there is usually a learning rate associated with the perceptron such as alpha, which would make the update on the intercept:

w_0 = w_0 + alpha * y_i

So you can see here that the algorithm would accommodate non-integer values. In our case, without a learning rate, we just have to hope it converges with an integer value intercept.
Take a look at this thread: http://book.caltech.edu/bookforum/sh...43&postcount=6. With the weights initialized to zero, the learning rate only scales the weight vector, so it never changes the sign of any prediction. Using the default alpha of 1 (or any other alpha) shouldn't have any effect on the number of iterations required to converge.
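Here's a minimal sketch that demonstrates the point. It's not the course's reference implementation, and the toy dataset is made up for illustration: starting from w = 0, two different alphas produce bit-identical prediction signs at every step, so the update counts match.

```python
import numpy as np

def pla(X, y, alpha=1.0, max_iter=10000):
    """Perceptron learning: repeatedly pick a misclassified point and update.

    Rows of X are assumed to already include the bias coordinate x_0 = 1.
    Returns (weights, number_of_update_steps). Weights start at zero.
    """
    w = np.zeros(X.shape[1])
    for t in range(max_iter):
        preds = np.sign(X @ w)
        mis = np.where(preds != y)[0]
        if len(mis) == 0:
            return w, t
        i = mis[0]  # deterministic pick so the two runs are comparable
        w = w + alpha * y[i] * X[i]
    return w, max_iter

# Toy linearly separable data (hypothetical, not the homework's data).
X = np.array([[1, 2.0, 1.0], [1, 1.0, 3.0], [1, -1.0, -1.5], [1, -2.0, 0.5]])
y = np.array([1, 1, -1, -1])

_, n1 = pla(X, y, alpha=1.0)
_, n2 = pla(X, y, alpha=0.25)
# From w = 0, alpha just rescales w, so sign(X @ w) -- and hence the whole
# sequence of updates -- is identical: n1 == n2.
```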

Quote:
Originally Posted by eghri
I actually had one case myself where it wouldn't converge. To avoid biasing my average results, I'm going to just run the algorithm to 100k iterations and throw out anything that doesn't fully converge.
Assuming the data classes are linearly separable, the PLA is guaranteed to converge. You might want to plot the training data for the cases where it doesn't converge. I had a problem in my initial implementation, and after reviewing the plot I realized I had a bug in my update method.
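As a sanity check for that kind of debugging, hitting an iteration cap is a useful red flag. The sketch below (my own illustration, not code from the course) runs PLA with a cap on the classic non-separable XOR labeling, where convergence is impossible, so the cap is always reached:

```python
import numpy as np

def pla_with_cap(X, y, max_iter=100000):
    """PLA that reports whether it converged within max_iter updates."""
    w = np.zeros(X.shape[1])
    for t in range(max_iter):
        mis = np.where(np.sign(X @ w) != y)[0]
        if len(mis) == 0:
            return w, t, True
        i = mis[t % len(mis)]  # cycle through misclassified points
        w = w + y[i] * X[i]
    return w, max_iter, False

# XOR labels: the textbook non-separable case, so PLA can never converge.
X_xor = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1.0]])
y_xor = np.array([-1, 1, 1, -1])
w, n, converged = pla_with_cap(X_xor, y_xor, max_iter=1000)
# converged is False: hitting the cap means either the data isn't separable
# or there's a bug in the update, and plotting the points will tell you which.
```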
