#27, 04-09-2012, 08:53 PM
eghri
 
Re: Perceptron Learning Algorithm

Quote:
Originally Posted by davies
Hello Professor,

I have a question about updating w on each PLA iteration. If we always assign x0 = 1, how can we reasonably update w0? Under the vector-addition update, it always changes by y * x0 = -1 or +1, so if the true w0 is not an integer, the PLA can never converge to that value.

Would it be more appropriate to set things up so that we divide each component of the true w by w0, making w0 = 1 always? That way I know it is an integer, and my PLA does not have to hunt for a non-integer value. Going further, if I know w0 is always 1, I might not even include it in the PLA at all, since it is 1 by construction.

Thank you,
-Aaron
I had the exact same question. I believe that update is indeed the correct one:

w_0 = w_0 + y_i

However, the perceptron usually carries a learning rate, commonly called alpha, which makes the update on the intercept:

w_0 = w_0 + alpha * y_i

So you can see that with a fractional alpha the algorithm can reach non-integer intercepts. In our case, without a learning rate, the intercept stays integer-valued, and we just have to hope the run still converges. (It usually can: PLA only needs to find some separating hyperplane, not the true w, and any separating w can be rescaled, so an integer w0 paired with suitably larger integer weights elsewhere can still separate the data.)

I actually had one case myself where it wouldn't converge. To avoid biasing my average results, I'm going to cap each run at 100k iterations and throw out anything that doesn't fully converge.
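
For concreteness, here is a minimal Python sketch of how I read the algorithm, with the alpha update and the iteration cap in one place. The function name pla, the zero initialization, and picking a random misclassified point each round are my own illustrative choices, not anything prescribed in the course:

Code:

import numpy as np

def pla(X, y, alpha=1.0, max_iter=100_000, rng=None):
    # Perceptron Learning Algorithm with labels y in {-1, +1}.
    # X is an (N, d) array WITHOUT the constant coordinate; x0 = 1 is
    # prepended here, so w[0] is the intercept and each correction
    # moves it by alpha * y[i], exactly as discussed above.
    rng = np.random.default_rng() if rng is None else rng
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend x0 = 1
    w = np.zeros(Xb.shape[1])

    for _ in range(max_iter):
        misclassified = np.flatnonzero(np.sign(Xb @ w) != y)
        if misclassified.size == 0:
            return w, True                 # converged
        i = rng.choice(misclassified)      # random misclassified point
        w += alpha * y[i] * Xb[i]          # w[0] += alpha * y[i]
    return w, False                        # hit the cap; discard this run

With alpha = 1.0 and w starting at zero, w[0] stays an integer forever, which is exactly the situation discussed above; a fractional alpha scales every component of the step, intercept included.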