#36
04-12-2012, 08:43 AM
mephistoyourfriend
Junior Member

Join Date: Apr 2012
Posts: 1
Re: Perceptron Learning Algorithm

Quote:
Originally Posted by shockwavephysics
I have been trying to figure out why updating with w -> w + y_n * x_n works at all. I looked up the relevant section in the text, and there is a series of questions for the student that hint at the answer. I followed that logic to its conclusion, and it does seem to show that this update always gives a w that is better (for the misclassified point) than the previous w. However, I cannot figure out how one comes up with this formulation in the first place. Is there a reference to a derivation I can read?
The scalar product between x and w+x is always greater than the scalar product between x and w (unless x is the zero vector). This is immediate from bilinearity: <x, w+x> = <x, w> + <x, x>, and <x, x> = ||x||^2 > 0. Similarly, <x, w-x> = <x, w> - ||x||^2 is always less than <x, w>. Since the sign of the scalar product is the only thing you're interested in, changing the sign from -1 to +1 is achieved by making the scalar product more positive (or rather, less negative), which you do by adding x to w; to go from +1 to -1 you subtract x instead, and that is exactly what w -> w + y_n * x_n does for y_n = +/-1.

It's not guaranteed that you can accomplish this in one step: if the angle between x and w is 180 degrees, then <x, w+x> = ||x||^2 - ||x|| ||w||, which is still negative whenever ||x|| is smaller than ||w||. But each update moves the product in the right direction by ||x||^2, so given a sufficient number of steps, you will be able to nudge it into submission. That's how I see it, at least.
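If it helps, here is a minimal numerical sketch in Python (my own illustration, not from the book, with made-up values): it builds a misclassified point, applies the update once, and checks that y * <w, x> increases by exactly ||x||^2, which is the bilinearity argument above in code form.

import numpy as np

# Hypothetical numbers for illustration: w is the current weight vector,
# x is a training point, and y is chosen so that (x, y) is misclassified.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
x = rng.normal(size=3)
y = -np.sign(w @ x)            # force sign(<w, x>) != y

before = y * (w @ x)           # negative, since the point is misclassified
w = w + y * x                  # the perceptron update w -> w + y_n * x_n
after = y * (w @ x)

# after - before equals ||x||^2, so every update pushes y*<w,x> toward
# positive, even when a single step is not enough to flip the sign.
print(before, after, after - before, x @ x)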