#2  04-06-2012, 11:51 PM
yaser (Caltech)
Re: Perceptron Learning Algorithm

Quote:
Originally Posted by GraceLAX View Post
I drew a line between two points on the xy plane [-1,1] in both directions.

Then I randomly generated another 10 points, assigning them +1 if they fell above the line and -1 if they fell below the line. I stored those values in ideal function g.
That would be f (the target function). The symbol g is reserved for the final hypothesis that the PLA produces, which should end up close to f.
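As a hedged sketch of the setup described above (the names and data layout are illustrative, not from the original post): pick two random points in [-1,1]^2, take the line through them as the target function f, and label ten random points by which side of the line they fall on.

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Two random points in [-1, 1]^2 define the target boundary.
p1 = (random.uniform(-1, 1), random.uniform(-1, 1))
p2 = (random.uniform(-1, 1), random.uniform(-1, 1))

def f(x):
    """Target function: +1 if x lies on one side of the line through p1, p2, else -1."""
    # The sign of the cross product of (p2 - p1) with (x - p1) tells us
    # which side of the line x falls on.
    cross = (p2[0] - p1[0]) * (x[1] - p1[1]) - (p2[1] - p1[1]) * (x[0] - p1[0])
    return 1 if cross > 0 else -1

# Ten training points, labeled by the target f (not by g).
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(10)]
data = [(x, f(x)) for x in points]
```

The labels come from f; the PLA's job is then to produce a g that agrees with these labels.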

Quote:
Originally Posted by GraceLAX View Post
Then I gave each point a random first guess of +1/-1 as my initial function h.
The initial hypothesis h must itself be a perceptron, i.e., h({\bf x})={\rm sign}({\bf w}^{\rm T}{\bf x}) for some weight vector {\bf w}, rather than a random assignment of \pm 1's to the points.

Quote:
Originally Posted by GraceLAX View Post
If I start with all the weights as 0, then w*x = 0 for all points.
At that rate, PLA will never converge.
If you start with a zero weight vector and take {\rm sign}(0)=0, then every point is misclassified, so pick any point for the first iteration. When you update the weight vector, {\bf w}+{\bf x}_n y_n uses the target label y_n, so the updated weight vector won't be zero.
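A minimal sketch of that update loop, under the conventions above ({\bf w} starts at zero, sign(0)=0 so every point is initially misclassified; the function name and data format are assumptions for illustration):

```python
import random

def sign(v):
    """Sign function with sign(0) = 0, as in the discussion above."""
    return (v > 0) - (v < 0)

def pla(data):
    """PLA on data = [((x1, x2), y), ...] with y in {-1, +1}.
    Returns weights (w0, w1, w2) for h(x) = sign(w0 + w1*x1 + w2*x2).
    Assumes the data is linearly separable, so the loop terminates."""
    w = [0.0, 0.0, 0.0]  # zero initial weight vector
    while True:
        misclassified = [(x, y) for x, y in data
                         if sign(w[0] + w[1] * x[0] + w[2] * x[1]) != y]
        if not misclassified:
            return w
        x, y = random.choice(misclassified)  # pick any misclassified point
        # The update w + x_n * y_n uses the target label y, so even on the
        # first iteration (w = 0) the new weight vector is nonzero.
        w = [w[0] + y, w[1] + y * x[0], w[2] + y * x[1]]
```

On a small separable sample, the returned w classifies every training point correctly, which is exactly the stopping condition.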
__________________
Where everyone thinks alike, no one thinks very much