Hello,

I'm trying some things out with the perceptron learning algorithm and I've come across a case where the boundary line moves away from a misclassified point. Can someone shed some light on what I'm doing wrong? Here's what I did:

Choose a point x = <1, 6, 6> (the leading 1 is the artificial bias coordinate). Suppose its true label is -1, that is, y(x) = -1.

Now, choose a weight vector w = <-10, 1, 1.5>. w misclassifies x:

y(x) != sign(w*x)

sign(w*x) = sign(<-10, 1, 1.5>*<1, 6, 6>) = sign(-10*1 + 1*6 + 1.5*6) = sign(-10 + 6 + 9) = sign(5) = +1
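The classification check above can be reproduced with a short script (a sketch using plain Python lists; `w`, `x`, and `y` are just the quantities from this post):

```python
# Check whether w classifies x correctly.
# x includes the bias placeholder 1 as its first coordinate.
w = [-10, 1, 1.5]
x = [1, 6, 6]
y = -1  # the intended label for x

def sign(v):
    return 1 if v >= 0 else -1

dot = sum(wi * xi for wi, xi in zip(w, x))
print(dot)        # 5.0
print(sign(dot))  # 1, but y is -1, so x is misclassified
```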

Such a setup looks like this: [figure omitted: the point x plotted against the boundary line for w = <-10, 1, 1.5>]

Now, apply the update rule to the weight vector and the misclassified example (I'm using slightly different notation from the book's):

w(t + 1) = w(t) + y(x)*x

= <-10, 1, 1.5> + (-1)*<1, 6, 6>

= <-10, 1, 1.5> + <-1, -6, -6>

= <-10 + -1, 1 + -6, 1.5 + -6>

= <-11, -5, -4.5>
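The same update, written as a self-contained code sketch (plain Python lists, no external libraries; the names are just the quantities from this post):

```python
# Perceptron update on the misclassified example:
# w(t+1) = w(t) + y(x) * x
w = [-10, 1, 1.5]
x = [1, 6, 6]
y = -1

w_new = [wi + y * xi for wi, xi in zip(w, x)]
print(w_new)  # [-11, -5, -4.5]
```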

The new boundary line for w = <-11, -5, -4.5> looks like this: [figure omitted: the updated boundary line, which has moved away from the point x]

But the boundary line has moved away from the misclassified point! Why did this happen? What am I doing wrong?
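For reference, the updated weight vector can be checked numerically the same way as the original one. A self-contained sketch (plain Python, quantities taken from this post):

```python
# Does the updated weight vector classify x correctly?
w_new = [-11, -5, -4.5]
x = [1, 6, 6]
y = -1  # the intended label for x

def sign(v):
    return 1 if v >= 0 else -1

dot = sum(wi * xi for wi, xi in zip(w_new, x))
print(dot)             # -68.0
print(sign(dot) == y)  # True: x is classified correctly after the update
```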