#1
Hello,

I'm trying some things out with the perceptron learning algorithm and I've come across a case where the boundary line moves away from a misclassified point. Can someone shed some light on what I'm doing wrong? Here's what I did:

Choose a point x = <1, 6, 6> (1 is the placeholder). Pretend that it is correctly classified as -1, that is, y(x) = -1. Now, choose a weight vector w = <-10, 1, 1.5>. w misclassifies x: y(x) != sign(w*x)

sign(w*x) = sign(<-10, 1, 1.5>*<1, 6, 6>)
          = sign(-10*1 + 1*6 + 1.5*6)
          = sign(-10 + 6 + 9)
          = sign(5)
          = +1

Such a setup looks like this:

[figure: boundary line for w = <-10, 1, 1.5> with the misclassified point <6, 6>]

Now, apply the update rule with the weight vector and misclassified example (I'm using slightly different notation than the book):

w(t + 1) = w(t) + y(x)*x
         = <-10, 1, 1.5> + (-1)*<1, 6, 6>
         = <-10, 1, 1.5> + <-1, -6, -6>
         = <-10 + -1, 1 + -6, 1.5 + -6>
         = <-11, -5, -4.5>

The new boundary line for w = <-11, -5, -4.5> looks like this:

[figure: boundary line for w = <-11, -5, -4.5>]

But the boundary line has moved away from the misclassified point! Why did this happen? What am I doing wrong?
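For anyone who wants to reproduce the arithmetic above, here is a minimal NumPy sketch of the same computation (the variable names are mine, not from the book):

```python
import numpy as np

# The example from the post: x with a leading 1 as the bias placeholder,
# true label y = -1, and the initial weight vector w.
x = np.array([1.0, 6.0, 6.0])
y = -1
w = np.array([-10.0, 1.0, 1.5])

# The point is misclassified: sign(w . x) = +1, but y = -1.
print(np.sign(w @ x))  # +1.0

# Perceptron update rule: w <- w + y*x
w = w + y * x
print(w)               # [-11.  -5.  -4.5]

# After the update, this point is classified correctly.
print(np.sign(w @ x))  # -1.0
```

Note that sign(w @ x) on the updated weights is -1, which matches y: the regions swapped sides, so the point is in fact fixed by this single update.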
#2
Right after I submitted, I noticed that the red and green regions had switched, so the point is now classified correctly. However, if there were other points near <1, 6, 6> that were correctly classified as +1, they would all become misclassified by this update, so I'm still a little confused by this example. Please help!
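To make that worry concrete, here is a quick numerical check (the nearby point <1, 7, 5> and its +1 label are my own illustrative choice, not part of the original setup):

```python
import numpy as np

w_old = np.array([-10.0, 1.0, 1.5])   # weights before the update
w_new = np.array([-11.0, -5.0, -4.5]) # weights after the update

# A hypothetical point near <1, 6, 6> whose true label is +1.
x2 = np.array([1.0, 7.0, 5.0])

print(np.sign(w_old @ x2))  # +1.0: correctly classified before the update
print(np.sign(w_new @ x2))  # -1.0: misclassified after the update
```

So a single update can indeed break previously correct points; the convergence guarantee (see the next post) is about the whole sequence of updates, not each individual step.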
#3
I have implemented the perceptron and it appears to work anyway. The book says the perceptron is guaranteed to converge on linearly separable data, even if individual updates misclassify other examples.
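For reference, a minimal sketch of what such an implementation might look like (this is my own NumPy version, assuming linearly separable data, not the poster's actual code):

```python
import numpy as np

def pla(X, y, max_iters=10000):
    """Perceptron learning algorithm.

    X: (n, d) array of examples, each with a leading 1 for the bias term.
    y: (n,) array of labels in {-1, +1}.
    Returns a weight vector w with sign(X @ w) == y for every example,
    assuming the data is linearly separable (otherwise it gives up after
    max_iters updates).
    """
    w = np.zeros(X.shape[1])
    for _ in range(max_iters):
        predictions = np.sign(X @ w)
        misclassified = np.flatnonzero(predictions != y)
        if misclassified.size == 0:
            return w  # converged: every example is classified correctly
        # Pick one misclassified example and apply the update rule w <- w + y*x.
        i = misclassified[0]
        w = w + y[i] * X[i]
    return w

# The thread's example as a one-point "data set":
X = np.array([[1.0, 6.0, 6.0]])
y = np.array([-1])
print(pla(X, y))  # a w with sign(w . x) = -1
```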
#4
Hi Josh,

Thanks for your post. I was confused by the perceptron example as well, because I expected that after each update of the weight vector the misclassified point would be corrected immediately. Indeed, this doesn't necessarily happen at once, but (of course) eventually (quoting Radiohead) "everything (ends up) in its right place". Ciao!
Tags: perceptron