04-08-2012, 03:22 AM
GraceLAX (Junior Member — Join Date: Apr 2012, Location: LAX, Posts: 4)
Re: Perceptron Learning Algorithm

Originally Posted by yaser:

That would be f (the target function). The symbol g is reserved for the final hypothesis that the PLA will produce (which should be close to f).

The initial function h must be a perceptron rather than a random assignment of ±1's.

If you start with a zero weight vector, and take sign(0) = 0, pick any point for the first iteration. When you update the weight vector, w + x_n y_n uses the target y_n, so that won't be zero.
Thanks for the clarification. That helped quite a bit.

I think it would be interesting if we could all submit our actual numbers and
you later showed a histogram of what people entered on their homework
solutions. ;-)

I'm shocked by the speed with which PLA converges. I never would have
guessed that until I actually coded it up. This is a very interesting and
intellectually satisfying exercise!
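For anyone else coding this up, here is a minimal sketch of the loop being discussed: start from the zero weight vector (where sign(0) = 0 makes every point misclassified, so any point can be picked first) and apply the update w ← w + y_n x_n on a misclassified point until none remain. The toy setup (a random linear target on [-1, 1]², N = 10 points) is my own assumption, not from the homework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: d = 2, points uniform in [-1, 1]^2,
# labels generated by a random linear target function f.
N, d = 10, 2
w_f = rng.standard_normal(d + 1)                               # weights defining f
X = np.column_stack([np.ones(N), rng.uniform(-1, 1, (N, d))])  # x_0 = 1 bias coordinate
y = np.sign(X @ w_f)                                           # target labels y_n = f(x_n)

# PLA: start from the zero vector. Since sign(0) = 0 never equals y_n = ±1,
# every point is misclassified at the start, so the first pick is arbitrary.
w = np.zeros(d + 1)
iterations = 0
while True:
    misclassified = np.flatnonzero(np.sign(X @ w) != y)
    if misclassified.size == 0:
        break                      # converged: g agrees with f on all training points
    n = rng.choice(misclassified)  # pick any misclassified point
    w = w + y[n] * X[n]            # update uses the target label y_n, so it is never zero
    iterations += 1
```

On linearly separable data like this, the loop is guaranteed to terminate, and (as noted above) it usually does so surprisingly quickly.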

I'm having a hard time deciding how to answer the multiple-choice Q 7-10.
The answer depends on whether I use log or linear scaling.
Aren't CS algorithm efficiencies usually classified on a log scale?
Or am I over-thinking this?

If an algorithm always converges, would Pr(f(x) ≠ g(x)) = 0?
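One way I tried to build intuition on this: convergence only guarantees that g agrees with f on the training points; off the training set they can still disagree. A hedged sketch (same kind of toy setup as before, my own assumption) that estimates Pr(f(x) ≠ g(x)) by checking a large sample of fresh points:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy setup: random linear target f on [-1, 1]^2, N = 10 training points.
N, d = 10, 2
w_f = rng.standard_normal(d + 1)
X = np.column_stack([np.ones(N), rng.uniform(-1, 1, (N, d))])
y = np.sign(X @ w_f)

# Run PLA to convergence on the training set (zero error in-sample).
w = np.zeros(d + 1)
while True:
    bad = np.flatnonzero(np.sign(X @ w) != y)
    if bad.size == 0:
        break
    n = rng.choice(bad)
    w = w + y[n] * X[n]

# g now matches f on every training point, but on fresh points the two
# separating lines generally differ, so the disagreement probability
# is typically small but not zero.
M = 100_000
X_test = np.column_stack([np.ones(M), rng.uniform(-1, 1, (M, d))])
p_disagree = np.mean(np.sign(X_test @ w_f) != np.sign(X_test @ w))
```

With only N = 10 training points the estimated disagreement is usually noticeably above zero; it shrinks as N grows, which is the in-sample vs. out-of-sample distinction the course is driving at.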