Quote:
Originally Posted by eghri
However, there is usually a learning rate associated with the perceptron such as alpha, which would make the update on the intercept:
w_0 = w_0 + alpha * y_i
So you can see here that the algorithm would accommodate noninteger values. In our case, without a learning rate, we just have to hope it converges with an integer value intercept.

Take a look at this thread: http://book.caltech.edu/bookforum/sh...43&postcount=6. With the weights initialized to zero, alpha just scales the whole weight vector uniformly, so it never changes the sign of w·x or the sequence of updates; using the default alpha of 1 has no effect on the number of iterations required to converge.
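A quick way to convince yourself is to run the same data through PLA with two different learning rates and compare iteration counts. This is just a minimal sketch (the function name, data generation, and tie-breaking rule of "first misclassified point" are mine, not from the thread):

```python
import numpy as np

def pla(X, y, alpha=1.0, max_iters=100_000):
    """Perceptron learning algorithm; returns (weights, iterations).

    X is assumed to already include a bias column of ones, so w[0]
    plays the role of the intercept w_0.
    """
    w = np.zeros(X.shape[1])            # starting from w = 0 is what makes
    for t in range(max_iters):          # alpha a pure scale factor
        preds = np.sign(X @ w)
        misclassified = np.where(preds != y)[0]
        if misclassified.size == 0:
            return w, t
        i = misclassified[0]            # deterministic pick: first misclassified point
        w += alpha * y[i] * X[i]        # intercept update is w_0 += alpha * y_i
    return w, max_iters

# Toy linearly separable set: the sign of x1 decides the class.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(20, 2))
labels = np.sign(pts[:, 0])
X = np.hstack([np.ones((20, 1)), pts])

_, iters_1 = pla(X, labels, alpha=1.0)
_, iters_01 = pla(X, labels, alpha=0.1)
assert iters_1 == iters_01              # same iteration count for any alpha > 0
```

By induction, the weight vector after t updates under learning rate alpha is exactly alpha times the weight vector under learning rate 1, so every sign(w·x) check — and hence every update — is identical.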
Quote:
Originally Posted by eghri
I actually had one case myself where it wouldn't converge. To avoid biasing my average results, I'm going to just run the algorithm to 100k iterations and throw out anything that doesn't fully converge.

Assuming the data classes are linearly separable, the PLA is guaranteed to converge. You might want to plot the training data for those cases where it doesn't converge. I had a similar problem with my initial implementation: after reviewing the plot, I realized I had a bug in my update method.
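If it helps, a sanity plot along those lines might look like this. The data-generation scheme (labels taken from a random target line, so the classes are separable by construction) is just an assumption for illustration:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                   # headless-safe backend
import matplotlib.pyplot as plt

# Labels come from a random target line x2 = a*x1 + b, so the two
# classes are linearly separable by construction.
rng = np.random.default_rng(1)
a, b = rng.uniform(-1, 1, size=2)
pts = rng.uniform(-1, 1, size=(50, 2))
labels = np.sign(pts[:, 1] - (a * pts[:, 0] + b))

# Scatter the two classes; if they visibly overlap across any line,
# the bug is in the data generation or the update step, not in PLA.
for cls, marker in [(+1, "o"), (-1, "x")]:
    mask = labels == cls
    plt.scatter(pts[mask, 0], pts[mask, 1], marker=marker, label=f"y = {cls:+d}")
xs = np.linspace(-1, 1, 2)
plt.plot(xs, a * xs + b, "k--", label="target")
plt.legend()
plt.savefig("pla_data.png")
```

If the plot shows a clean split but PLA still loops forever, the bug is almost certainly in the update step rather than the data.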