LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Homework 1 (http://book.caltech.edu/bookforum/forumdisplay.php?f=130)
-   -   Perceptron Learning Algorithm (http://book.caltech.edu/bookforum/showthread.php?t=284)

 jmknapp 04-22-2012 02:40 PM

Re: Perceptron Learning Algorithm

How is the PLA generalized to handling multi-category classification, e.g., the problem of classifying coins?

 htlin 04-22-2012 08:51 PM

Re: Perceptron Learning Algorithm

Quote:
 Originally Posted by jmknapp (Post 1528) How is the PLA generalized to handling multi-category classification, e.g., the problem of classifying coins?
Interesting question. There are general techniques for extending binary classification to a K-category one. For instance, one simple approach (one-versus-all decomposition) is to form K yes/no questions of the form: "Does an example belong to category k or not?" If the machine learns a hypothesis that answers each of these questions correctly, you can combine the answers to form a multi-category prediction. Hope this helps.
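As a concrete sketch of the one-versus-all decomposition described above: train one binary perceptron per category, then predict with the most confident one. This is illustrative only (Python with NumPy; the function names and the max_iter cap are my own assumptions, not from the thread):

```python
import numpy as np

def train_perceptron(X, y, max_iter=1000):
    """Plain PLA on labels y in {-1, +1}; X is assumed to include a bias column."""
    w = np.zeros(X.shape[1])
    for _ in range(max_iter):
        misclassified = np.where(np.sign(X @ w) != y)[0]
        if len(misclassified) == 0:
            break                      # all points classified correctly
        i = misclassified[0]
        w = w + y[i] * X[i]            # the PLA update rule
    return w

def train_one_vs_all(X, y, num_classes):
    """One binary perceptron per class k: 'does the example belong to class k or not?'"""
    return [train_perceptron(X, np.where(y == k, 1, -1)) for k in range(num_classes)]

def predict_one_vs_all(ws, X):
    """Combine the binary answers: pick the class whose perceptron scores highest."""
    scores = np.column_stack([X @ w for w in ws])
    return np.argmax(scores, axis=1)
```

Note that bare PLA only terminates early when each "class k versus the rest" problem is linearly separable; otherwise the max_iter cap stops the loop.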

 htlin 04-22-2012 08:53 PM

Re: Perceptron Learning Algorithm

Quote:
 Originally Posted by zsero (Post 1304) There is a point that I think is not clearly explained. It took me a long time to realize that it's not true that exactly one point gets corrected in each step. The true statement is that at most one point gets corrected in each step.
The "at most" part may also be tricky, because you may be using point A for correction but "happens" to correct B and C by moving in a good direction. :)

 htlin 04-22-2012 09:01 PM

Re: Perceptron Learning Algorithm

Quote:
 Originally Posted by shockwavephysics (Post 1216) I have been trying to figure out why updating using w -> w + y_n * x_n works at all. I looked up the relevant section in the text, and there is a series of questions for the student that hint at the answer. I followed that logic to its conclusion, and it does seem to show that updating in that way will always give a w that is better (for the misclassified point) than the previous w. However, I cannot figure out how one comes up with this formulation in the first place. Is there a reference to a derivation I can read?
You can read Problem 1.3 of the recommended textbook, which guides you through a simple proof. Roughly speaking, the proof says the PLA weights get more aligned with the underlying "target weights" after each update. :cool:
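One step of that argument can also be checked numerically: after the update w -> w + y_n * x_n on a misclassified point, the signed score y_n * (w . x_n) increases by ||x_n||^2, so the new w is always "less wrong" on that point. A quick sketch (Python with NumPy; the random example is my own, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)
x = rng.normal(size=3)
y = -np.sign(w @ x)            # choose the label so that (x, y) is misclassified by w

w_new = w + y * x              # the PLA update
# y * (w_new @ x) = y * (w @ x) + y**2 * (x @ x) = y * (w @ x) + ||x||^2,
# so the signed score on the misclassified point strictly increases:
assert y * (w_new @ x) > y * (w @ x)
```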

 jmknapp 04-23-2012 07:21 PM

Re: Perceptron Learning Algorithm

Thanks--the textbook should be arriving from Amazon tomorrow. :)

 ken47 10-10-2012 10:51 PM

Re: Perceptron Learning Algorithm

I adapted an implementation of the single-layer perceptron for Octave (open-source Matlab) that I found online. It runs incredibly slowly compared to my nodeJS implementation, but you can watch the progress graphically, which I think is really cool. Gist attached.

http://gist.github.com/3870213.git

 foodcomazzz 01-12-2013 03:18 AM

Re: Perceptron Learning Algorithm

Thanks so much prof. This thread is very detailed and useful!

All times are GMT -7. The time now is 02:40 AM.