LFD Book Forum Q10 Clarification required
#1
02-10-2013, 02:18 AM
 ripande Senior Member Join Date: Jan 2013 Posts: 71
Q10 Clarification required

I am not exactly sure of the intent of the question. So would like some clarification.

How do you define the perceptron learning algorithm? Is it an algorithm that recalculates weights to classify training examples one point at a time?

I am asking this because, to me, the perceptron learning algorithm has two properties:

1. It recalculates weights to classify training examples one point at a time.
2. It adjusts weights if a point is misclassified by adding w*x to the weight.

Now, if I change the mechanism of weight adjustment to SGD, does the algorithm still remain a perceptron?

In short, what is a formal definition of the perceptron algorithm?
#2
02-10-2013, 02:45 AM
 butterscotch Caltech Join Date: Jan 2013 Posts: 43
Re: Q10 Clarification required

Quote:
 2. adjust weights if a point is misclassified by adding w*x to the weight.
-> I think you might have typo-ed; the perceptron update is w(t+1) = w(t) + y(t)*x(t).

Quote:
 1. Recalculate weights to classify training example one point
With the updated weights, recalculate y_n for each x_n in the training set, and determine which ones are misclassified.

You are looking for an error function that will essentially make the SGD behave like a perceptron in updating the weights.
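To spell out what "make the SGD behave like a perceptron" means, here are the two update rules side by side (standard notation; eta is the SGD learning rate):

```latex
% SGD step on a single randomly chosen training point (x_n, y_n):
\mathbf{w}(t+1) = \mathbf{w}(t) - \eta\,\nabla e_n(\mathbf{w}(t))

% Perceptron (PLA) step on a misclassified point, i.e. sign(w^T x_n) != y_n:
\mathbf{w}(t+1) = \mathbf{w}(t) + y_n\,\mathbf{x}_n
```

The question asks for an e_n whose gradient makes the first rule reproduce the second (with eta = 1) and leave w unchanged on correctly classified points.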
#3
02-10-2013, 03:38 AM
 ripande Senior Member Join Date: Jan 2013 Posts: 71
Re: Q10 Clarification required

The perceptron algorithm updates the weights only if the point is misclassified. The same should be true using SGD, correct?
#4
02-10-2013, 08:22 AM
 colinpriest Junior Member Join Date: Jan 2013 Posts: 9
Re: Q10 Clarification required

I think that the question wants us to choose which error measure, when implemented using the SGD method, would produce exactly the same change to the weights as the perceptron learning algorithm taught back at the start of the course.
So if there is no error for the selected data point, then there is no change to the weights. If there is an error for the selected data point, then the weights are increased by y_n * x_n.

Is that the understanding of others?
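This equivalence can be checked numerically. The sketch below assumes a hinge-style candidate error e_n(w) = max(0, -y_n w·x_n) (an assumption for illustration, not a statement of the homework answer) and verifies that one SGD step with eta = 1 matches one PLA step on a misclassified point:

```python
import numpy as np

# A misclassified training point: sign(w . x) != y
w = np.array([0.5, -1.0, 0.3])
x = np.array([1.0, 2.0, 1.5])   # x[0] = 1 is the bias coordinate
y = 1.0
assert np.sign(w @ x) != y      # w . x = -1.05, so the point is misclassified

# PLA update on a misclassified point: w <- w + y * x
w_pla = w + y * x

def grad_e(w, x, y):
    """Gradient of the candidate error e(w) = max(0, -y * w.x):
    -y * x when the point is misclassified (y * w.x < 0), else 0."""
    return -y * x if y * (w @ x) < 0 else np.zeros_like(x)

# SGD update with learning rate eta = 1: w <- w - eta * grad
eta = 1.0
w_sgd = w - eta * grad_e(w, x, y)

print(np.allclose(w_pla, w_sgd))  # True: the two updates coincide
```

On a correctly classified point the candidate gradient is zero, so SGD leaves w unchanged there as well, matching the perceptron's behaviour.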
#5
02-10-2013, 11:13 AM
 yaser Caltech Join Date: Aug 2009 Location: Pasadena, California, USA Posts: 1,478
Re: Q10 Clarification required

Quote:
 Originally Posted by colinpriest I think that the question wants us to choose which error measure, when implemented using the SGD method, would produce exactly the same change to the weights as the perceptron learning algorithm taught back at the start of the course. So if there is no error for the selected data point, then there is no change to the weights. If there is an error for the selected data point, then the weights are increased by y_n * x_n. Is that the understanding of others?
This is the correct understanding.
__________________
Where everyone thinks alike, no one thinks very much
#6
05-06-2013, 02:50 PM
 Michael Reach Senior Member Join Date: Apr 2013 Location: Baltimore, Maryland, USA Posts: 71
Re: Q10 Clarification required

Ah - that makes sense. I had just looked for any old error function that would solve the perceptron classification problem (one of them seemed like it would obviously work), and got the wrong answer.
#7
05-08-2013, 07:05 AM
 jlaurentum Member Join Date: Apr 2013 Location: Venezuela Posts: 41
Re: Q10 Clarification required

Hello all:

I answered this question incorrectly. I think my confusion arises from considering that the error function must be differentiable, because we are after all taking the gradient vector. In reading the following:

Quote:
 So if there is no error for the selected data point, then there is no change to the weights. If there is an error for the selected data point, then the weights are increased by yn * xn
I realize that it contains the answer if you read between the lines, but what if the implied answer is not differentiable?
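On the differentiability worry: assuming the candidate is the hinge-style error e_n(w) = max(0, -y_n w^T x_n) (an assumption here, in line with the update rule quoted above), it is differentiable everywhere except on the boundary y_n w^T x_n = 0:

```latex
e_n(\mathbf{w}) = \max\bigl(0,\; -y_n \mathbf{w}^{\mathsf T}\mathbf{x}_n\bigr),
\qquad
\nabla e_n(\mathbf{w}) =
\begin{cases}
  -y_n \mathbf{x}_n & \text{if } y_n \mathbf{w}^{\mathsf T}\mathbf{x}_n < 0 \quad\text{(misclassified)}\\[4pt]
  \mathbf{0}        & \text{if } y_n \mathbf{w}^{\mathsf T}\mathbf{x}_n > 0 \quad\text{(correct)}
\end{cases}
```

The gradient is undefined only on the measure-zero set y_n w^T x_n = 0; in practice either value can be used there (a subgradient), so SGD remains well defined almost everywhere.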
#8
05-08-2013, 04:09 PM
 catherine Member Join Date: Apr 2013 Posts: 18
Re: Q10 Clarification required

I had the same doubt, though I still chose (e), as it was the only option resulting in the expected behaviour, i.e., no update to the weights when there is no error.
