LFD Book Forum  

LFD Book Forum > Course Discussions > Online LFD course > Homework 5
  #1  
Old 02-10-2013, 02:18 AM
ripande
Senior Member
 
Join Date: Jan 2013
Posts: 71
Q10 Clarification required

I am not exactly sure of the intent of the question, so I would like some clarification.

How do you define the perceptron learning algorithm? Is it an algorithm that recalculates weights to classify the training examples one point at a time?

I am asking this because, to me, the perceptron learning algorithm has two properties:

1. Recalculate weights to classify training example one point
2. adjust weights if a point is misclassified by adding w*x to the weight

Now if I change the mechanism of weight adjustment to SGD, does the algorithm still remain a perceptron?

In short, what is a formal definition of the perceptron algorithm?
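For reference, the perceptron learning algorithm as discussed in this thread can be sketched as follows. This is a minimal illustration, not the course's official code; the function and variable names are my own.

```python
import numpy as np

def pla(X, y, max_iters=1000):
    """Perceptron learning algorithm (sketch): repeatedly pick a
    misclassified point n and update w <- w + y[n] * X[n]."""
    w = np.zeros(X.shape[1])
    for _ in range(max_iters):
        # A point is misclassified when y[n] * (w . X[n]) <= 0.
        mis = [n for n in range(len(y)) if y[n] * np.dot(w, X[n]) <= 0]
        if not mis:
            break          # all points classified correctly: done
        n = mis[0]         # any misclassified point may be chosen
        w = w + y[n] * X[n]
    return w
```

Note that the weights change only when a misclassified point is picked, which is the property at issue in this thread.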
  #2  
Old 02-10-2013, 02:45 AM
butterscotch
Caltech
 
Join Date: Jan 2013
Posts: 43
Re: Q10 Clarification required

Quote:
2. adjust weights if a point is misclassified by adding w*x to the weight.
-> I think you might have typo-ed; the perceptron update is w(t+1) = w(t) + y(t)*x(t).

Quote:
1. Recalculate weights to classify training example one point
With the updated weights, recalculate y_n for each x_n in the training set, and determine which ones are misclassified.

You are looking for an error function that will essentially make the SGD behave like a perceptron in updating the weights.
  #3  
Old 02-10-2013, 03:38 AM
ripande
Senior Member
 
Join Date: Jan 2013
Posts: 71
Re: Q10 Clarification required

Perceptron algorithm updates the weight only if the point is misclassified. The same should be true using SGD, correct ?
  #4  
Old 02-10-2013, 08:22 AM
colinpriest
Junior Member
 
Join Date: Jan 2013
Posts: 9
Re: Q10 Clarification required

I think that the question wants us to choose which error measure, when implemented using the SGD method, would produce exactly the same change to weights as the perceptron learning algorithm taught back at the start of the course.
So if there is no error for the selected data point, then there is no change to the weights. If there is an error for the selected data point, then the weights are increased by yn * xn

Is that the understanding of others?
  #5  
Old 02-10-2013, 11:13 AM
yaser
Caltech
 
Join Date: Aug 2009
Location: Pasadena, California, USA
Posts: 1,477
Re: Q10 Clarification required

Quote:
Originally Posted by colinpriest
I think that the question wants us to choose which error measure, when implemented using the SGD method, would produce exactly the same change to weights as the perceptron learning algorithm taught back at the start of the course.
So if there is no error for the selected data point, then there is no change to the weights. If there is an error for the selected data point, then the weights are increased by yn * xn

Is that the understanding of others?
This is the correct understanding.
__________________
Where everyone thinks alike, no one thinks very much
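Concretely, if the point-wise error is taken to be e_n(w) = max(0, -y_n w·x_n) (this specific form is my reading of the discussion, not stated explicitly in the thread), then the gradient is -y_n x_n on a misclassified point and 0 otherwise, so one SGD step with learning rate 1 coincides exactly with the PLA update. A quick numerical sketch:

```python
import numpy as np

def sgd_step(w, x, y, eta=1.0):
    """One SGD step on e(w) = max(0, -y * w.x).
    The gradient is -y*x when y * w.x <= 0 (misclassified), else 0."""
    if y * np.dot(w, x) <= 0:
        return w + eta * y * x   # w - eta * (-y*x)
    return w                     # zero gradient: no change

def pla_step(w, x, y):
    """Classic perceptron update: move only on a misclassified point."""
    if y * np.dot(w, x) <= 0:
        return w + y * x
    return w
```

With eta = 1 the two functions return identical weights for every point, misclassified or not.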
  #6  
Old 05-06-2013, 02:50 PM
Michael Reach
Senior Member
 
Join Date: Apr 2013
Location: Baltimore, Maryland, USA
Posts: 71
Re: Q10 Clarification required

Ah - that makes sense. I had just looked for any old error function that would solve the perceptron classification problem (one of them seemed like it would obviously work), and got the wrong answer.
  #7  
Old 05-08-2013, 07:05 AM
jlaurentum
Member
 
Join Date: Apr 2013
Location: Venezuela
Posts: 41
Re: Q10 Clarification required

Hello all:

I answered this question incorrectly. I think my confusion arises from considering that the error function must be differentiable, because we are after all taking the gradient vector. In reading the following:

Quote:
So if there is no error for the selected data point, then there is no change to the weights. If there is an error for the selected data point, then the weights are increased by yn * xn
I realize that it contains the answer if you read between the lines, but what if the implied answer is not differentiable?
  #8  
Old 05-08-2013, 04:09 PM
catherine
Member
 
Join Date: Apr 2013
Posts: 18
Re: Q10 Clarification required

I had the same doubt, though I still chose e as it was the only option resulting in the expected behaviour, i.e., not updating the weights when there is no error.
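On the differentiability concern: an error of the form e(w) = max(0, -y w·x) (my reading of the candidate being discussed) is differentiable everywhere except on the boundary y w·x = 0, where either one-sided gradient may be used; the PLA convention corresponds to taking the gradient -y x there. A finite-difference sketch, checked away from the boundary, with all names my own:

```python
import numpy as np

def e(w, x, y):
    """Candidate point-wise error: max(0, -y * w.x)."""
    return max(0.0, -y * np.dot(w, x))

def piecewise_grad(w, x, y):
    """Analytic (sub)gradient: -y*x if misclassified, else 0."""
    return -y * x if y * np.dot(w, x) <= 0 else np.zeros_like(w)

def numeric_grad(w, x, y, h=1e-6):
    """Central-difference gradient of e at w (valid off the boundary y*w.x = 0)."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        wp, wm = w.copy(), w.copy()
        wp[i] += h
        wm[i] -= h
        g[i] = (e(wp, x, y) - e(wm, x, y)) / (2 * h)
    return g
```

Away from the kink the numerical gradient agrees with the piecewise formula, which is all SGD needs in practice.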

The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.