LFD Book Forum: Basic logistic regression question

#1
05-04-2013, 05:09 AM
 arcticblue Member Join Date: Apr 2013 Posts: 17
Basic logistic regression question

From lecture 9, slide 23, there is an algorithm for implementing logistic regression. Step 3 explains how to compute the gradient. Is the E_in value actually a vector or a single number? If it were a single number, then the update would be the same for every component of the weight vector, so it seems like E_in must be a vector. Is my understanding correct?

And if it's a vector, then I'm a little unclear on how to compute the values. Each training point has two values, x1 and x2, and an outcome y. So to calculate E_in, do I just use x1 and weight 1 to find the first value, and then x2 and weight 2 to find the second value?

Hopefully the above makes sense, I seem to be struggling with something that seems like it should be pretty simple.
#2
05-04-2013, 11:10 AM
 yaser Caltech Join Date: Aug 2009 Location: Pasadena, California, USA Posts: 1,478
Re: Basic logistic regression question

Quote:
 Originally Posted by arcticblue From lecture 9, slide 23, there is an algorithm for implementing logistic regression. Step 3 explains how to compute the gradient. Is the E_in value actually a vector or a single number? If it were a single number, then the update would be the same for every component of the weight vector, so it seems like E_in must be a vector. Is my understanding correct?
E_in is a scalar quantity whose value depends on the entire vector of weights w. Step 3 computes the gradient, which is the vector of derivatives of E_in with respect to each of those weights, so each component of the gradient vector is the derivative with respect to a different weight.
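In code, the gradient computation in step 3 might look like this (a NumPy sketch; the function name and data layout are my own, and X is assumed to carry a leading column of 1s for the bias weight):

```python
import numpy as np

def gradient_Ein(w, X, y):
    """Gradient of E_in(w) = (1/N) sum_n ln(1 + exp(-y_n w.x_n)).

    X: (N, d+1) inputs with a leading column of 1s; y: (N,) labels in {-1, +1}.
    Returns a vector with one component per weight:
        -(1/N) sum_n y_n x_n / (1 + exp(y_n w.x_n))
    """
    s = y * (X @ w)  # signal y_n * (w . x_n), one value per example
    return -(X * (y / (1 + np.exp(s)))[:, None]).mean(axis=0)
```

Note that E_in itself is a single number, but its gradient has as many components as there are weights.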
__________________
Where everyone thinks alike, no one thinks very much
#3
05-04-2013, 09:27 PM
 arcticblue Member Join Date: Apr 2013 Posts: 17
Re: Basic logistic regression question

Okay, I think I see how that works, but I'm still struggling to understand Q8. In the question I set the weights to 0. Then, the first time through the loop, I calculate the gradient of E_in using the formula in step 3. Because w is all zeros, the denominator ends up as 1 + e^0 = 2. The numerator can be at most +/-1, so the largest each component of the gradient can be is +/-0.5.

Then in step 4 I update the weights
w(1) = (0, 0, 0) - 0.01 (0.5, 0.5, 0.5)
w(1) = (-0.005, -0.005, -0.005)

Now the question states to stop the algorithm when ||w(t-1) - w(t)|| < 0.01. So:
sqrt(0.005^2 + 0.005^2 + 0.005^2) ≈ 0.0087
So based on the values above, the algorithm will stop after the first iteration, because the change in weights is < 0.01.
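As a quick numeric check of the arithmetic above (taking the worst-case gradient component of 0.5, which is my assumption, not from the slides):

```python
import numpy as np

# First-iteration arithmetic: w(0) = 0, learning rate 0.01,
# worst-case gradient of 0.5 in every component.
w0 = np.zeros(3)
grad = np.array([0.5, 0.5, 0.5])
w1 = w0 - 0.01 * grad             # update step: (-0.005, -0.005, -0.005)
change = np.linalg.norm(w0 - w1)  # 0.005 * sqrt(3) ~ 0.00866, indeed < 0.01
```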

Have I misunderstood the formula for the gradient of E_in? Or am I calculating the weight change incorrectly? I've tried batch gradient descent and see the above result (I have 100 data points, but the weight change still ends up less than 0.01). I've also tried stochastic gradient descent and have similar problems.

I've watched lecture 9 a couple of times now and seem to understand how the algorithm works but I guess my understanding isn't complete.

Any suggestions would be most appreciated.
#4
05-04-2013, 10:38 PM
 yaser Caltech Join Date: Aug 2009 Location: Pasadena, California, USA Posts: 1,478
Re: Basic logistic regression question

Quote:
 Originally Posted by arcticblue Now the question states to stop the algorithm when ||w(t-1) - w(t)|| < 0.01.
The weight vectors w(t-1) and w(t) are compared at the end of an epoch, per the preamble to Question 8, so you need to run through all the examples first to get to the end of the epoch before checking the stopping criterion.
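A minimal sketch of that per-epoch check in stochastic gradient descent (the data layout, learning rate, and names here are illustrative, not the course's reference implementation):

```python
import numpy as np

def sgd_logistic(X, y, eta=0.01, tol=0.01, max_epochs=10000, seed=0):
    """SGD for logistic regression, stopping on per-epoch weight change.

    X: (N, d+1) inputs with a leading column of 1s; y: (N,) labels in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    w = np.zeros(d)
    for _ in range(max_epochs):
        w_prev = w.copy()
        for n in rng.permutation(N):  # one epoch: each example once, random order
            s = y[n] * (X[n] @ w)
            w = w + eta * y[n] * X[n] / (1 + np.exp(s))  # step on one example
        # compare weights only at the END of the epoch, not after every step
        if np.linalg.norm(w_prev - w) < tol:
            break
    return w
```

With the comparison done per epoch rather than per example, the accumulated change over a full pass through the data is what must fall below 0.01, so the algorithm does not stop after one tiny step.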

