LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Homework 6 (http://book.caltech.edu/bookforum/forumdisplay.php?f=135)
-   -   Question on regularization for logistic regression (http://book.caltech.edu/bookforum/showthread.php?t=3996)

melipone 02-15-2013 09:59 AM

Question on regularization for logistic regression
 
We have done regularization for linear regression. How do we get the gradients with regularization for logistic regression?

yaser 02-15-2013 10:43 AM

Re: Question on regularization for logistic regression
 
Quote:

Originally Posted by melipone (Post 9397)
We have done regularization for linear regression. How do we get the gradients with regularization for logistic regression?

For linear regression, both the unregularized and the (weight-decay) regularized cases had closed-form solutions. For logistic regression, both are handled using an iterative method like gradient descent. You write down the error measure, add the regularization term, and then carry out gradient descent (with respect to {\bf w}) on this augmented error. The gradient will be the sum of the gradient of the original error term given in the lecture and the gradient of the weight-decay term, which is quadratic in {\bf w} (hence its gradient is linear in {\bf w}).
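
For concreteness, here is a minimal sketch of this in Python (the names and data format are illustrative, not from the lectures; it assumes labels y_n \in \{-1,+1\} and a weight-decay term \frac{\lambda}{2N}{\bf w}^T{\bf w}):

Code:

import numpy as np

def augmented_error_gradient(w, X, y, lam):
    # Gradient of the augmented error
    #   E_aug(w) = (1/N) sum_n ln(1 + exp(-y_n w^T x_n)) + (lam / (2N)) w^T w
    N = X.shape[0]
    # gradient of the logistic regression error from the lecture
    grad_ein = -(y[:, None] * X / (1.0 + np.exp(y * (X @ w)))[:, None]).mean(axis=0)
    # gradient of the weight-decay term: linear in w
    grad_reg = (lam / N) * w
    return grad_ein + grad_reg

def gradient_descent(X, y, lam, eta=0.1, n_iters=1000):
    # X: N x d matrix of inputs, y: length-N vector of +/-1 labels
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        w = w - eta * augmented_error_gradient(w, X, y, lam)
    return w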

melipone 02-16-2013 01:00 PM

Re: Question on regularization for logistic regression
 
Thanks. Okay, so if I take the derivative of \frac{\lambda}{2N}w^Tw for the regularization term, I just add \frac{\lambda}{N}w to the gradient in the update of each weight in stochastic gradient descent. Is that correct?

I was also looking into L1 and L2 regularization. That would be L2 regularization above. My understanding is that L1 regularization would add a penalty of fixed magnitude to the gradient, regardless of the value of the weight itself. Is my understanding correct?

TIA

yaser 02-16-2013 10:49 PM

Re: Question on regularization for logistic regression
 
Quote:

Originally Posted by melipone (Post 9411)
Thanks. Okay, so if I take the derivative of \frac{\lambda}{2N}w^Tw for the regularization term, I just add \frac{\lambda}{N}w to the gradient in the update of each weight in stochastic gradient descent. Is that correct?

I was also looking into L1 and L2 regularization. That would be L2 regularization above. My understanding is that L1 regularization would add a penalty of fixed magnitude to the gradient, regardless of the value of the weight itself. Is my understanding correct?

TIA

Indeed, you add the linear term to get the new gradient. L2 and L1 define the regularization term based on the squared values and the absolute values of the weights, respectively. What is added to the gradient is the derivative of that term.
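
As an illustration only (the function name and data format are made up, and the \frac{\lambda}{N} scaling follows the posts above), a single stochastic gradient descent step with either penalty could look like this:

Code:

import numpy as np

def sgd_step(w, x_n, y_n, eta, lam, N, penalty="l2"):
    # gradient of the single-example logistic error ln(1 + exp(-y_n w^T x_n))
    grad = -(y_n * x_n) / (1.0 + np.exp(y_n * np.dot(w, x_n)))
    if penalty == "l2":
        # derivative of (lam / (2N)) w^T w: proportional to the weights themselves
        grad = grad + (lam / N) * w
    else:  # "l1"
        # derivative of (lam / N) sum |w_i|: fixed magnitude, only the sign of w matters
        grad = grad + (lam / N) * np.sign(w)
    return w - eta * grad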

