Quote:
Originally Posted by melipone
We have done regularization for linear regression. How do we get the gradients with regularization for logistic regression?

For linear regression, both the unregularized and the (weight decay) regularized cases had closed-form solutions. For logistic regression, both are handled using an iterative method like gradient descent. You write down the error measure and add the regularization term, then carry out gradient descent (with respect to w) on this augmented error. The gradient will be the sum of the gradient of the original error term given in the lecture and the gradient of the weight-decay term, which is quadratic in w (hence its gradient is linear in w).
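As a minimal sketch of this, assuming the cross-entropy error E_in(w) = (1/N) Σ ln(1 + exp(-y_n wᵀx_n)) from the lecture and a weight-decay term (λ/N) wᵀw (function names and hyperparameter values here are illustrative, not from the original post):

```python
import numpy as np

def augmented_gradient(w, X, y, lam):
    """Gradient of the augmented error: cross-entropy plus (lam/N) * w.w.

    X is N x d (rows are data points), y holds labels in {-1, +1}.
    """
    N = len(y)
    # Gradient of the original cross-entropy error from the lecture:
    # -(1/N) * sum_n  y_n x_n / (1 + exp(y_n w.x_n))
    g = -(1.0 / N) * (y[:, None] * X / (1 + np.exp(y * (X @ w)))[:, None]).sum(axis=0)
    # Gradient of the weight-decay term (lam/N) * w.w -- quadratic in w,
    # so its gradient (2*lam/N) * w is linear in w.
    g += (2.0 * lam / N) * w
    return g

def gradient_descent(X, y, lam=0.01, eta=0.5, iters=2000):
    """Plain fixed-step gradient descent on the augmented error."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w -= eta * augmented_gradient(w, X, y, lam)
    return w
```

The only change from the unregularized case is the one extra line adding (2λ/N) w to the gradient; everything else is the same descent loop.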