Quote:
Originally Posted by melipone
Thanks. Okay, so if I take the derivative of the squared-weight term (λ/2)·w² for the regularization, I just add λw to the gradient in the update of each weight in stochastic gradient descent. Is that correct?
I was also looking into L1 and L2 regularization. That would be L2 regularization above. My understanding is that L1 regularization would just add a penalty term to the gradient regardless of the weight itself. Is my understanding correct?
TIA
Indeed, you add the linear term λw to get the new gradient. L2 and L1 define the regularization penalty as the squared value and the absolute value of the weights, respectively, and what is added to the gradient is the derivative of that penalty: λw for L2, and λ·sign(w) for L1. So the L1 term has constant magnitude regardless of how large the weight is, but its sign still depends on the sign of the weight.
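A minimal sketch of how that looks in a plain SGD update (the names sgd_step, lam, lr, and the toy quadratic loss are illustrative, not from the thread):

```python
import numpy as np

def sgd_step(w, grad_loss, lr=0.01, lam=0.001, penalty="l2"):
    """One SGD update on weight vector w with the regularization gradient added in."""
    if penalty == "l2":
        # derivative of (lam/2) * w^2 is lam * w  -- the "linear term" above
        reg_grad = lam * w
    elif penalty == "l1":
        # derivative of lam * |w| is lam * sign(w); np.sign(0) = 0 handles the kink at zero
        reg_grad = lam * np.sign(w)
    else:
        reg_grad = 0.0
    return w - lr * (grad_loss + reg_grad)

# toy usage: one step on the data loss 0.5 * w^2, whose gradient is just w
w = np.array([2.0, -3.0, 0.5])
grad = w
w = sgd_step(w, grad, penalty="l2")
print(w)
```

Swapping penalty="l1" shows the difference in practice: the L2 update shrinks large weights proportionally more, while the L1 update subtracts the same lr*lam from every nonzero weight's magnitude, which is why L1 tends to push small weights exactly to zero.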