Thanks. Okay, so if I take the derivative of the L2 penalty term

    (λ/2) Σᵢ wᵢ²

for the regularization, I just add

    λ wᵢ

to the gradient in the update of each weight in stochastic gradient descent. Is that correct?
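To make sure I'm describing the update correctly, here's a minimal sketch of what I mean (names like `sgd_step_l2`, `lr`, and `lam` are just made up for illustration; `lam` is the regularization strength):

```python
import numpy as np

def sgd_step_l2(w, grad_loss, lr=0.01, lam=0.001):
    # The L2 penalty (lam/2) * sum(w_i^2) contributes lam * w_i to the
    # gradient of each weight, so the penalty gradient is added to the
    # data-loss gradient before the usual SGD step.
    return w - lr * (grad_loss + lam * w)

w = np.array([1.0, -2.0, 0.5])
g = np.array([0.1, 0.1, 0.1])   # gradient of the data loss alone
w_new = sgd_step_l2(w, g)
```

Note the penalty pulls larger weights harder toward zero, since its gradient is proportional to the weight itself.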

I was also looking into L1 and L2 regularization. That would be L2 regularization above. My understanding is that L1 regularization would add a constant-magnitude penalty term to the gradient, depending only on the sign of the weight and not on its size. Is my understanding correct?
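In other words, something like this sketch (again, `sgd_step_l1` and its parameters are hypothetical names of mine, not from any library):

```python
import numpy as np

def sgd_step_l1(w, grad_loss, lr=0.01, lam=0.001):
    # The L1 penalty lam * sum(|w_i|) contributes lam * sign(w_i): a
    # fixed-magnitude push toward zero, independent of the weight's size.
    # np.sign returns 0 at w_i == 0, a common subgradient choice there.
    return w - lr * (grad_loss + lam * np.sign(w))

w_new_l1 = sgd_step_l1(np.array([1.0, -2.0]), np.array([0.1, 0.1]))
```

So both weights get the same-size penalty push here, unlike the L2 case where the larger weight is penalized more.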

TIA