Re: Zeroth regularization coefficient
This is an interesting question. It is usually right to regularize the zeroth-order term. Regularization always introduces a bias, whether or not you regularize the zeroth-order term; the benefit is that it reduces the variance by more than the bias it adds. Intuitively, shrinking the constant term does not look like it is combating complexity, but complexity is just deterministic noise, and regularization combats both deterministic and stochastic noise.
The following exercise will hopefully convince you. Consider random draws from a Gaussian distribution whose mean you wish to estimate (a constant target function). It is well known that the minimum-variance unbiased estimator is the sample mean. However, that is not the estimator with the minimum expected squared error (recall that expected squared error is bias² + variance). Using zeroth-order regularization, you can show that with the right amount of regularization one gets a better expected squared error, by significantly reducing the variance at the expense of a little bias. That the amount of regularization you need depends on N, the true mean, and the variance of the Gaussian distribution should not be surprising, since we know that regularization only has a job because of the noise.
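A minimal simulation of this exercise, as a sketch (the parameter values mu, sigma, N and the choice of regularization convention below are my own assumptions, not from the post): regularized least squares on a constant hypothesis h minimizes sum_i (x_i - h)^2 + lambda*N*h^2, which gives the shrunken estimate h = xbar / (1 + lambda); for this convention the MSE-optimal lambda works out to sigma^2 / (N * mu^2), which indeed depends on N, the true mean, and the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example values: true mean, standard deviation, sample size.
mu, sigma, N = 1.0, 2.0, 10
trials = 100_000

# Minimizing sum((x_i - h)^2) + lam * N * h^2 over constant h gives
# h = xbar / (1 + lam).  For this convention, the lambda that minimizes
# the expected squared error is sigma^2 / (N * mu^2).
lam = sigma**2 / (N * mu**2)

# Simulate many datasets of size N and compare the two estimators.
X = rng.normal(mu, sigma, size=(trials, N))
xbar = X.mean(axis=1)          # unbiased sample mean
shrunk = xbar / (1 + lam)      # regularized (shrunken) estimate

mse_plain = np.mean((xbar - mu) ** 2)    # ~ sigma^2 / N  (pure variance)
mse_reg = np.mean((shrunk - mu) ** 2)    # small bias, much smaller variance

print(f"sample mean MSE: {mse_plain:.4f}")
print(f"regularized MSE: {mse_reg:.4f}")
```

With these numbers the sample mean's MSE is about sigma^2/N = 0.4, while the shrunken estimator trades a little bias for a large variance reduction and comes out clearly ahead.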

The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.