LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 4 - Overfitting (http://book.caltech.edu/bookforum/forumdisplay.php?f=111)
-   -   Zeroth regularization coefficient (http://book.caltech.edu/bookforum/showthread.php?t=4280)

Elroch 05-10-2013 05:21 AM

Zeroth regularization coefficient
 
There is clearly a lot of scope for varying the form of the cost function, as demonstrated by Tikhonov's work (with which I have only a passing familiarity so far; it is neat that this is the same Tikhonov who has separation axioms named after him in topology :) ).

But my question here is a much simpler one. Elsewhere I have seen a one-parameter regularization similar to the one we have used for linear and polynomial hypotheses, with the single exception that the zeroth term is omitted from the penalty, so only the higher-order parameters are penalised. The question is whether this is an improvement in some quantifiable sense. The mathematics is only marginally less simple: the term \lambda I is replaced by the same matrix with its top-left entry set to zero.
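To make the two variants concrete, here is a minimal sketch in code, assuming the standard regularized least-squares solution w = (Z^T Z + \lambda \Gamma)^{-1} Z^T y, where \Gamma is either the identity or the identity with its top-left entry zeroed; the function and variable names are just illustrative.

Code:

import numpy as np

def regularized_fit(Z, y, lam, penalize_zeroth=True):
    """Regularized least squares: w = (Z'Z + lam * Gamma)^{-1} Z'y.

    Z is the N x (d+1) matrix of transformed inputs whose first column
    is all ones. Gamma is the identity, with its top-left entry zeroed
    when the zeroth (constant) term is left unpenalised.
    """
    Gamma = np.eye(Z.shape[1])
    if not penalize_zeroth:
        Gamma[0, 0] = 0.0
    return np.linalg.solve(Z.T @ Z + lam * Gamma, Z.T @ y)

# Toy data: a roughly constant target observed with noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=20)
y = 0.5 + 0.1 * rng.standard_normal(20)

# Hypothesis set from the example further down: h(x) = theta_0 + theta_2 x^2.
Z = np.column_stack([np.ones_like(x), x**2])

w_both = regularized_fit(Z, y, lam=1.0, penalize_zeroth=True)   # penalise theta_0 and theta_2
w_high = regularized_fit(Z, y, lam=1.0, penalize_zeroth=False)  # penalise theta_2 only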

As the simplest example, if our hypothesis set consists just of all constant functions, a single regularization parameter has the effect of replacing the equation:

\theta_0 = \overline { \{y^{(i)}\}_{i\leq N}}

by the equation

\theta_0 = \overline {\left\{{y^{(i)} \over{1 + \lambda}}\right\}_{i\leq N}}

This systematically underestimates the absolute value of the mean, in a way that is interesting but not easily justified.
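For concreteness, this shrinkage is just the minimizer of the augmented error, assuming the penalty \lambda\theta_0^2 is added to the in-sample mean squared error:

E_{\text{aug}}(\theta_0) = \frac{1}{N}\sum_{i=1}^{N}\left(\theta_0 - y^{(i)}\right)^2 + \lambda\,\theta_0^2, \qquad \frac{dE_{\text{aug}}}{d\theta_0} = 0 \;\Rightarrow\; \theta_0 = \frac{1}{1+\lambda}\cdot\frac{1}{N}\sum_{i=1}^{N} y^{(i)}.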

The question is whether regularising the zeroth coefficient makes any more sense when the hypothesis is more complex.

E.g., suppose the hypothesis set is all quadratics without a linear term, i.e. hypotheses of the form h(x) = \theta_0 + \theta_2 x^2.

Does it make sense to penalise just the \theta_2 term here? [If \theta_0 is penalised, a constant function will again be inaccurately modelled]

magdon 02-07-2014 04:50 AM

Re: Zeroth regularization coefficient
 
This is an interesting question. It is usually right to regularize the zeroth-order term. In general, regularization introduces a bias (whether you regularize the zeroth-order term or not); the benefit comes when the reduction in variance outweighs the increase in bias. Intuitively it may not look as though shrinking the constant term is combating complexity, but complexity is just deterministic noise, and regularization combats both deterministic and stochastic noise.

The following exercise will hopefully convince you. Consider random draws from a Gaussian distribution whose mean you wish to estimate (a constant target function). It is well known that the minimum-variance unbiased estimator is the sample mean. However, that is not the estimator with the minimum expected squared error (recall that the expected squared error is bias + var). Using zeroth-order regularization, you can show that with the right amount of regularization you get a better expected squared error, by significantly reducing the variance at the expense of a little bias. The amount of regularization you need depends on N, the true mean and \sigma^2 (the variance of the Gaussian distribution); that should not be surprising, since regularization only has a job because of the noise.
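A sketch of that computation, using the shrinkage form from the post above, \hat\theta = \bar y / (1+\lambda), where \bar y is the sample mean, \mu the true mean and \sigma^2 the variance:

\bar y \sim \mathcal{N}\!\left(\mu, \tfrac{\sigma^2}{N}\right), \qquad \mathbb{E}\big[(\hat\theta - \mu)^2\big] = \underbrace{\left(\frac{\lambda\mu}{1+\lambda}\right)^{\!2}}_{\text{bias}} + \underbrace{\frac{\sigma^2}{N(1+\lambda)^2}}_{\text{var}} = \frac{\lambda^2\mu^2 + \sigma^2/N}{(1+\lambda)^2}.

Setting the derivative with respect to \lambda to zero gives

\lambda^* = \frac{\sigma^2}{N\mu^2} \quad (\mu \neq 0), \qquad \mathbb{E}\big[(\hat\theta - \mu)^2\big]\Big|_{\lambda=\lambda^*} = \frac{\sigma^2\mu^2}{N\mu^2 + \sigma^2} \;<\; \frac{\sigma^2}{N},

so the right amount of shrinkage beats the sample mean in expected squared error, and that amount depends on N, \mu and \sigma^2, exactly as described.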

