LFD Book Forum  

#1
05-10-2013, 06:21 AM
Elroch
Invited Guest

Join Date: Mar 2013
Posts: 143

Zeroth regularization coefficient

There is clearly a lot of scope for varying the form of the cost function, as demonstrated by Tikhonov's work (with which I have only a passing familiarity so far; it is neat to note that this is the same person who has separation axioms named after him in topology).

But here the question I have is a much simpler one. Elsewhere I have seen a one-parameter regularization similar to the one we have used for linear and polynomial hypotheses, with the single exception that the zeroth-order term is omitted from the penalty, so only the higher-order parameters are penalised. The question is whether this is an improvement in some quantifiable sense. The mathematics is only marginally less simple, with the term \lambda I replaced by the same matrix with its top-left entry set to zero.
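To make the two options concrete, here is a minimal NumPy sketch of the closed-form solutions (the function name, data and \lambda value are purely illustrative, not anything from the book):

Code:
import numpy as np

def ridge_fit(Z, y, lam, penalize_bias=True):
    # Closed-form solution of  min_w ||Zw - y||^2 + lam * w'Rw,
    # where R is the identity, or the identity with its top-left entry
    # zeroed so that the zeroth (constant) coefficient is not penalised.
    R = np.eye(Z.shape[1])
    if not penalize_bias:
        R[0, 0] = 0.0
    return np.linalg.solve(Z.T @ Z + lam * R, Z.T @ y)

# Illustrative data: noisy quadratic, feature matrix with columns (1, x, x^2)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 20)
y = 1.0 + 0.5 * x**2 + 0.1 * rng.standard_normal(20)
Z = np.column_stack([np.ones_like(x), x, x**2])

print(ridge_fit(Z, y, 5.0, penalize_bias=True))   # shrinks the constant term too
print(ridge_fit(Z, y, 5.0, penalize_bias=False))  # leaves the constant term alone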

As the simplest example, if our hypothesis set consists of just the constant functions, a single regularization parameter has the effect of replacing the equation:

\theta_0 = \frac{1}{N}\sum_{i=1}^{N} y^{(i)} = \bar{y}

by the equation

\theta_0 = \frac{1}{N}\sum_{i=1}^{N} \frac{y^{(i)}}{1+\lambda} = \frac{\bar{y}}{1+\lambda}

This systematically underestimates the absolute value of the mean, in a way that is interesting but not obviously justified.
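For reference, one convention that produces this formula is to add \lambda times the squared coefficient to the in-sample mean squared error (other scalings of \lambda merely relabel the parameter):

\min_{\theta_0}\; \frac{1}{N}\sum_{i=1}^{N}\left(\theta_0 - y^{(i)}\right)^2 + \lambda\,\theta_0^2 \;\;\Longrightarrow\;\; \theta_0 - \bar{y} + \lambda\,\theta_0 = 0 \;\;\Longrightarrow\;\; \theta_0 = \frac{\bar{y}}{1+\lambda}.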

The question is whether, when the hypothesis is more complex, regularising the zeroth coefficient makes any more sense.

E.g., suppose the hypothesis set is all quadratics without a linear term, i.e. h_{\theta_0,\theta_2}(x) = \theta_0 + \theta_2 x^2.

Does it make sense to penalise just the \theta_2 term here? [If \theta_0 is penalised, a constant function will again be modelled inaccurately.]
#2
02-07-2014, 05:50 AM
magdon
RPI

Join Date: Aug 2009
Location: Troy, NY, USA.
Posts: 595

Re: Zeroth regularization coefficient

This is an interesting question. It is usually right to regularize the zeroth-order term as well. In general, regularization introduces a bias (whether or not you regularize the zeroth-order term); the benefit is that it reduces the variance by more than it increases the bias. Intuitively it does not look like shrinking the constant term is combating complexity, but complexity is just the deterministic noise, and regularization combats both deterministic and stochastic noise.
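In symbols, writing bias for the squared term, as in the book's bias-variance decomposition, an estimator \hat{\theta} of a quantity \theta satisfies

E\left[(\hat{\theta} - \theta)^2\right] = \left(E[\hat{\theta}] - \theta\right)^2 + \mathrm{Var}\left[\hat{\theta}\right] = \text{bias} + \text{var},

and regularization trades an increase in the first term for a (hopefully larger) decrease in the second.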

The following exercise will hopefully convince you. Consider random draws from a Gaussian distribution whose mean you wish to estimate (a constant target function). It is well known that the sample mean is the minimum-variance unbiased estimator. However, it is not the estimator with the minimum expected squared error (recall, the expected squared error is bias + var). Using zeroth-order regularization, you can show that with the right amount of regularization you get a better expected squared error, by significantly reducing the variance at the expense of a little bias. The amount of regularization you need depends on N, the true mean and \sigma^2 (the variance of the Gaussian), which should not be surprising, since regularization only has a job to do because of the noise.
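A sketch of that calculation for the shrunk estimator \hat{\theta}_0 = \bar{y}/(1+\lambda) from the post above, assuming y^{(i)} = \mu + \epsilon_i with \epsilon_i \sim N(0, \sigma^2):

E[\hat{\theta}_0] = \frac{\mu}{1+\lambda}, \qquad \text{bias} = \frac{\lambda^2\mu^2}{(1+\lambda)^2}, \qquad \text{var} = \frac{\sigma^2}{N(1+\lambda)^2},

so bias + var = \frac{\lambda^2\mu^2 + \sigma^2/N}{(1+\lambda)^2}, which is minimised at \lambda^\ast = \frac{\sigma^2}{N\mu^2} and there equals \frac{\sigma^2/N}{1+\lambda^\ast} < \frac{\sigma^2}{N}.

A quick Monte Carlo check (the parameter values are illustrative only):

Code:
import numpy as np

# Estimate the mean mu of a Gaussian from N samples, comparing the sample
# mean (lambda = 0) with the shrunk estimators ybar / (1 + lambda).
rng = np.random.default_rng(1)
mu, sigma, N, trials = 1.0, 2.0, 10, 200_000

ybar = rng.normal(mu, sigma, size=(trials, N)).mean(axis=1)
lam_star = sigma**2 / (N * mu**2)   # optimal lambda from the bookkeeping above

for lam in [0.0, lam_star, 1.0]:
    mse = np.mean((ybar / (1 + lam) - mu) ** 2)
    print(f"lambda = {lam:.2f}: expected squared error ~ {mse:.3f}")

# Approximately 0.400 at lambda = 0 (i.e. sigma^2/N), 0.286 at lambda* = 0.40,
# and 0.350 at lambda = 1: a little bias buys a larger reduction in variance.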

__________________
Have faith in probability