05-10-2013, 06:21 AM
Elroch
Zeroth regularization coefficient

There is clearly a lot of scope for varying the form of the cost function, as demonstrated by Tikhonov's work (with which I have only a passing familiarity so far; it is neat to note that this is the same Tikhonov who has separation axioms named after him in topology).

But the question I have here is a much simpler one. Elsewhere I have seen a one-parameter regularization similar to the one we have used for linear and polynomial hypotheses, with the single exception that the zeroth term is omitted from the penalty, so only the higher-order parameters are penalised. The question is whether this is an improvement in some quantifiable sense. The mathematics is only marginally less simple: the term \lambda I is replaced by the same matrix with its top-left entry set to zero.
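To make that concrete, here is a minimal sketch of the regularised least-squares solve with the top-left entry of \lambda I zeroed out. This is just my own illustration rather than anything from the course notes; the function name fit_regularized and its arguments are placeholders for the example.

[CODE]
import numpy as np

def fit_regularized(Z, y, lam, penalise_intercept=False):
    """Solve (Z^T Z + lam * P) theta = Z^T y, where P is the identity
    matrix whose top-left entry is optionally set to zero so that the
    zeroth coefficient escapes the penalty."""
    P = np.eye(Z.shape[1])
    if not penalise_intercept:
        P[0, 0] = 0.0          # leave theta_0 out of the penalty
    return np.linalg.solve(Z.T @ Z + lam * P, Z.T @ y)
[/CODE]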

As the simplest example, if our hypothesis set consists just of all constant functions, a single regularization parameter has the effect of replacing the equation:

\theta_0 = \overline { \{y^{(i)}\}_{i\leq N}}

by the equation

\theta_0 = \overline {\left\{{y^{(i)} \over{1 + \lambda}}\right\}_{i\leq N}}

This systematically underestimates the absolute value of the mean, in a way that is interesting but not easily justified.
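To see the effect numerically, here is a small check, assuming the cost being minimised is the mean squared error plus \lambda \theta_0^2 (the convention under which the shrunken mean above holds); the data and variable names are made up for the example.

[CODE]
import numpy as np

rng = np.random.default_rng(0)
y = 5.0 + rng.normal(scale=0.5, size=20)        # samples with true mean around 5
lam = 0.5

def cost(theta0):
    # mean squared error plus lambda * theta_0^2
    return np.mean((theta0 - y) ** 2) + lam * theta0 ** 2

grid = np.linspace(0.0, 10.0, 100001)
costs = np.array([cost(t) for t in grid])
theta_numeric = grid[np.argmin(costs)]          # brute-force minimiser of the cost
theta_closed = y.mean() / (1.0 + lam)           # the shrunken mean from the equation above

print(y.mean(), theta_closed, theta_numeric)    # the regularised value is smaller by 1/(1 + lambda)
[/CODE]

However much data we have in support of the mean, the regularised estimate stays smaller by the same factor under this convention.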

The question is whether, when the hypothesis is more complex, regularising the zeroth coefficient makes any more sense.

E.g., suppose the hypothesis set consists of all quadratics without a linear term, i.e. H(\theta_0, \theta_2) = \theta_0 + \theta_2 x^2.

Does it make sense to penalise just the \theta_2 term here? [If \theta_0 is penalised, a constant function will again be modelled inaccurately.] A quick numerical comparison is sketched below.
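Here is a rough check of that intuition: fit this two-parameter hypothesis to nearly constant data, once penalising both coefficients and once penalising only \theta_2. The data, the \lambda value, and the helper name are assumptions made up for the sketch.

[CODE]
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 30)
y = 3.0 + rng.normal(scale=0.1, size=x.size)     # essentially constant data

Z = np.column_stack([np.ones_like(x), x ** 2])   # features for H(theta_0, theta_2) = theta_0 + theta_2 x^2
lam = 2.0

def solve(penalty_diag):
    P = np.diag(penalty_diag)
    return np.linalg.solve(Z.T @ Z + lam * P, Z.T @ y)

theta_both = solve([1.0, 1.0])    # penalise theta_0 and theta_2
theta_only2 = solve([0.0, 1.0])   # penalise theta_2 only

print(theta_both)     # theta_0 pulled noticeably below 3: the constant is modelled inaccurately
print(theta_only2)    # theta_0 stays near 3 and theta_2 is shrunk towards zero
[/CODE]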