05-11-2013, 04:43 PM
magdon (RPI)

Re: Lec-11: Overfitting in terms of (bias, var, stochastic-noise)

Very thoughtful questions.

1. Yes, you are correct. Overfitting is responsible for the var term. It can occur when there is either deterministic or stochastic noise.

One way to look at this is as follows. Suppose you picked the function that was truly the best. What would your error be? To a good approximation, it would be:

\sigma^2 + \text{bias}

This is because for most normal learning models, the best hypothesis is approximately the average hypothesis \bar g (see Problem 3.14(a) for an example). So these first two terms in the bias-variance decomposition are inevitable, and we can view them as the direct impact of the noise (stochastic and deterministic).
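For reference, here is the full decomposition being discussed: the standard bias-variance decomposition for squared error with a noisy target y = f(x) + \epsilon, where \epsilon has variance \sigma^2 (only the grouping into direct and indirect impact is specific to this discussion):

\mathbb{E}_{\mathcal D}\big[E_{\text{out}}(g^{(\mathcal D)})\big] = \underbrace{\sigma^2}_{\text{stochastic noise}} + \underbrace{\mathbb{E}_x\big[(\bar g(x)-f(x))^2\big]}_{\text{bias (direct)}} + \underbrace{\mathbb{E}_{x,\mathcal D}\big[(g^{(\mathcal D)}(x)-\bar g(x))^2\big]}_{\text{var (indirect)}}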

So the additional var term contributing to the error must result from our inability to pick the best hypothesis. But why are we unable to pick the best hypothesis? Because we are being misled by the data. That is, the best hypothesis on the data (having minimum Ein) is not the best hypothesis out-of-sample (which must have higher Ein). By going for the lower-Ein hypothesis we are getting a higher-Eout hypothesis: we are overfitting the data.

So you can view the var term as the indirect impact of the noise. It is not inevitable per se, but exists because of your `ability' to be misled by the data (i.e. to overfit). The complexity of your model plays a heavy role in your `ability' to be misled, since if your model is complicated you have more ways in which to be misled. If the number of data points goes up, approaching infinity, you will not significantly change the direct contribution of the noise; it is the var term that will go down, eventually approaching 0.
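If you want to see this numerically, here is a minimal simulation sketch (my own illustration, not from the text; the quadratic target, \sigma = 0.5, H10, and the trial counts are arbitrary choices). It estimates \bar g by averaging least-squares fits over many data sets, then measures bias and var on a test grid:

Code:
# Minimal bias/var simulation (illustration only; target, sigma, and
# degree are arbitrary choices).
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                         # 2nd-order target
    return 1.0 - 2.0 * x + 0.5 * x ** 2

def bias_var(degree, N, sigma=0.5, trials=2000):
    x_test = np.linspace(-1, 1, 200)
    preds = np.empty((trials, x_test.size))
    for t in range(trials):
        x = rng.uniform(-1, 1, N)
        y = f(x) + sigma * rng.normal(size=N)     # noisy data set D
        w = np.polyfit(x, y, degree)              # least squares in H_degree
        preds[t] = np.polyval(w, x_test)          # g^(D) on test points
    g_bar = preds.mean(axis=0)                    # average hypothesis
    bias = np.mean((g_bar - f(x_test)) ** 2)      # direct impact
    var = np.mean((preds - g_bar) ** 2)           # indirect impact
    return bias, var

for N in (15, 100, 1000):
    b, v = bias_var(degree=10, N=N)
    print(f"H10, N={N:5d}: bias={b:.4f}  var={v:.4f}")

As N grows you should see var collapse toward 0 while bias stays roughly fixed, matching the direct/indirect picture above.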

The answers to your remaining questions are related to the above discussion as well as to later material in the text.

2. Let's take regularization (validation is a little more complicated). In chapter 4 we will make an explicit connection between regularization and using a `smaller' hypothesis set. So at the end of the day most methods for `braking' effectively result in using a smaller hypothesis set. Regularization does this in a more flexible and `soft' way than simply picking H2 versus H10.

And yes, you are right: there is a tradeoff when you reduce the size of the model. You will increase the bias (direct impact) but decrease the var (indirect impact). One of these effects wins, and this determines whether you should increase or decrease your model size. In small-N, high-noise settings with complex models, the indirect impact wins, and so it pays to regularize.
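To see the tradeoff concretely, here is a weight-decay variation of the same sketch (again my own illustration; the \lambda values, N = 20, and \sigma = 0.5 are arbitrary). As \lambda grows, var falls and bias rises, and some intermediate \lambda minimizes their sum:

Code:
# Weight-decay (ridge) version of the experiment (illustration only).
import numpy as np

rng = np.random.default_rng(1)

def f(x):                                         # 2nd-order target
    return 1.0 - 2.0 * x + 0.5 * x ** 2

def bias_var_reg(lam, degree=10, N=20, sigma=0.5, trials=2000):
    x_test = np.linspace(-1, 1, 200)
    Z_test = np.vander(x_test, degree + 1)        # polynomial features
    preds = np.empty((trials, x_test.size))
    I = np.eye(degree + 1)
    for t in range(trials):
        x = rng.uniform(-1, 1, N)
        y = f(x) + sigma * rng.normal(size=N)
        Z = np.vander(x, degree + 1)
        w_reg = np.linalg.solve(Z.T @ Z + lam * I, Z.T @ y)  # regularized LS
        preds[t] = Z_test @ w_reg
    g_bar = preds.mean(axis=0)
    return np.mean((g_bar - f(x_test)) ** 2), np.mean((preds - g_bar) ** 2)

for lam in (0.0, 0.01, 0.1, 1.0):
    b, v = bias_var_reg(lam)
    print(f"lambda={lam:4.2f}: bias={b:.4f}  var={v:.4f}  bias+var={b + v:.4f}")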

3. I highly recommend thinking about Exercise 4.3.

4. Yes, there is such a thing as underfitting (see chapter 4). It usually happens when the direct impact (bias) wins over the indirect impact (var). And so you should increase the size of \mathcal H to reduce the direct impact, at the expense of a small increase in the indirect impact. Underfitting occurs when the quality and quantity of your data are very high in relation to your model size.
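A quick sketch of that case too (illustration only; the random 10th-order target and \sigma = 0.1 are arbitrary choices): fitting H2 to a 10th-order target, Eout plateaus at the bias no matter how large N gets, which is the signature of underfitting.

Code:
# Underfitting sketch (illustration only): H2 on a 10th-order target.
import numpy as np

rng = np.random.default_rng(2)
f10 = np.poly1d(rng.normal(size=11))        # some 10th-order target
x_test = np.linspace(-1, 1, 200)

for N in (100, 10_000, 1_000_000):
    x = rng.uniform(-1, 1, N)
    y = f10(x) + 0.1 * rng.normal(size=N)
    g = np.poly1d(np.polyfit(x, y, 2))      # fit in H2
    e_out = np.mean((g(x_test) - f10(x_test)) ** 2)  # vs noiseless target
    print(f"H2, N={N:7d}: Eout={e_out:.4f}")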

Quote:
Originally Posted by sptripathi
We have:
total-noise = var (overfitting-noise?)
            + bias (deterministic-noise)
            + stochastic-noise

Qs:

1. Is overfitting-noise the var part alone? From the Prof's lecture, I tend to conclude that it is the var caused by the attempt to fit the stochastic-noise, i.e. overfitting-noise is really an interplay (stochastic-noise -> variance). Need help in interpreting it.

2. When we try to arrest the overfitting, using brakes (regularization) and/or validation, are we really working with overfitting alone?
In case of validation, we will have a measure of total-error: is it that comparing total-errors across choices of model-complexity (e.g. H2 vs H10) gives us an estimate of the relative amount of overfitting across choices of hypothesis-complexity?
In case of brakes (regularization): will the brake really be applied to overfitting alone, and not to other parts of total-error, especially the bias part?

3. Consider a case in which the target-complexity is a 2nd-order polynomial and we choose a 2nd-order (H2) and a 10th-order polynomial (H10) to fit it. How will the overfit and bias vary for the two hypotheses (as N grows on the x-axis)?
Specifically, will the H10 have overfitting (with or without stochastic noise)? Also, should H10 have higher bias compared to H2?

4. Is there a notion of underfitting w.r.t. the target-function? When we try to fit a 10th-order polynomial target-function with a 2nd-order polynomial hypothesis, are we not underfitting? If so, can we associate underfitting with bias then? If not, what else?

Thanks
__________________
Have faith in probability