#1
We have:
total-error = var (overfitting-noise?) + bias (deterministic-noise) + stochastic-noise

Questions:

1. Is overfitting-noise the var part alone? From the Prof's lecture, I tend to conclude that it is the var caused by the attempt to fit stochastic-noise, i.e. overfitting-noise really is an interplay (stochastic-noise -> variance). I need help interpreting this.

2. When we try to arrest overfitting using brakes (regularization) and/or validation, are we really working on overfitting alone? In the case of validation, we will have a measure of total error: is it that comparing total errors across choices of model complexity (e.g. H2 vs H10) gives us a relative measure of overfitting across those choices of hypothesis complexity? In the case of brakes (regularization): will the brake really be applied to overfitting alone, and not to other parts of the total error, especially the bias part?

3. Consider a case in which the target complexity is a 2nd-order polynomial and we choose a 2nd-order (H2) and a 10th-order (H10) polynomial to fit it. How will the overfit and bias vary for the two hypotheses (as N grows on the x-axis)? Specifically, will H10 have overfitting (with or without stochastic noise)? Also, should H10 have higher bias compared to H2?

4. Is there a notion of underfitting w.r.t. the target function? When we try to fit a 10th-order polynomial target function with a 2nd-order polynomial hypothesis, are we not underfitting? If so, can we associate underfitting with bias? If not, with what else?

Thanks
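For concreteness, here is how I picture question 3 as a minimal Monte Carlo sketch (my own illustration: an assumed quadratic target, Gaussian stochastic noise of standard deviation 0.5, and plain least-squares fits in H2 and H10), estimating the bias and var terms as N grows:

Code:
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5                                   # assumed stochastic-noise level

def f(x):
    return 1.0 - 2.0 * x + 3.0 * x**2         # assumed 2nd-order target

x_test = np.linspace(-1, 1, 200)

def bias_var(degree, N, trials=500):
    """Estimate bias and var for least-squares polynomial fits of one degree."""
    preds = np.empty((trials, x_test.size))
    for t in range(trials):
        x = rng.uniform(-1, 1, N)
        y = f(x) + rng.normal(0.0, sigma, N)  # noisy training set
        w = np.polyfit(x, y, degree)          # fit within H_degree
        preds[t] = np.polyval(w, x_test)
    g_bar = preds.mean(axis=0)                # average hypothesis g-bar
    bias = np.mean((g_bar - f(x_test))**2)
    var = np.mean(preds.var(axis=0))
    return bias, var

for N in (12, 30, 120):
    for d in (2, 10):
        b, v = bias_var(d, N)
        # against noisy test targets, expected E_out ~ stochastic-noise + bias + var
        print(f"N={N:3d}  H{d:<2d}  bias={b:.3f}  var={v:.3f}  "
              f"E_out ~ {sigma**2 + b + v:.3f}")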
#2
1. I think it is fair to say deterministic noise or bias can lead to overfitting as well. For example, suppose you try to model sine functions with a hypothesis set made up of positive constant functions only. This is such a bad hypothesis set for the job that however many data points you use, and however much regularization you use, you'd be better off in general using the single hypothesis consisting of the zero function. I would say this is a clear case of overfitting of the bias.

4. As I understand it, underfitting and overfitting can only ever be defined by contrast with what is possible, hence your first two questions in paragraph 4 are not well-posed. A crucial point emphasised in the lectures is that the appropriate approximation technique (i.e. the combination of hypothesis set and regularization) is determined by the data that is available to a greater extent than by the actual form of the function. For example, fitting a 10th-order polynomial with a 2nd-order polynomial hypothesis (without regularization) may easily be overfitting if the data provided is only 3 points.

Pondering these issues a bit, I realise that the missing piece of the jigsaw needed to make them precise and quantitative is the distribution of possible actual functions that we are trying to approximate. I say "distribution" rather than "set", because how likely each function is dramatically affects the optimal combination of hypothesis set and regularization, as well as the data that is available.

Say, for example, all possible 10th-order polynomials on a unit interval are possibilities for some unknown function. Suppose, however, that anything that is very far from a quadratic is very unlikely, and increasingly unlikely as the coefficients get bigger (excuse my vagueness; the idea is that the actual function is a 10th-order polynomial, but it is extremely unlikely to be much different from a quadratic). Now assume that we are given 3 points and asked to approximate the actual function. If the actual function had been a quadratic, we could just fit it perfectly. Since we know it is very close to a quadratic, we can still be sure that almost all the time a quadratic fit is going to be pretty good.

This is by contrast with the situation where the chance of the original function being far from quadratic is high, and using a quadratic to fit 3 points can be wildly overfitting. In that situation, I believe only severe regularization might justify the use of a quadratic at all, and using a simpler model might make more sense.
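To make the positive-constant example above concrete, here is a quick worked check with my own assumed target, sin(pi x) with x uniform on [-1, 1]. For any constant hypothesis h(x) = c,

E_x[(sin(pi x) - c)^2] = E[sin^2(pi x)] - 2c E[sin(pi x)] + c^2 = 1/2 - 0 + c^2,

since sin(pi x) averages to zero and sin^2(pi x) averages to 1/2 on that interval. So every positive constant c incurs expected squared error 1/2 + c^2, and no amount of data or regularization within the positive-constant set can beat the single zero hypothesis, whose error is 1/2.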
#3
Thanks Elroch for your detailed reply (and your patience therein). That helped.
[ Just one clarification to my first set of questions. Let's say that we always have 'sufficient' data points to learn from, for any choice of the order of polynomial in the hypothesis set - i.e. for H2 we have >> 20 points and for H10 we have >> 100 points, and likewise for any other order. ]

However, if we had a probability distribution on the target function's complexity, then a given instance of it would still be a fixed-order polynomial, albeit one we may not know. So we would use a validation set to gauge which order of polynomial in the hypothesis set seems more promising. Right?
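To illustrate what I mean by using a validation set to gauge the order, here is a minimal hold-out sketch (my own toy example: an assumed quadratic target with noise, 60 points, and a simple 40/20 split):

Code:
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return 1.0 - 2.0 * x + 3.0 * x**2          # assumed quadratic target

x = rng.uniform(-1, 1, 60)
y = f(x) + rng.normal(0.0, 0.5, x.size)        # one noisy data set

x_tr, y_tr = x[:40], y[:40]                    # training split
x_va, y_va = x[40:], y[40:]                    # validation split

for degree in (2, 10):
    w = np.polyfit(x_tr, y_tr, degree)
    e_val = np.mean((np.polyval(w, x_va) - y_va) ** 2)
    print(f"H{degree}: validation error = {e_val:.3f}")
# The order with the lower validation error is the one the data suggests generalizes better.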
#4
Very thoughtful questions.
1. Yes, you are correct. The term which overfitting is responsible for is the var term. One way to look at this is as follows. Suppose you picked the function that was truly the best. What would your error be? To a good approximation, it would be

bias + stochastic-noise.

This is because for most normal learning models, the best hypothesis is approximately the average hypothesis g-bar. So the additional var is the price you pay for not being able to identify that best hypothesis from a finite, noisy data set: your learned hypothesis fluctuates around g-bar because it partly fits the noise. So you can view the var term as the part of the error that overfitting is responsible for.

The answers to your remaining questions are related to the above discussion as well as to later material in the text.

2. Let's take regularization (validation is a little more complicated). In Chapter 4 we will make an explicit connection between regularization and using a `smaller' hypothesis set. So at the end of the day, most methods for `braking' effectively result in using a smaller hypothesis set. Regularization does this in a more flexible and `soft' way than simply picking H2 versus H10. And then, you are right: there is a tradeoff when you reduce the size of the model. You will increase the bias (direct impact) but decrease the var (indirect impact). One of these effects wins, and this determines whether you should increase or decrease your model size. In small-N situations it is usually the var that wins, so the smaller model is the better choice.

3. I highly recommend thinking about Exercise 4.3.

4. Yes, there is such a thing as underfitting (see Chapter 4). This usually happens when it is the direct impact (bias) that wins over the indirect impact (var). And so, you should increase the size of the hypothesis set.
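As a rough illustration of the `soft braking' point in item 2, here is a small weight-decay sketch (my own toy: an assumed quadratic target, N = 15 noisy points, and a ridge-style solution in the 10th-order polynomial feature space), showing how the out-of-sample error of the H10 fit changes as the brake is tightened:

Code:
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    return 1.0 - 2.0 * x + 3.0 * x**2          # assumed quadratic target

x = rng.uniform(-1, 1, 15)                     # small N on purpose
y = f(x) + rng.normal(0.0, 0.5, x.size)

Z = np.vander(x, 11, increasing=True)          # 10th-order polynomial features
x_test = np.linspace(-1, 1, 200)
Z_test = np.vander(x_test, 11, increasing=True)

for lam in (0.0, 0.01, 1.0):
    # weight decay: w = (Z'Z + lam*I)^(-1) Z'y
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(11), Z.T @ y)
    e_out = np.mean((Z_test @ w - f(x_test)) ** 2)
    print(f"lambda = {lam:<4}  E_out vs noiseless target = {e_out:.3f}")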
__________________
Have faith in probability
#5
Thanks a lot, Prof Magdon. It feels much better now.
Sure - I'll work through the exercises as you suggested.