#10, 02-17-2013, 07:08 PM
magdon (RPI)
Re: lecture 8: understanding bias

In general, one cannot say anything analytical about bias and variance. For arbitrarily constructed hypothesis sets and learning algorithms, the average hypothesis can be very far from the best hypothesis in the model; indeed, the average function need not even be in the hypothesis set. However, the claim that the average function is a good approximation to the best you can do is not far off for the models generally used in practice.
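To see concretely that the average function need not be in the hypothesis set, here is a minimal Monte Carlo sketch (a toy construction of my own, not from the course): take H to be the step hypotheses h(x) = sign(x - t), with a simple made-up learning rule that places the threshold between the two classes. Each data set yields a step, but the average over data sets is a smooth ramp, which is not a step.

import numpy as np

rng = np.random.default_rng(0)
N, num_datasets = 10, 2000

def learn_threshold(x, y):
    # Learning rule (hypothetical, for illustration): place the threshold
    # midway between the largest negative and smallest positive example.
    return (x[y < 0].max() + x[y > 0].min()) / 2

xs = np.linspace(0, 1, 200)
g_bar = np.zeros_like(xs)
for _ in range(num_datasets):
    while True:                       # redraw if all labels agree
        x = rng.uniform(0, 1, N)
        y = np.sign(x - 0.5)          # noiseless target: a step at 0.5
        if (y < 0).any() and (y > 0).any():
            break
    t = learn_threshold(x, y)
    g_bar += np.sign(xs - t)          # each learned hypothesis is a step
g_bar /= num_datasets
# g_bar rises smoothly from -1 to +1 near x = 0.5; it is not a step,
# so the average hypothesis g_bar lies outside the hypothesis set H.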

Problem 4.11 takes you through one of the few situations where one can say something reasonably technical. We can extrapolate (without proof) the conclusions to the more general setting as follows:

(1) When the model is well specified: this means that the hypothesis set contains the target function or a good approximation to it;

(2) When the noise has zero mean and is well behaved, for example having finite variance;

(3) When the learning algorithm is reasonably "stable", which means that small perturbations in the data set lead to small "proportionate" changes in the learned hypothesis (the learning algorithm version of a bounded first derivative);

Then the average learned function will be approximately the one you would learn from a data set with zero noise; this zero-noise hypothesis will (for reasonable N) be close to the optimal function you could learn, and will become more so very quickly as N increases (think of trying to learn a polynomial from noiseless data). The conditions above are reasonably general. The third condition is the most important one, and in practice one can mostly relax the well-specified requirement.
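Here is a small sketch of this extrapolated conclusion (the quadratic target, Gaussian noise, and least-squares fit are my own choices for illustration, not taken from Problem 4.11): linear regression on polynomial features is well specified, the noise has zero mean, and least squares is stable, so the average of the learned coefficients over many noisy data sets should match the fit to noiseless data.

import numpy as np

rng = np.random.default_rng(1)
N, degree, sigma, num_datasets = 30, 2, 0.5, 5000
w_true = np.array([1.0, -2.0, 0.5])        # target f(x) = 1 - 2x + 0.5x^2

x = rng.uniform(-1, 1, N)                  # inputs held fixed across data sets
X = np.vander(x, degree + 1, increasing=True)
f = X @ w_true                             # noiseless target values

avg_w = np.zeros(degree + 1)
for _ in range(num_datasets):
    y = f + sigma * rng.normal(size=N)     # zero-mean, finite-variance noise
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    avg_w += w
avg_w /= num_datasets                      # average learned coefficients

w_noiseless, *_ = np.linalg.lstsq(X, f, rcond=None)
print(avg_w)         # close to ...
print(w_noiseless)   # ... the zero-noise fit (= w_true: model is well specified)

Least squares is an especially clean case: the fitted coefficients are linear in the target values, so the average over noise realizations equals the zero-noise fit exactly. For more general stable algorithms the agreement is only approximate, which is the content of the extrapolation above.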

Quote:
Originally Posted by ilya239
Sorry to be harping on this question, but I just wanted to ask: is there any intuitive way to see that the average hypothesis will be close to the best hypothesis from the hypothesis set, beyond "practical observation"? E.g. for hypothesis sets satisfying certain well-behavedness criteria, such as being parameterized by a finite number of parameters, containing only continuous functions, etc. The lectures rely in crucial ways on this assumption and it would help to get some more intuition for why it is true for the typically used hypothesis sets, if possible.
__________________
Have faith in probability