Question on Bias Variance Tradeoff
Posted by hashable, 08-09-2012

In the book and the lecture, it is said that a larger hypothesis set (a more complex model) generally has lower bias and higher variance. This is explained intuitively by the pictures on page 64: bias is shown as the distance of the average hypothesis (g-bar) from the target function f, and variance is illustrated by a shaded region around g-bar.
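For reference, this is the decomposition those pictures illustrate, in the book's notation (g^(D) is the hypothesis learned from dataset D, and g-bar is the average of these over datasets):

\mathbb{E}_{\mathcal{D}}\!\left[E_{\text{out}}\big(g^{(\mathcal{D})}\big)\right] = \mathbb{E}_{x}\!\left[\underbrace{\big(\bar{g}(x)-f(x)\big)^{2}}_{\text{bias}(x)} + \underbrace{\mathbb{E}_{\mathcal{D}}\big[\big(g^{(\mathcal{D})}(x)-\bar{g}(x)\big)^{2}\big]}_{\text{var}(x)}\right]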

My question is: it appears from the picture that it should be possible to enlarge the hypothesis set in a way such that it gets no closer to the target function f. E.g., if we add hypotheses in the direction "further away" from f, then g-bar need not move toward f, so we may still keep the bias high (or even increase it).

From this line of reasoning, it appears that adding complexity to a model, i.e. using a larger hypothesis set, does not necessarily imply a decrease in bias (and/or an increase in variance). The decrease in bias occurs only when the hypothesis set grows in such a way that g-bar ends up closer to f, and this need not always happen (theoretically at least).
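To make this concrete, here is a minimal Monte Carlo sketch in Python of the book's toy setup (target f(x) = sin(pi*x), datasets of two points, constant model vs. linear model); the helper names (fit_constant, fit_line, bias_variance) are just for illustration. Since bias is measured against g-bar, which depends on the learning algorithm, one can swap in other hypothesis sets to test whether enlarging the set always lowers the bias:

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x)  # target function from the book's example

def fit_constant(x, y):
    # least-squares constant: the mean of the sampled y-values
    b = y.mean()
    return lambda xs: np.full_like(xs, b)

def fit_line(x, y):
    # least-squares line; with two points this interpolates them exactly
    a, b = np.polyfit(x, y, 1)
    return lambda xs: a * xs + b

def bias_variance(fit, n_datasets=10000, n_points=2):
    xs = np.linspace(-1, 1, 201)              # grid for averaging over x
    preds = np.empty((n_datasets, xs.size))
    for i in range(n_datasets):
        x = rng.uniform(-1, 1, n_points)      # a random dataset D drawn on f
        g = fit(x, f(x))
        preds[i] = g(xs)
    g_bar = preds.mean(axis=0)                # the average hypothesis g-bar
    bias = np.mean((g_bar - f(xs)) ** 2)      # E_x[(g_bar(x) - f(x))^2]
    var = np.mean(preds.var(axis=0))          # E_x[E_D[(g(x) - g_bar(x))^2]]
    return bias, var

for name, fit in [("constants", fit_constant), ("lines", fit_line)]:
    b, v = bias_variance(fit)
    print(f"{name:9s}  bias = {b:.2f}  variance = {v:.2f}")
# roughly: constants bias ~ 0.50, var ~ 0.25; lines bias ~ 0.21, var ~ 1.69

In this standard example the larger set does lower the bias (0.50 to 0.21) at the cost of variance (0.25 to 1.69), matching the book's numbers; my question is whether an enlargement can be constructed where the bias goes the other way.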

Is this conclusion correct? If so, should it be kept in mind when applying these concepts in practice? And if it is correct, could you give some examples illustrating how adding complexity can still increase the bias?