Quote:
Originally Posted by hashable
Thanks for the quick reply. I have some follow-up questions regarding variance.
How does increasing bias artificially in this way (by choosing a pathological hypothesis space) affect the variance?
Variance appears to depend only on ḡ and to be independent of f. Perhaps it could be considered to depend on f indirectly, to the extent that each g tries to approximate f. So is variance affected by whether ḡ is close to f or not?
Is it possible to increase complexity / hypothesis set size without increasing the variance? It is not obvious that this is impossible, although the intuitive explanation is that a larger hypothesis set will result in a larger variance.

The dependency of the variance on the target is, as you point out, more complicated. For instance, if you try to learn a constant target function, most models will converge with little variance, whereas a more complex target will result in bigger variance with the same models. The intuition behind bias and variance is valid in many situations, but may not hold in some.
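To see this effect concretely, here is a small simulation sketch (the setup is my own illustration, not from the thread): fit a line to many noiseless datasets of 5 points drawn from each of two targets, a constant f(x) = 1 and f(x) = sin(πx), and estimate the variance term E_x[Var_D(g^(D)(x))] over a test grid. The same model (a line) shows essentially zero variance on the constant target and substantial variance on the sine target.

```python
import numpy as np

rng = np.random.default_rng(0)

def hypothesis_variance(target, n_datasets=500, n_points=5):
    """Fit a line g(x) = a*x + b to many small datasets drawn from
    `target` (noiseless) and return the variance of the fitted
    hypotheses, averaged over a test grid: E_x[Var_D(g^(D)(x))]."""
    xs_test = np.linspace(-1, 1, 200)
    preds = np.empty((n_datasets, xs_test.size))
    for i in range(n_datasets):
        x = rng.uniform(-1, 1, n_points)
        y = target(x)
        a, b = np.polyfit(x, y, 1)      # least-squares line fit
        preds[i] = a * xs_test + b
    # variance across datasets at each test point, then average over x
    return preds.var(axis=0).mean()

var_const = hypothesis_variance(lambda x: np.ones_like(x))    # f(x) = 1
var_sin = hypothesis_variance(lambda x: np.sin(np.pi * x))    # f(x) = sin(pi x)
print(f"constant target: {var_const:.6f}, sine target: {var_sin:.6f}")
```

Every dataset from the constant target yields the same fitted line, so the variance is (numerically) zero, while the sine target produces a different line for each sample and hence a much larger variance, even though the hypothesis set is identical in both runs.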