Re: lecture 8: understanding bias
Re: lecture 8: understanding bias
In HW4 #4 the average hypothesis is measurably shifted from the member of the hypothesis set that gives the lowest mean squared error. Is that just because a two-point data set is too small, i.e. not representative of realistic cases?
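For what it's worth, here is a small numerical sketch of the effect I think you're describing. I'm assuming the lecture-style setup (target f(x) = sin(pi*x) on [-1, 1], two-point data sets, hypothesis set h(x) = ax fit by least squares); those specifics are my assumption, not quoted from the homework.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # assumed target from the lecture-style example
    return np.sin(np.pi * x)

# Draw many two-point data sets, fit h(x) = a*x by least squares on each,
# and average the learned slopes to estimate the average hypothesis g_bar.
n_datasets = 100_000
slopes = np.empty(n_datasets)
for i in range(n_datasets):
    x = rng.uniform(-1.0, 1.0, size=2)
    y = f(x)
    slopes[i] = (x @ y) / (x @ x)   # least-squares slope through the origin

a_bar = slopes.mean()

# Best single hypothesis in the set: the slope minimizing
# E_x[(a*x - sin(pi*x))^2] for x ~ Uniform[-1, 1], which works out to 3/pi.
a_star = 3.0 / np.pi

print(f"slope of average hypothesis g_bar : {a_bar:.3f}")
print(f"slope of best hypothesis in H     : {a_star:.3f}")
```

Changing size=2 to a larger data set is an easy way to check how much of the gap is due to having only two points.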
Re: lecture 8: understanding bias
On the other hand, I'm not sure how to prove that it won't be far :)
Re: lecture 8: understanding bias
Well, when I wrote that one I was remembering the first time I tried using a polynomial fit program. (It was in Fortran 66, as a hint to how long ago that was.)
I fit an N degree polynomial to N points. Even so, I believe that if you fit 99th-degree polynomials to sets of 100 points you will get a large variance, just as the 1st-degree fit to two points did. It won't be easy at all to visualize, though.
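A scaled-down version of that experiment is easy to run. The sketch below is my own construction (not from the book): it interpolates random samples of the target with a degree N-1 polynomial and estimates the variance of the learned hypothesis across data sets. Even at degree 9 the variance dwarfs that of the two-point line fit.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return np.sin(np.pi * x)   # assumed target, as in the sketch above

def hypothesis_variance(n_points, n_datasets=2000):
    """Estimate var[g_D(x)] across random data sets, averaged over test x,
    for the degree n_points-1 polynomial interpolating n_points samples."""
    x_test = np.linspace(-0.9, 0.9, 200)
    preds = np.empty((n_datasets, x_test.size))
    for i in range(n_datasets):
        x = rng.uniform(-1.0, 1.0, size=n_points)
        coeffs = np.polyfit(x, f(x), deg=n_points - 1)  # exact interpolation
        preds[i] = np.polyval(coeffs, x_test)
    return preds.var(axis=0).mean()

# Degree 1 through 2 points versus degree 9 through 10 points
# (degree 99 behaves the same way, only more violently and less stably).
print("N = 2 :", hypothesis_variance(2))
print("N = 10:", hypothesis_variance(10))
```

np.polyfit will occasionally warn about conditioning when two sample points land very close together; the wild interpolants that result are exactly the variance being described.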
Re: lecture 8: understanding bias
In general one cannot say anything analytical about bias and variance. For arbitrarily constructed hypothesis sets and learning algorithms, the average hypothesis can be very far from the best hypothesis in the model; indeed, the average function need not even be in the hypothesis set. However, the claim that the average function is a good approximation to the best you can do is not far off for the general models used in practice.
Problem 4.11 takes you through one of the few situations where one can say something reasonably technical. We can extrapolate (without proof) the conclusions to the more general setting as follows. Suppose that:

(1) the model is well specified: the hypothesis set contains the target function or a good approximation to it;
(2) the noise has zero mean and is well behaved, for example having finite variance;
(3) the learning algorithm is reasonably "stable": small perturbations of the data set lead to small, "proportionate" changes in the learned hypothesis (the learning algorithm's version of a bounded first derivative).

Then the average learned function will be approximately the one you would learn from a data set with zero noise. This zero-noise hypothesis will (for reasonable N) be close to the optimal function you could learn, and it becomes more so very quickly with increasing N (think of trying to learn a polynomial from noiseless data). The conditions above are reasonably general. The third condition is the most important, and in practice one can mostly relax the well-specified requirement.
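To make the well-specified, zero-mean-noise case concrete, here is a minimal sketch (the target, noise level, and sample size are made-up choices, not taken from Problem 4.11). For a model that is linear in its parameters, averaging the fits over many noisy data sets reproduces the fit on the noiseless data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Well-specified setting: the target is itself a quadratic, the model is
# quadratic regression, and the noise is zero-mean Gaussian.
def target(x):
    return 1.0 - 2.0 * x + 0.5 * x**2

N, sigma, n_datasets = 20, 0.5, 5000
x = rng.uniform(-1, 1, size=N)            # fixed inputs; noise re-drawn each time
Z = np.vander(x, 3, increasing=True)      # feature matrix [1, x, x^2]

coef_sum = np.zeros(3)
for _ in range(n_datasets):
    y = target(x) + sigma * rng.standard_normal(N)
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    coef_sum += w
avg_coef = coef_sum / n_datasets          # coefficients of the average hypothesis

noiseless_coef, *_ = np.linalg.lstsq(Z, target(x), rcond=None)

print("average of noisy fits :", np.round(avg_coef, 3))
print("fit on noiseless data :", np.round(noiseless_coef, 3))
```

For a model linear in its parameters with zero-mean noise the two agree exactly in expectation; the stability condition (3) is what lets the same intuition carry over, approximately, to models that are not exactly linear.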