LFD Book Forum  

#1  02-09-2013, 10:41 AM
ilya239 (Senior Member; Join Date: Jul 2012; Posts: 58)
lecture 8: understanding bias

The VC dimension is a single number that is a property of the hypothesis set.
But what is the "bias of a hypothesis set"? Bias seems to depend also on the dataset size and the learning algorithm, since it depends on \bar{g}(x) = \mathbb{E}_\mathcal{D}[g^{(\mathcal{D})}(x)]; g^{(\mathcal{D})}(x) depends on the learning algorithm, and the set of datasets over which the expectation is taken depends on the dataset size.

Slide 4 says that bias measures "how well \mathcal{H} can approximate f". Does this mean "with a sufficiently large dataset and a perfect learning algorithm"?
Is the bias of a (hypothesis set, learning algorithm) combination a single value -- the asymptote of the learning curve? Or is there some notion of bias that is a property of a hypothesis set by itself? If the hypothesis set contains the target function, that does not mean the bias is zero, does it? The beginning of the lecture seems to imply otherwise, but if there is no restriction on the learning algorithm, what guarantees that the average function will in fact be close to the target function for large enough dataset size?
Or is it assumed that the learning algorithm always picks a hypothesis which minimizes E_{in}?
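
To make the question concrete, this is the kind of Monte Carlo computation I have in mind (just a rough sketch; the target f, the input distribution, and the line-fitting learn() routine are placeholders I picked, not anything specified in the lecture):

[CODE]
# Sketch of how I understand bar{g} and bias: both seem to depend on the
# dataset size N and on the learning routine, not only on the hypothesis set.
# The target f, the input distribution, and learn() are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                   # placeholder target function
    return np.sin(np.pi * x)

def learn(X, y):                            # placeholder algorithm: least-squares line h(x) = a*x + b
    a, b = np.polyfit(X, y, 1)
    return lambda x: a * x + b

def estimate_bias(N, num_datasets=5000, num_test=1001):
    x_test = np.linspace(-1, 1, num_test)               # where we evaluate the hypotheses
    g_vals = np.empty((num_datasets, num_test))
    for d in range(num_datasets):
        X = rng.uniform(-1, 1, N)                       # dataset of size N
        g = learn(X, f(X))                              # g^(D): depends on the algorithm
        g_vals[d] = g(x_test)
    g_bar = g_vals.mean(axis=0)                         # bar{g}(x) = E_D[ g^(D)(x) ]
    return np.mean((g_bar - f(x_test)) ** 2)            # bias = E_x[ (bar{g}(x) - f(x))^2 ]

print(estimate_bias(N=2), estimate_bias(N=100))         # changes with N, and with learn()
[/CODE]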
#2  02-09-2013, 02:01 PM
yaser (Caltech; Join Date: Aug 2009; Location: Pasadena, California, USA; Posts: 1,477)
Re: lecture 8: understanding bias

Quote:
Originally Posted by ilya239
The VC dimension is a single number that is a property of the hypothesis set.
But what is the "bias of a hypothesis set"? Bias seems to depend also on the dataset size and the learning algorithm, since it depends on \bar{g}(x) = \mathbb{E}_\mathcal{D}[g^{(\mathcal{D})}(x)]; g^{(\mathcal{D})}(x) depends on the learning algorithm, and the set of datasets over which the expectation is taken depends on the dataset size.
Your observation is correct that the bias-variance analysis is not as general as the VC analysis. The bias does depend on the learning algorithm. It also depends on the number of examples, usually slightly.

Quote:
Slide 4 says that bias measures "how well \mathcal{H} can approximate f". Does this mean "with a sufficiently large dataset and a perfect learning algorithm"?
Is the bias of a (hypothesis set, learning algorithm) combination a single value -- the asymptote of the learning curve? Or is there some notion of bias that is a property of a hypothesis set by itself? If the hypothesis set contains the target function, that does not mean the bias is zero, does it? The beginning of the lecture seems to imply otherwise, but if there is no restriction on the learning algorithm, what guarantees that the average function will in fact be close to the target function for large enough dataset size?
Or is it assumed that the learning algorithm always picks a hypothesis which minimizes E_{in}?
Good questions. What you are saying would hold if we were using the best approximation of f in {\cal H} as the vehicle for measuring the bias. We are not. We are using a "limited resource" version of it that is based on averaging hypotheses that we get from training on finite sets of data points. This version is often close to the best approximation, so that's why we can take that liberty.
__________________
Where everyone thinks alike, no one thinks very much
#3  02-09-2013, 03:31 PM
ilya239 (Senior Member; Join Date: Jul 2012; Posts: 58)
Re: lecture 8: understanding bias

Quote:
Originally Posted by yaser
The bias does depend on the learning algorithm. It also depends on the number of examples, usually slightly.
...
This version is often close to the best approximation so that's why we can take that liberty.
Thanks for the explanation.
In HW4 #4 the average hypothesis is measurably shifted from the hypothesis-set member giving the lowest mean squared error. Is that because a two-point dataset is too small, i.e. not representative of realistic cases?
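
For reference, this is roughly the check I did (my own experiment, not the official solution; the constants are just what I happened to use):

[CODE]
# My rough check for HW4 #4: compare the average learned slope with the single
# slope that best approximates f(x) = sin(pi*x) in mean squared error.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(np.pi * x)

num_datasets = 100000
X = rng.uniform(-1, 1, size=(num_datasets, 2))          # two points per dataset
Y = f(X)
a_learned = (X * Y).sum(axis=1) / (X * X).sum(axis=1)   # least-squares a for h(x) = a*x
a_bar = a_learned.mean()                                # slope of the average hypothesis

x = rng.uniform(-1, 1, 1000000)
a_best = np.mean(x * f(x)) / np.mean(x * x)             # slope minimizing E_x[(a*x - f(x))^2]

print(a_bar, a_best)                                    # noticeably different with N = 2
[/CODE]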
#4  02-09-2013, 07:49 PM
yaser (Caltech; Join Date: Aug 2009; Location: Pasadena, California, USA; Posts: 1,477)
Re: lecture 8: understanding bias

Quote:
Originally Posted by ilya239
Thanks for the explanation.
In HW4 #4 the average hypothesis is measurably shifted from the hypothesis-set member giving the lowest mean squared error. Is that because a two-point dataset is too small, i.e. not representative of realistic cases?
Indeed, the fewer the points, the more likely it is that the average hypothesis will differ from the best approximation. The difference tends to be small, though.
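
A quick numerical illustration of this, using the h(x) = ax, f(x) = \sin(\pi x) setup from the homework as an example (just a sketch; the particular values of N are arbitrary):

[CODE]
# Sketch: the gap between the average learned slope and the best-approximation
# slope shrinks as the number of points per dataset grows (same h(x)=a*x,
# f(x)=sin(pi*x) setup as in HW4 #4; the N values are arbitrary).
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.sin(np.pi * x)
a_best = 3.0 / np.pi                 # slope minimizing E_x[(a*x - f(x))^2] for x uniform on [-1,1]

for N in (2, 5, 20, 100):
    X = rng.uniform(-1, 1, size=(20000, N))             # many datasets of size N
    a = (X * f(X)).sum(axis=1) / (X * X).sum(axis=1)    # least-squares slope per dataset
    print(N, abs(a.mean() - a_best))                    # difference decreases with N
[/CODE]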
__________________
Where everyone thinks alike, no one thinks very much
#5  02-11-2013, 11:50 AM
gah44 (Invited Guest; Join Date: Jul 2012; Location: Seattle, WA; Posts: 153)
Re: lecture 8: understanding bias

Quote:
Originally Posted by ilya239
Thanks for the explanation.
In HW4 #4 the average hypothesis is measurably shifted from the hypothesis-set member giving the lowest mean squared error. Is that because a two-point dataset is too small, i.e. not representative of realistic cases?
Well, it is also that the two-point data set is small relative to the two-parameter hypotheses. If you fit 99th-degree polynomials to 100 points, you would also get large variance. My guess is that minimizing bias plus variance happens when the number of fit parameters is near the square root of the number of points per data set.
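
A quick way to check that numerically (just a sketch; the target, the noise level, and the degrees are arbitrary choices of mine, and np.polyfit will complain that the high-degree fits are poorly conditioned, which is rather the point):

[CODE]
# Sketch: variance of polynomial fits to N = 100 noisy points, as the degree
# approaches N. Target, noise level, and degrees are arbitrary choices.
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: np.sin(np.pi * x)
N, num_datasets = 100, 200
x_test = np.linspace(-1, 1, 201)                         # evaluation grid

for degree in (1, 3, 10, 30, 99):
    preds = np.empty((num_datasets, x_test.size))
    for d in range(num_datasets):
        X = rng.uniform(-1, 1, N)
        y = f(X) + 0.1 * rng.standard_normal(N)          # zero-mean noise
        preds[d] = np.polyval(np.polyfit(X, y, degree), x_test)
    g_bar = preds.mean(axis=0)                           # bar{g} for this degree
    print(degree, np.mean((preds - g_bar) ** 2))         # variance = E_x E_D[(g - bar{g})^2]
[/CODE]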
#6  02-11-2013, 12:45 PM
ilya239 (Senior Member; Join Date: Jul 2012; Posts: 58)
Re: lecture 8: understanding bias

Quote:
Originally Posted by gah44
Well, it is also that the two-point data set is small relative to the two-parameter hypotheses. If you fit 99th-degree polynomials to 100 points, you would also get large variance. My guess is that minimizing bias plus variance happens when the number of fit parameters is near the square root of the number of points per data set.
Large variance, sure. I was trying to understand why the bias would be large. If you take a huge number of 100-point datasets, learn a hypothesis from each, and average their values, why might that average be far from the target function's value?
On the other hand, I'm not sure how to prove that it won't be far.
#7  02-11-2013, 02:04 PM
yaser (Caltech; Join Date: Aug 2009; Location: Pasadena, California, USA; Posts: 1,477)
Re: lecture 8: understanding bias

Quote:
Originally Posted by ilya239
I was trying to understand why the bias would be large. If you take a huge number of 100-point datasets, learn a hypothesis from each, and average their values, why might that average be far from the target function's value?
On the other hand, I'm not sure how to prove that it won't be far.
It is unlikely (as a practical observation) to be far, but it is likely to be different.
__________________
Where everyone thinks alike, no one thinks very much
#8  02-11-2013, 08:43 PM
gah44 (Invited Guest; Join Date: Jul 2012; Location: Seattle, WA; Posts: 153)
Re: lecture 8: understanding bias

Well, when I wrote that, I was remembering the first time I tried using a polynomial fit program. (It was in Fortran 66, as a hint to how long ago that was.)

I fit an Nth-degree polynomial to N points.

Even so, I believe that if you fit 99th-degree polynomials to sets of 100 points you will get large variance, just as the 1st-degree fits did with two points. It won't be easy to visualize, though.
#9  02-17-2013, 06:02 PM
ilya239 (Senior Member; Join Date: Jul 2012; Posts: 58)
Re: lecture 8: understanding bias

Quote:
Originally Posted by yaser
It is unlikely (as a practical observation) to be far, but it is likely to be different.
Sorry to be harping on this question, but I just wanted to ask: is there any intuitive way to see that the average hypothesis will be close to the best hypothesis from the hypothesis set, beyond "practical observation"? E.g. for hypothesis sets satisfying certain well-behavedness criteria, such as being parameterized by a finite number of parameters, containing only continuous functions, etc. The lectures rely in crucial ways on this assumption and it would help to get some more intuition for why it is true for the typically used hypothesis sets, if possible.
#10  02-17-2013, 07:08 PM
magdon (RPI; Join Date: Aug 2009; Location: Troy, NY, USA; Posts: 595)
Re: lecture 8: understanding bias

In general, one cannot say anything analytical about bias and variance. For arbitrarily constructed hypothesis sets and learning algorithms, the average hypothesis can be very far from the best hypothesis in the model; indeed, the average function need not even be in the hypothesis set. However, what we say about the average function being a good approximation to the best you can do is not far off for the general models used in practice.

Problem 4.11 takes you through one of the few situations where one can say something reasonably technical. We can extrapolate (without proof) the conclusions to the more general setting as follows:

(1) When the model is well specified: this means that the hypothesis set contains the target function or a good approximation to it;

(2) When the noise has zero mean and is well behaved, for example having finite variance;

(3) When the learning algorithm is reasonably "stable", which means that small perturbations in the data set lead to small "proportionate" changes in the learned hypothesis (the learning algorithm version of a bounded first derivative);

Then the average learned function will be approximately the one you would learn from a data set having zero noise; this zero-noise hypothesis will (for reasonable N) be close to the optimal function you could learn, and will become more so very quickly with increasing N (think of trying to learn a polynomial with noiseless data). The conditions above are reasonably general. The third condition is the most important, and in practice one can mostly relax the well-specified requirement.
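
Here is a small numerical illustration of this argument using ordinary least-squares linear regression, which is stable in the above sense (a toy setup of my own, not Problem 4.11): the average of the hypotheses learned from noisy data sets essentially coincides with the hypothesis learned from a noiseless data set, which in turn recovers the target.

[CODE]
# Toy illustration: well-specified linear model, zero-mean noise, and a stable
# algorithm (least-squares linear regression). The average learned weight vector
# is close to the one learned from noiseless data, which recovers the target.
# (For linear hypotheses, averaging the weights is the same as averaging the functions.)
import numpy as np

rng = np.random.default_rng(4)
w_target = np.array([1.0, -2.0, 0.5])                    # target is in the model (well specified)
N, num_datasets = 20, 5000

def learn(X, y):                                         # least-squares linear regression
    return np.linalg.lstsq(X, y, rcond=None)[0]

def inputs():                                            # design matrix with a constant column
    return np.column_stack([np.ones(N), rng.uniform(-1, 1, (N, 2))])

w_sum = np.zeros(3)
for _ in range(num_datasets):
    X = inputs()
    y = X @ w_target + 0.5 * rng.standard_normal(N)      # zero-mean, finite-variance noise
    w_sum += learn(X, y)
w_avg = w_sum / num_datasets                             # "average hypothesis" in weight space

X0 = inputs()
w_noiseless = learn(X0, X0 @ w_target)                   # hypothesis from a zero-noise data set

print(w_avg)
print(w_noiseless)
print(w_target)                                          # all three nearly coincide
[/CODE]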

Quote:
Originally Posted by ilya239
Sorry to be harping on this question, but I just wanted to ask: is there any intuitive way to see that the average hypothesis will be close to the best hypothesis from the hypothesis set, beyond "practical observation"? E.g. for hypothesis sets satisfying certain well-behavedness criteria, such as being parameterized by a finite number of parameters, containing only continuous functions, etc. The lectures rely in crucial ways on this assumption and it would help to get some more intuition for why it is true for the typically used hypothesis sets, if possible.
__________________
Have faith in probability