LFD Book Forum  

#1 - 02-09-2013, 10:41 AM
ilya239 (Senior Member; Join Date: Jul 2012; Posts: 58)
lecture 8: understanding bias

The VC dimension is a single number that is a property of the hypothesis set.
But what is the "bias of a hypothesis set"? Bias seems to depend also on the dataset size and the learning algorithm, since it depends on \bar{g}(x) = \mathbb{E}_\mathcal{D}[g^{(\mathcal{D})}(x)]: g^{(\mathcal{D})}(x) depends on the learning algorithm, and the set of datasets over which the expectation is taken depends on the dataset size.

Slide 4 says that bias measures "how well \mathcal{H} can approximate f". Does this mean "with a sufficiently large dataset and a perfect learning algorithm"?
Is the bias of a (hypothesis set, learning algorithm) combination a single value -- the asymptote of the learning curve? Or is there some notion of bias that is a property of the hypothesis set by itself? If the hypothesis set contains the target function, that does not mean the bias is zero, does it? The beginning of the lecture seems to imply otherwise, but if there is no restriction on the learning algorithm, what guarantees that the average function will in fact be close to the target function for a large enough dataset?
Or is it assumed that the learning algorithm always picks a hypothesis that minimizes E_{in}?
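
To make the question concrete, here is a small simulation sketch of what I understand \bar{g} to mean operationally; the target f, the hypothesis set (lines fit by least squares), and N below are illustrative choices of mine, not anything from the lecture or homework.

Code:
# Minimal sketch (illustrative setup, not the homework's): estimate
# gbar(x) = E_D[g^(D)(x)], the bias, and the variance by averaging over many datasets D.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x)     # illustrative target function
N = 10                              # dataset size; the bias depends (mildly) on this
runs = 2000                         # number of datasets D averaged over

def learn(x, y):
    # the learning algorithm: least-squares fit of h(x) = a*x + b
    a, b = np.polyfit(x, y, 1)
    return a, b

params = []
for _ in range(runs):
    x = rng.uniform(-1, 1, N)       # one dataset D of N points
    params.append(learn(x, f(x)))   # parameters of g^(D)
params = np.array(params)

x_test = rng.uniform(-1, 1, 2000)
preds = params[:, 0:1] * x_test + params[:, 1:2]   # g^(D)(x) for every D and test x
gbar = preds.mean(axis=0)                          # gbar(x) = E_D[g^(D)(x)]
bias = np.mean((gbar - f(x_test)) ** 2)            # E_x[(gbar(x) - f(x))^2]
var = np.mean((preds - gbar) ** 2)                 # E_x[E_D[(g^(D)(x) - gbar(x))^2]]
print(f"bias = {bias:.3f}, variance = {var:.3f}")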
#2 - 02-09-2013, 02:01 PM
yaser (Caltech; Join Date: Aug 2009; Location: Pasadena, California, USA; Posts: 1,477)
Re: lecture 8: understanding bias

Quote:
Originally Posted by ilya239
The VC dimension is a single number that is a property of the hypothesis set.
But what is the "bias of a hypothesis set"? Bias seems to depend also on the dataset size and the learning algorithm, since it depends on \bar{g}(x) = \mathbb{E}_\mathcal{D}[g^{(\mathcal{D})}(x)]: g^{(\mathcal{D})}(x) depends on the learning algorithm, and the set of datasets over which the expectation is taken depends on the dataset size.
Your observation is correct that the bias-variance analysis is not as general as the VC analysis. The bias does depend on the learning algorithm. It also depends on the number of examples, usually slightly.

Quote:
Slide 4 says that bias measures "how well \mathcal{H} can approximate f". Does this mean "with a sufficiently large dataset and a perfect learning algorithm"?
Is the bias of a (hypothesis set, learning algorithm) combination a single value -- the asymptote of the learning curve? Or is there some notion of bias that is a property of the hypothesis set by itself? If the hypothesis set contains the target function, that does not mean the bias is zero, does it? The beginning of the lecture seems to imply otherwise, but if there is no restriction on the learning algorithm, what guarantees that the average function will in fact be close to the target function for a large enough dataset?
Or is it assumed that the learning algorithm always picks a hypothesis that minimizes E_{in}?
Good questions. What you are saying would hold if we were using the best approximation of f in {\cal H} as the vehicle for measuring the bias. We are not. We are using a "limited resource" version of it that is based on averaging hypotheses that we get from training on finite sets of data points. This version is often close to the best approximation, so we can take that liberty.
__________________
Where everyone thinks alike, no one thinks very much
#3 - 02-09-2013, 03:31 PM
ilya239 (Senior Member; Join Date: Jul 2012; Posts: 58)
Re: lecture 8: understanding bias

Quote:
Originally Posted by yaser
The bias does depend on the learning algorithm. It also depends on the number of examples, usually slightly.
...
This version is often close to the best approximation, so we can take that liberty.
Thanks for the explanation.
In HW4 #4 the average hypothesis is measurably shifted from the hypothesis set member giving the lowest mean squared error. Probably because a two-point dataset is too small, i.e., this is not representative of realistic cases?
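
For reference, here is a quick sketch of the comparison I mean, assuming I have the HW4 #4 setup right (f(x) = sin(\pi x), x uniform on [-1,1], two-point datasets, hypotheses h(x) = ax fit by least squares); it contrasts the average slope with the slope that minimizes the mean squared error to f directly.

Code:
# Sketch of the comparison (my reading of the HW4 #4 setup, stated above as an assumption):
# average slope abar = E_D[a_D] versus the slope a* of the best approximation of f in H.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(np.pi * x)
runs = 200000

a_D = np.empty(runs)
for i in range(runs):
    x = rng.uniform(-1, 1, 2)                    # a two-point dataset
    a_D[i] = np.dot(x, f(x)) / np.dot(x, x)      # least-squares slope through the origin
abar = a_D.mean()                                # slope of the average hypothesis gbar

# Best approximation in H: minimize E_x[(a*x - f(x))^2] over a, which gives
# a* = E[x f(x)] / E[x^2] = 3/pi for this f (closed form, no training data involved).
a_star = 3.0 / np.pi

print(f"average-hypothesis slope abar = {abar:.3f}")
print(f"best-approximation slope a*   = {a_star:.3f}")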
#4 - 02-09-2013, 07:49 PM
yaser (Caltech; Join Date: Aug 2009; Location: Pasadena, California, USA; Posts: 1,477)
Re: lecture 8: understanding bias

Quote:
Originally Posted by ilya239
Thanks for the explanation.
In HW4 #4 the average hypothesis is measurably shifted from the hypothesis set member giving the lowest mean squared error. Probably because a two-point dataset is too small, i.e., this is not representative of realistic cases?
Indeed, the fewer the points, the more likely it is that the average hypothesis will differ from the best approximation. The difference tends to be small, though.
#5 - 02-11-2013, 11:50 AM
gah44 (Invited Guest; Join Date: Jul 2012; Location: Seattle, WA; Posts: 153)
Re: lecture 8: understanding bias

Quote:
Originally Posted by ilya239
Thanks for the explanation.
In HW4 #4 the average hypothesis is measurably shifted from the hypothesis set member giving the lowest mean squared error. Probably because a two-point dataset is too small, i.e., this is not representative of realistic cases?
Well, it is also that the two-point data set is small relative to the two-parameter hypotheses. If you had 100 points and 99th-degree polynomials, you would also have large variance. I will guess that minimizing bias plus variance happens when the number of fit parameters is near the square root of the number of points per data set.
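
Here is a rough simulation sketch one could run to probe that guess; the noiseless target and the Chebyshev-basis least-squares fit are illustrative assumptions, and the degrees are kept moderate because very high-degree fits get numerically delicate.

Code:
# Fix N points per dataset, sweep the number of fit parameters (degree + 1),
# and watch the estimated bias + variance (illustrative setup, not the homework's).
import numpy as np
from numpy.polynomial import chebyshev as C   # well-conditioned polynomial basis

rng = np.random.default_rng(2)
f = lambda x: np.sin(np.pi * x)
N, runs = 100, 500
x_test = np.linspace(-1, 1, 1001)

for deg in (1, 3, 7, 15, 31):
    preds = np.empty((runs, x_test.size))
    for r in range(runs):
        x = rng.uniform(-1, 1, N)             # one dataset D
        coef = C.chebfit(x, f(x), deg)        # least-squares fit with deg + 1 parameters
        preds[r] = C.chebval(x_test, coef)    # g^(D) on the test grid
    gbar = preds.mean(axis=0)
    bias = np.mean((gbar - f(x_test)) ** 2)
    var = np.mean((preds - gbar) ** 2)
    print(f"deg = {deg:2d}: bias = {bias:.2e}, var = {var:.2e}, bias + var = {bias + var:.2e}")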
#6 - 02-11-2013, 12:45 PM
ilya239 (Senior Member; Join Date: Jul 2012; Posts: 58)
Re: lecture 8: understanding bias

Quote:
Originally Posted by gah44
Well, it is also that the two-point data set is small relative to the two-parameter hypotheses. If you had 100 points and 99th-degree polynomials, you would also have large variance. I will guess that minimizing bias plus variance happens when the number of fit parameters is near the square root of the number of points per data set.
Large variance, sure. I was trying to understand why there would be large bias. If you take a huge number of 100-point datasets, learn a hypothesis from each, and take the average value of these, why might it be far from the target function's value?
On the other hand, I'm not sure how to prove that it won't be far.
#7 - 02-11-2013, 02:04 PM
yaser (Caltech; Join Date: Aug 2009; Location: Pasadena, California, USA; Posts: 1,477)
Re: lecture 8: understanding bias

Quote:
Originally Posted by ilya239
I was trying to understand why there would be large bias. If you take a huge number of 100-point datasets, learn a hypothesis from each, and take the average value of these, why might it be far from the target function's value?
On the other hand, I'm not sure how to prove that it won't be far.
It is unlikely (as a practical observation) to be far, but it is likely to be different.

Tags
bias, lecture 8
