LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 2 - Training versus Testing (http://book.caltech.edu/bookforum/forumdisplay.php?f=109)
-   -   Bias-Variance Analysis (http://book.caltech.edu/bookforum/showthread.php?t=4597)

 Andrew87 03-29-2015 05:40 AM

Bias-Variance Analysis

Hello,

I'm getting confused about the average hypothesis g bar. Why is it the best approximation of the target function we could obtain in the unreal case of infinite training sets?

Thank you in advance,
Andrea

 yaser 03-29-2015 10:48 AM

Re: Bias-Variance Analysis

Quote:
 Originally Posted by Andrew87 (Post 11945) Hello, I'm getting confused about the average hypothesis g bar. Why is it the best approximation of the target function we could obtain in the unreal case of infinite training sets? Thank you in advance, Andrea
It is not necessarily the best approximation of the target function, but it is often close. If we have one, infinite-size training set, and we have infinite computational power that goes with it, we can arrive at the best approximation. In the bias-variance analysis, we are given an infinite number of finite training sets, and we are restricted to using one of these finite training sets at a time, then averaging the resulting hypotheses. This restriction can take us away from the absolute optimal, but usually not by much.
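The averaging described above can be illustrated numerically. A minimal sketch, assuming the simple constant model h(x) = b and the target y = x + noise used later in this thread (both are assumptions here, since the post itself is model-agnostic):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x          # hypothetical target, borrowed from later in the thread
sigma = 1.0              # standard-normal noise

# g_bar: average the constant-model fit h(x) = b over many small datasets
N, runs = 10, 3000
b_values = []
for _ in range(runs):
    x = rng.uniform(0, 10, N)
    y = f(x) + rng.normal(0, sigma, N)
    b_values.append(y.mean())     # least-squares fit of h(x) = b is the sample mean
g_bar_b = np.mean(b_values)

# "one infinite training set": a single fit on a very large sample
x_big = rng.uniform(0, 10, 1_000_000)
y_big = f(x_big) + rng.normal(0, sigma, x_big.size)
best_b = y_big.mean()

# both land near E[y] = 5, illustrating that averaging over many
# finite sets stays close to the single-big-set optimum
print(g_bar_b, best_b)
```

For this model the two agree almost exactly because averaging is a linear operation; for richer models the averaged hypothesis can drift slightly from the big-sample optimum, which is the restriction mentioned above.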

 Andrew87 04-03-2015 07:21 AM

Re: Bias-Variance Analysis

Thank you very much for your answer Prof. Yaser. It clarified my doubt.

My kind regards,
Andrea

 sayan751 06-04-2015 03:23 PM

Re: Bias-Variance Analysis

Hi,

I have a doubt regarding g bar.

I tried to calculate the bias for the second learner, i.e. h(x) = ax + b. This is how I did it:
• Generated around 1000 data points (x ranging from -1 to 1)
• Picked two sample data points at random
• Solved for a and b from the resulting 2x2 linear system
• Repeated this process around 3000 times
• Took the mean of a and the mean of b, which formed g2 bar
• Used this g2 bar to calculate the corresponding bias, which matched the given value of bias
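The steps above can be sketched as follows (assuming the book's target f(x) = sin(πx), which the post does not state explicitly):

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(np.pi * x)       # assumed target on [-1, 1]

x_pool = rng.uniform(-1, 1, 1000)     # ~1000 data points
a_vals, b_vals = [], []
for _ in range(3000):                 # ~3000 repetitions
    i, j = rng.choice(1000, 2, replace=False)   # two random sample points
    x1, x2 = x_pool[i], x_pool[j]
    y1, y2 = f(x1), f(x2)
    a = (y2 - y1) / (x2 - x1)         # solve the 2x2 system for h(x) = ax + b
    b = y1 - a * x1
    a_vals.append(a)
    b_vals.append(b)

# g2_bar(x) = a_bar * x + b_bar, from the mean of a and the mean of b
a_bar, b_bar = np.mean(a_vals), np.mean(b_vals)

# bias = E_x[(g_bar(x) - f(x))^2], estimated on a dense grid
x_test = np.linspace(-1, 1, 1001)
bias = np.mean((a_bar * x_test + b_bar - f(x_test)) ** 2)
print(a_bar, b_bar, bias)
```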

Now I have two questions:
1. Please let me know whether I am proceeding in the right direction or not.
2. When I try to repeat this process with a polynomial model instead of a linear model, my calculated bias for the polynomial model varies by a large margin, even if the sample data points don't change. For the polynomial I also took the mean of the coefficients, but my answer (both g bar and bias) still varies greatly with each run. What am I missing here?

 yaser 06-05-2015 12:35 AM

Re: Bias-Variance Analysis

Quote:
 Originally Posted by sayan751 (Post 11964) 1. Please let me know whether I am proceeding in the right direction or not. 2. When I try to repeat this process with a polynomial model instead of a linear model, my calculated bias for the polynomial model varies by a large margin, even if the sample data points don't change. For the polynomial I also took the mean of the coefficients, but my answer (both g bar and bias) still varies greatly with each run. What am I missing here?
1. Your approach is correct. While sampling from a fixed 1000-point set is not the same as sampling from the whole domain, it should be close enough.

2. Not sure if this is the reason, but if you are still using a 2-point training set, a polynomial model will have too many parameters, leading to non-unique solutions that could vary wildly.

 sayan751 06-05-2015 12:49 AM

Re: Bias-Variance Analysis

Thank You Prof. Yaser for your reply.

I am using a 10-point dataset for the polynomial model. However, the problem I am referring to defines y = f(x) + noise = x + noise.

Previously, by mistake, I was treating y (rather than just x) as f(x). Later I noticed that the calculations of bias and variance concentrate purely on f(x). So I ignored the noise, and now I get stable bias and variance for the polynomial model on each run.
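The fix described above can be sketched as follows (degree 4 is an assumption here; any degree below the sample size N = 10 gives a unique least-squares fit):

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: x                  # target from the problem statement: f(x) = x

deg, N, runs = 4, 10, 3000       # assumed degree, fewer parameters than N
coef_sum = np.zeros(deg + 1)
for _ in range(runs):
    x = rng.uniform(0, 10, N)
    y = f(x)                     # noise ignored, as described in the post
    coef_sum += np.polyfit(x, y, deg)
g_bar = coef_sum / runs          # averaged coefficients define g_bar

# bias is now stable from run to run, and essentially zero here
# because the polynomial class contains the linear target f
x_test = np.linspace(0, 10, 1001)
bias = np.mean((np.polyval(g_bar, x_test) - f(x_test)) ** 2)
print(bias)
```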

 prithagupta.nsit 06-14-2015 03:03 PM

Re: Bias-Variance Analysis

Hello,

I have a few questions if we consider the following model:
Suppose instances x are distributed uniformly in X = [0, 10] and outputs are given by
y = f(x) + e = x + e,
where e is an error term with a standard normal distribution.

Now the task is to analyse the decomposition of the generalization error into bias + variance + noise by generating random samples of size N = 10, fitting the models gi, and determining the predictions and prediction errors for x = 0, 1/100, ..., 10.

1. During the calculation of g bar, bias, and variance, won't it be wrong not to consider the error term when generating the data sets? If not, why?

2. How can we calculate the noise separately for the polynomial hypothesis?

3. My understanding of how to calculate the predictions and prediction errors:
the prediction would be the value given by g bar at x, and the prediction error would be its difference from the value of f(x). Am I correct?

Looking forward to a reply :)

 yaser 06-19-2015 01:38 AM

Re: Bias-Variance Analysis

Quote:
 Originally Posted by prithagupta.nsit (Post 11970) Hello, I have a few questions if we consider the following model: Suppose instances x are distributed uniformly in X = [0, 10] and outputs are given by y = f(x) + e = x + e, where e is an error term with a standard normal distribution. Now the task is to analyse the decomposition of the generalization error into bias + variance + noise by generating random samples of size N = 10, fitting the models gi, and determining the predictions and prediction errors for x = 0, 1/100, ..., 10. 1. During the calculation of g bar, bias, and variance, won't it be wrong not to consider the error term when generating the data sets? If not, why? 2. How can we calculate the noise separately for the polynomial hypothesis? 3. My understanding of how to calculate the predictions and prediction errors: the prediction would be the value given by g bar at x, and the prediction error would be its difference from the value of f(x). Am I correct? Looking forward to a reply :)
Would you clarify some points as I didn't quite understand the questions? First, I take it that what you referred to as model is the target function (target distribution in this noisy case). If so, what is the learning model (hypothesis set) you are using? Perhaps you can rephrase your three questions after you define the model.

 prithagupta.nsit 06-20-2015 03:51 AM

Re: Bias-Variance Analysis

Dear Prof. Mostafa,

The two hypothesis sets are:

g1(x) = b

g2(x) = α4 x^4 + α3 x^3 + α2 x^2 + α1 x + b

Analyze the decomposition of the generalization error into bias + variance + noise by generating random samples of size N = 10, fitting the models gi, and determining the predictions and prediction errors for x = 0, 1/100, ..., 10.
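As a sketch of this setup (Python assumed; the x term of the degree-4 model and np.polyfit as the least-squares fitter are editorial assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 10)            # one random sample of size N = 10
y = x + rng.normal(0, 1, 10)          # y = f(x) + e = x + e

b1 = y.mean()                         # fit g1(x) = b
coeffs = np.polyfit(x, y, 4)          # fit g2: degree-4 polynomial

x_grid = np.arange(0, 10.01, 0.01)    # x = 0, 1/100, ..., 10
pred1 = np.full_like(x_grid, b1)      # predictions of g1 on the grid
pred2 = np.polyval(coeffs, x_grid)    # predictions of g2 on the grid

# prediction errors measured against the noiseless target f(x) = x
err1 = np.mean((pred1 - x_grid) ** 2)
err2 = np.mean((pred2 - x_grid) ** 2)
print(err1, err2)
```

Repeating this over many samples and averaging the predictions pointwise is what the bias-variance decomposition below operates on.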

How do we handle the noise, and during the calculation of bias and variance, how can we ignore the error e in the target function?

How do we determine the predictions and prediction errors for different values of x?

 yaser 06-20-2015 05:31 AM

Re: Bias-Variance Analysis

Quote:
 Originally Posted by prithagupta.nsit (Post 11978) How do we handle the noise, and during the calculation of bias and variance, how can we ignore the error e in the target function? How do we determine the predictions and prediction errors for different values of x?
The formula for decomposing the out-of-sample error into bias+variance+noise is discussed in Lecture 11 of the Learning From Data online course, in the part corresponding to slides 18-20.

If you look at this derivation, what you refer to as the error in the target function (which I assume is the noisy part) is not ignored. Also, the formula is given for each value of x.

Of course, evaluating these terms explicitly requires knowledge of the target f, which is the case in bias-variance analysis in general. You can calculate them in your example since you spelled out the target. The benefit is to illustrate how these quantities change as you vary the number of data points, the level of noise, etc.
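Putting the decomposition into code, a minimal sketch for this thread's example (the constant model g1(x) = b is assumed here; the same loop works for g2 by swapping in a polynomial fit):

```python
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: x                        # target from the thread's example
sigma = 1.0                            # standard-normal noise
N, runs = 10, 5000

x_test = np.arange(0, 10.01, 0.01)     # x = 0, 1/100, ..., 10

# predictions of the constant model g1(x) = b for many independent datasets
preds = np.empty((runs, x_test.size))
for r in range(runs):
    x = rng.uniform(0, 10, N)
    y = f(x) + rng.normal(0, sigma, N)
    preds[r] = y.mean()                # least-squares constant fit

g_bar = preds.mean(axis=0)                  # g_bar(x), pointwise over datasets
bias = np.mean((g_bar - f(x_test)) ** 2)    # E_x[(g_bar(x) - f(x))^2]
var = np.mean(preds.var(axis=0))            # E_x[E_D[(g(x) - g_bar(x))^2]]
noise = sigma ** 2                          # E[e^2], not ignored in the formula

# the expected out-of-sample error decomposes as bias + var + noise
print(bias, var, noise)
```

Note that the noise enters twice: it perturbs the training sets (inflating the variance term) and it contributes its own σ² term, which is why it cannot simply be dropped.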

The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.