Bias-Variance Analysis

Re: Bias-Variance Analysis
It is not necessarily the best approximation of the target function, but it is often close. If we have one, infinite-size training set, and the infinite computational power that goes with it, we can arrive at the best approximation. In the bias-variance analysis, we are given an infinite number of finite training sets, and we are restricted to using one of these finite training sets at a time, then averaging the resulting hypotheses. This restriction can take us away from the absolute optimum, but usually not by much.
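To make the averaging concrete, here is a minimal sketch of approximating g bar from many finite training sets. The target sin(πx) and the 2-point training sets are assumptions echoing the book's running example, not details from this thread:

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    return np.sin(np.pi * x)  # assumed example target f(x)

# Fit h(x) = a x + b on many independent 2-point training sets,
# then average the coefficients to approximate g_bar(x).
n_datasets, n_points = 10_000, 2
coeffs = np.empty((n_datasets, 2))
for i in range(n_datasets):
    x = rng.uniform(-1, 1, n_points)
    y = target(x)
    coeffs[i] = np.polyfit(x, y, deg=1)  # least-squares fit: [a, b]

# g_bar(x) ≈ a_bar * x + b_bar, the "average hypothesis"
a_bar, b_bar = coeffs.mean(axis=0)
```

Each individual line can be far from the target, yet the average of the fitted coefficients settles down quickly, which is exactly the g bar used in the decomposition.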

Re: Bias-Variance Analysis
Thank you very much for your answer, Prof. Yaser. It clarified my doubt.
My kind regards, Andrea 
Re: Bias-Variance Analysis
Hi,
I have a doubt regarding g bar. I tried to calculate the bias for the second learner, i.e. h(x) = ax + b. This is how I did it:
Now I have two questions: 1. Please let me know whether I am proceeding in the right direction or not. 2. When I try to repeat this process with a polynomial model instead of a linear model, my calculated bias for the polynomial model varies by a great margin, even if the sample data points don't change. For the polynomial as well, I took the mean of the coefficients, but still my answer (both g bar and bias) varies greatly with each run. What am I missing here?
Re: Bias-Variance Analysis
Quote:
2. Not sure if this is the reason, but if you are still using a 2-point training set, a polynomial model will have too many parameters, leading to non-unique solutions that could vary wildly.
Re: Bias-Variance Analysis
Thank you, Prof. Yaser, for your reply.
I am using a 10-point dataset for the polynomial model. However, the problem I am referring to defines y = f(x) + noise = x + noise. Previously, by mistake, I was treating f(x) as y rather than only x. Later I noticed that all the calculations of bias and variance concentrate purely on f(x). Hence I then ignored the noise, and now I am getting stable bias and variance for the polynomial model on each run.
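The stability described above can be checked with a short sketch: estimate the bias of a polynomial learner twice, with independent random seeds, and compare. The polynomial degree, grid, and dataset counts below are assumptions for illustration:

```python
import numpy as np

def estimate_bias(seed, degree=3, n_datasets=2000, n_points=10):
    """Estimate the bias of a degree-`degree` polynomial learner
    against the noiseless target f(x) = x.

    Data are generated as y = x + e with standard-normal noise e;
    the bias compares g_bar to f, not to the noisy y.
    """
    rng = np.random.default_rng(seed)
    xs = np.linspace(0, 10, 101)                 # evaluation grid
    preds = np.empty((n_datasets, xs.size))
    for i in range(n_datasets):
        x = rng.uniform(0, 10, n_points)
        y = x + rng.standard_normal(n_points)    # noisy training data
        preds[i] = np.polyval(np.polyfit(x, y, degree), xs)
    g_bar = preds.mean(axis=0)                   # average hypothesis
    return np.mean((g_bar - xs) ** 2)            # bias vs f(x) = x

# Two independent runs should give nearly the same (small) bias,
# provided enough datasets are averaged.
b1, b2 = estimate_bias(seed=0), estimate_bias(seed=1)
```

With enough datasets in the average, run-to-run variation in the bias estimate comes only from Monte Carlo error and shrinks as n_datasets grows.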
Re: Bias-Variance Analysis
Hello,
I have a few questions if we consider the following model. Suppose instances x are distributed uniformly in X = [0, 10] and outputs are given by y = f(x) + e = x + e, where e is an error term with a standard normal distribution. We analyze the decomposition of the generalization error into bias + variance + noise by generating random samples of size N = 10, fitting the models gi, and determining the predictions and prediction errors for x = 0, 1/100, ..., 10.
1. During the calculations of g bar, bias, and variance, wouldn't it be wrong not to consider the error during the generation of the data sets? If not, why?
2. How can we calculate the noise separately for the polynomial hypothesis?
3. My understanding of how to calculate the predictions and prediction errors: the predictions would be the values given by g bar at x, and the prediction error would be the difference between that value and the value given by f(x). Am I correct?
Looking forward to a reply :)
Re: Bias-Variance Analysis
Dear Prof. Mostafa,
The two hypothesis sets are: g1(x) = b and g2(x) = α4 x^4 + α3 x^3 + α2 x^2 + α1 x + b. We analyze the decomposition of the generalization error into bias + variance + noise by generating random samples of size N = 10, fitting the models gi, and determining the predictions and prediction errors for x = 0, 1/100, ..., 10. How do we generate the noise, and during the calculation of bias and variance, how can we ignore the error e in the target function? How do we determine the predictions and prediction errors for different values of x?
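For the quartic model g2, the predictions and prediction errors at each grid point can be sketched as follows. This is an illustrative sketch, not the book's reference solution: the error e is included while generating each dataset, and only excluded when comparing g bar to f; the dataset count is an assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    return x  # target from the problem statement: y = x + e

xs = np.linspace(0, 10, 1001)   # grid x = 0, 1/100, ..., 10
n_datasets, n_points = 3000, 10

preds = np.empty((n_datasets, xs.size))
for i in range(n_datasets):
    x = rng.uniform(0, 10, n_points)
    y = f(x) + rng.standard_normal(n_points)   # noisy observations
    preds[i] = np.polyval(np.polyfit(x, y, 4), xs)  # fit g2, quartic

g_bar = preds.mean(axis=0)      # predictions at each grid point
pred_err = g_bar - f(xs)        # prediction errors vs the noiseless target
```

Because f itself lies inside the quartic hypothesis set, g bar converges to f and the prediction errors shrink as more datasets are averaged; the variance term, not the bias, is what separates g2 from g1 here.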
The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.