bias and variance - definition of g bar
When considering bias and variance, the bias is defined as the squared difference between gbar and f. The lecture said that gbar is the expected value of g. The book said that one can think of this as the average of many g's returned by running the training algorithm on a large number of instantiations of data sets. I have two questions:
1. If g has multiple parameters, do you average the curves, or do you average the individual parameters (or does it matter)?
2. When the book says we "can think of it this way," does it mean this is not the exact definition? The point of bias is to isolate the part of the error that has nothing to do with the errors caused by the sample data set, or the noise in the measurement. Is there a reason why the bias is not determined by simply minimizing the squared error between the target function f and the form of the hypothesis set, and returning the value of that minimum? Alternatively, would it not be just as good to create a (digitized) set of all possible g's, calculate the squared error for each, and return the smallest error found? I tried this for the H = b and f = sin(pi*x) case, and I got bias = 0.5.
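For what it's worth, both routes can be checked numerically. Here is a minimal sketch, assuming H contains only constant hypotheses h(x) = b, two-point data sets drawn uniformly from [-1, 1], and least-squares fitting (so the fitted constant is the average of the two y values):

```python
import numpy as np

x = np.linspace(-1, 1, 10_001)
f = np.sin(np.pi * x)

# Brute force: try a grid of constant hypotheses h(x) = b and keep the best.
bs = np.linspace(-2, 2, 4001)
errors = [np.mean((b - f) ** 2) for b in bs]
print(min(errors))                     # about 0.5, attained at b = 0

# Simulation: fit b = (y1 + y2) / 2 on many 2-point data sets, then average.
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=(100_000, 2))
g_bar = np.sin(np.pi * xs).mean()      # average fitted constant, near 0
print(np.mean((g_bar - f) ** 2))       # bias of gbar, also about 0.5
```

In this particular case the two numbers agree, but that is not guaranteed in general: gbar is the average of fitted hypotheses, not the minimizer of the squared error over H, so the bias can differ from the best achievable error in H.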
Re: bias and variance - definition of g bar
1. If your g's have a nice, simple relationship, as in a polynomial, you can average the polynomials by averaging their coefficients (the distributive property). So it doesn't matter: you may do it either way and get the same answer.
2. I'm not quite connecting with the question (long day), so I'll leave it available for someone else to jump on.
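The linearity point can be sanity-checked with a quick sketch (the quadratic coefficients below are hypothetical, just to illustrate):

```python
import numpy as np

# Two hypothetical quadratic fits g1, g2, coefficients as [c0, c1, c2].
coeffs = np.array([[1.0, -2.0, 0.5],
                   [3.0,  0.0, 1.5]])

x = np.linspace(-1, 1, 5)

# Average the curves pointwise...
# (np.polyval wants the highest-degree coefficient first, hence c[::-1])
curves = np.array([np.polyval(c[::-1], x) for c in coeffs])
avg_curve = curves.mean(axis=0)

# ...or average the coefficients first, then evaluate.
avg_coeffs = coeffs.mean(axis=0)
print(np.allclose(avg_curve, np.polyval(avg_coeffs[::-1], x)))  # True
```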
Re: bias and variance - definition of g bar
Thanks, that clarifies things!
Re: bias and variance - definition of g bar
I have another, sort of related, question about gbar.
The lecture and text implied that gbar doesn't depend on the data set (since it's the expected value over all data sets), but I get different answers for gbar (with different resulting values for the bias) if I minimize the squared error over a thousand data points and average several of those runs, versus minimizing the squared error over 2 data points a couple of million times. Does this mean I must be doing something wrong? Or is it expected that the size of your data sets can legitimately give you different gbars, even though gbar doesn't depend on any particular data set?
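One possibility worth checking: gbar is the expectation over data sets of a fixed size N, so changing N changes which gbar you are estimating, even though no single data set matters. A quick sketch for least-squares lines fitted to f(x) = sin(pi*x), the same setup as Example 2.8, shows the effect:

```python
import numpy as np

rng = np.random.default_rng(1)

def g_bar_slope(n_points, n_datasets=20_000):
    # Average, over many data sets of size n_points, the least-squares slope.
    x = rng.uniform(-1, 1, size=(n_datasets, n_points))
    y = np.sin(np.pi * x)
    xc = x - x.mean(axis=1, keepdims=True)
    yc = y - y.mean(axis=1, keepdims=True)
    a = (xc * yc).sum(axis=1) / (xc ** 2).sum(axis=1)
    return a.mean()

print(g_bar_slope(2))     # about 0.78: two-point data sets, as in Example 2.8
print(g_bar_slope(1000))  # about 3/pi ~ 0.95: the best straight-line fit
```

With N = 2 the line interpolates the two points, and the average slope comes out near 0.78; with N = 1000 the least-squares fit approaches the best linear approximation to sin(pi*x), whose slope is 3/pi. So different N giving different gbars is not by itself a sign of a bug.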
Re: bias and variance - definition of g bar
I am struggling to replicate the variance of H_1 in Example 2.8 of the text. I was able to get the bias correct (and both the bias and variance for H_0), as well as the related quiz problem, so this is really puzzling me.
I'm trying to narrow down where my mistake might be. Can someone please verify whether the correct average hypothesis is g_bar(x) = a_mean * x + b_mean, where a_mean ~= 0.776 and b_mean ~= 0? When I plot that, it does look like the figure in the book. Also, when I take the standard deviation (over the data sets) of the coefficients a and b, I get std(a) ~= 1.52 and std(b) ~= 0.96. Do those look right? I am truly puzzled here!
Re: bias and variance - definition of g bar
Examining the two charts for hypothesis H_1 in the book brings up an interesting point that may be helpful. The top chart shows various lines (g's) plotted on the graph. Each line is the result of taking two randomly selected points on the x-axis, evaluating f at them to get the y values, and then putting a line through the two points. Those points will always be on the sinusoid, but the lines (the g's) representing the associated function g(x) do not have to be. Notice the two outlying lines in the upper left corner of the top chart. They are totally off the sinusoid, but if g(x) - gbar(x) is to be evaluated in that corner, then the values of the outlier g's at that x value (close to -1) must be included in the determination of gbar(x). The only way to get those values is to calculate the y value of each associated line at that point.

Re: bias and variance - definition of g bar
I think my individual hypothesis lines are all correct; I have checked that each one goes through its two points.

Re: bias and variance - definition of g bar
All the numbers you mention are approximately correct. You can now explicitly compute bias(x) and var(x) in terms of x, mean(a), mean(b), var(a), and var(b) (here mean(b) = 0):

bias(x) = (mean(a)*x - sin(pi*x))^2
var(x) = var(a)*x^2 + var(b)

(the cross term involving cov(a, b) vanishes by symmetry). Bias is the average of bias(x) over x; var is the average of var(x) over x. Taking x uniform on [-1, 1], one can show that

bias = mean(a)^2/3 - 2*mean(a)/pi + 1/2 ~= 0.21
var = var(a)/3 + var(b) ~= 1.69

Note: you can also compute the bias and variance via simulation.
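The simulation route mentioned in the post can be sketched as follows, assuming two-point data sets drawn uniformly from [-1, 1] with g the line through the two points, as in Example 2.8:

```python
import numpy as np

rng = np.random.default_rng(2)
n_datasets = 100_000

# Each data set: two points on f(x) = sin(pi*x); g is the line through them.
x = rng.uniform(-1, 1, size=(n_datasets, 2))
y = np.sin(np.pi * x)
a = (y[:, 1] - y[:, 0]) / (x[:, 1] - x[:, 0])   # slope of each g
b = y[:, 0] - a * x[:, 0]                       # intercept of each g

x_test = np.linspace(-1, 1, 1001)               # fresh test points
g_bar = a.mean() * x_test + b.mean()            # average hypothesis

bias = np.mean((g_bar - np.sin(np.pi * x_test)) ** 2)
var_x = a.var() * x_test ** 2 + 2 * np.cov(a, b)[0, 1] * x_test + b.var()
var = var_x.mean()
print(round(bias, 2), round(var, 2))            # about 0.21 and 1.69
```

Note that the test points are fresh, not the training points; reusing the training data at this step is a common source of wrong variance estimates.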

Re: bias and variance - definition of g bar
I'm having doubts about the variance value in Example 2.8, since it implies that the root-mean-square deviation of the generated lines from their average is 1.3 = sqrt(1.69). So the magnitude of the average difference of (a*x + b) from (a_mean*x + b_mean), evaluated at a given point on the sinusoid, is bigger than the root-mean-square value (0.7071) of the sinusoid that generated the data points in the first place? I'm inclined to doubt that.
The mean squared deviation between the slope of each generated line and a_mean is larger than 1.69, so at this point I have no idea where that variance number came from.
Re: bias and variance - definition of g bar
Yes, I can see that on the charts for Example 2.8, but those outlying points do not exert an effect (at a given x) in the averaged g(D)[x] calculation I am using. So I am wrong on both counts!
A careful rereading of page 63 has led me to try averaging over the calculated data-set g's at an arbitrary (generic?) point x, and using that to calculate the variance of g(D)[x]. This seems to be a step in the right direction, since the calculated variance is now a function of that arbitrary point x and has a minimum around x = 0, just like the chart in Example 2.8. But based on the values at the extremes and in the middle, I can't see how my average variance over the domain [-1, 1] would be as low as 1.69. We shall see. Thanks so much for your helpful comments; they are really appreciated, and this is a great class even if I am a little dense in absorbing some of the material. Have a great day.
Re: bias and variance - definition of g bar
Finally got it. Thanks to magdon for confirming one part of my calculation, so that I did not need to waste time poring over it. Thanks also to yaser for a tip, in another thread, that helped me a lot. It turns out that I was incorrectly reusing the sample data set to calculate the variance (via simulation). Instead, I needed to generate a fresh data set for that.
It's funny how sometimes making mistakes at first leads to a much more solid understanding later! 
Re: bias and variance - definition of g bar
I've got it too! Repeatedly evaluate var[g(D)[x]] over the entire collection of data sets, with x ranging from -1 to 1, and average those values to get 1.69! Feeling a real sense of accomplishment here.

The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.