Re: bias and variance: definition of gbar
I have another, sort of related question about gbar.
The lecture and the text implied that gbar doesn't depend on the data set, since it's the expected value over all data sets. But I get different answers for gbar (and therefore different values for the bias) depending on how I run the experiment: minimizing the squared error over a thousand data points and averaging several such fits gives one answer, while minimizing the squared error over 2 data points a couple million times gives another.
Does this mean I must be doing something wrong? Or is it expected that the size of the data sets can legitimately give you different gbars, even though gbar doesn't depend on any particular data set?
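For concreteness, here's a minimal sketch of the kind of experiment I mean. It assumes the target f(x) = sin(pi*x) and the hypothesis set h(x) = ax from the textbook example; my actual target and hypothesis set may differ, and the dataset counts are scaled down so it runs quickly:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # hypothetical target; substitute whatever target you're using
    return np.sin(np.pi * x)

def gbar_slope(n_points, n_datasets):
    """Estimate the slope of gbar for h(x) = a*x by averaging the
    least-squares slope over many data sets of size n_points."""
    slopes = np.empty(n_datasets)
    for i in range(n_datasets):
        x = rng.uniform(-1.0, 1.0, n_points)
        y = f(x)
        # closed-form least-squares fit of y = a*x (no intercept)
        slopes[i] = (x @ y) / (x @ x)
    return slopes.mean()

# many tiny data sets vs. a few large data sets
print(gbar_slope(2, 100_000))
print(gbar_slope(1000, 100))
```

The two printed averages are what I'm comparing: each is an estimate of the slope of gbar, but computed from data sets of different sizes.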
