04-30-2012, 05:25 PM
shockwavephysics (Member; Join Date: Apr 2012; Posts: 17)
bias and variance - definition of g bar

When considering bias and variance, the bias is defined as the expected squared difference between gbar(x) and f(x). The lecture said that gbar is the expected value of g. The book says one can think of this as the average of many g's obtained by running the training algorithm on a large number of instantiations of the data set.
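To make sure I am picturing this correctly, here is a rough numerical sketch of how I would estimate gbar by averaging many g's. The data set size N, the number of data sets, the input distribution, and the use of the constant hypothesis set from my experiment below are just my own choices for illustration:

[CODE]
import numpy as np

# Rough sketch (my own illustration): estimate gbar by averaging the g's
# returned from many instantiations of the data set.  The hypothesis set is
# the constant model H = {h(x) = b} from my experiment below; N and the
# input distribution are assumptions on my part.

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x)          # target
N = 2                                    # points per data set (assumed)
num_datasets = 100000

bs = np.empty(num_datasets)
for i in range(num_datasets):
    x = rng.uniform(-1.0, 1.0, size=N)   # one instantiation of a data set
    y = f(x)                             # noiseless targets (assumed)
    bs[i] = y.mean()                     # least-squares constant = g^(D)
gbar = bs.mean()                         # gbar for this model is a constant
print("gbar ~", round(gbar, 3))          # ~ 0 by symmetry
[/CODE]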

I have two questions:

1. If g has multiple parameters, do you average the curves, or do you average the individual parameters (or does it matter)? (The small numeric check at the end of this post compares the two.)

2. When the book says we can think of it this way, does that mean this is not the exact definition? The point of the bias is to isolate the part of the error that has nothing to do with the errors caused by the sampled data set, or with the noise in the measurements. Is there a reason why the bias is not determined by simply minimizing the squared error between the target function f and the form of the hypothesis set, and returning the value of that minimum? Alternatively, would it not be just as good to create a (digitized) set of all possible g's, calculate the squared error for each, and return the smallest error found? I tried this for the H = {h(x) = b} and f(x) = sin(pi*x) case, and I got bias = 0.5.
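Here is roughly what I did to get the 0.5 figure, together with the "minimize directly against f" alternative I described above. The data set size N = 2, the uniform input distribution on [-1, 1], and the noiseless target are assumptions I made and may not match the book's exact setup:

[CODE]
import numpy as np

# Sketch of the two computations in question 2, for H = {h(x) = b} and
# f(x) = sin(pi*x).  Assumptions on my part: x uniform on [-1, 1], noiseless
# targets, N = 2 points per data set.

rng = np.random.default_rng(1)
f = lambda x: np.sin(np.pi * x)
N, num_datasets = 2, 200000
x_test = rng.uniform(-1.0, 1.0, size=100000)      # points for estimating E_x[.]

# (a) bias via gbar: average the hypotheses returned over many data sets
bs = np.array([f(rng.uniform(-1.0, 1.0, size=N)).mean()
               for _ in range(num_datasets)])
gbar = bs.mean()                                  # ~ 0 for this symmetric setup
bias = np.mean((gbar - f(x_test)) ** 2)
print("gbar ~ %.3f, bias ~ %.3f" % (gbar, bias))  # bias ~ 0.5

# (b) the "minimize directly over the hypothesis set" idea: the best single
# constant in hindsight and its squared error against f
b_grid = np.linspace(-1.0, 1.0, 401)
errs = [np.mean((b - f(x_test)) ** 2) for b in b_grid]
best = int(np.argmin(errs))
print("best-in-H error ~ %.3f at b ~ %.2f" % (errs[best], b_grid[best]))  # ~ 0.5 at b ~ 0

# Here the two numbers coincide because gbar happens to equal the best
# constant (both ~ 0 by symmetry); I am not sure that holds in general.
[/CODE]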
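To make question 1 concrete, here is the small numeric check I mentioned: it compares averaging the fitted curves pointwise with averaging the parameters, for a model linear in its parameters (a line) and for a contrived nonlinear re-parameterization I made up. The choices of N, the data distribution, and model B are my own, only meant to see whether the two averages can differ:

[CODE]
import numpy as np

# Question 1 check: compare (i) averaging the fitted curves pointwise with
# (ii) averaging the parameters and evaluating once.  Model A is a line
# h(x) = a*x + b; model B is a contrived re-parameterization, h(x) = exp(w)*x,
# fit through the origin, just to see whether the two averages can differ.

rng = np.random.default_rng(2)
f = lambda x: np.sin(np.pi * x)
N, num_datasets = 5, 20000
xs = np.linspace(-1.0, 1.0, 201)        # grid on which the curves are compared

curves_A, params_A = [], []
curves_B, params_B = [], []
for _ in range(num_datasets):
    x = rng.uniform(-1.0, 1.0, size=N)
    y = f(x)
    a, b = np.polyfit(x, y, 1)          # least-squares line
    params_A.append((a, b))
    curves_A.append(a * xs + b)
    slope = np.dot(x, y) / np.dot(x, x) # least-squares line through the origin
    w = np.log(slope)                   # slope > 0 here since x*sin(pi*x) >= 0
    params_B.append(w)
    curves_B.append(np.exp(w) * xs)

a_bar, b_bar = np.mean(params_A, axis=0)
diff_A = np.mean(curves_A, axis=0) - (a_bar * xs + b_bar)
print("model A, max difference:", np.max(np.abs(diff_A)))   # ~ 0 (floating point)

w_bar = np.mean(params_B)
diff_B = np.mean(curves_B, axis=0) - np.exp(w_bar) * xs
print("model B, max difference:", np.max(np.abs(diff_B)))   # nonzero
[/CODE]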