Hi,
I have a doubt regarding g bar (the average hypothesis).
I tried to calculate the bias for the second learner, i.e. h(x) = ax + b. This is how I did it:
- Generated around 1000 data points (x ranging from -1 to 1)
- Then picked two of these data points at random
- Solved for a and b using the 2x2 matrix equation (two points, two unknowns)
- Repeated this process around 3000 times
- Finally took the mean of a and the mean of b, which formed g2 bar
- Used this g2 bar to calculate the corresponding bias, which also matched the given value of the bias (a rough sketch of this procedure is included after this list)
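
For reference, here is a minimal sketch of the steps above, not my exact code. I am assuming the target f(x) = sin(pi*x) on [-1, 1] from the usual bias-variance example just to make it concrete; the names and the number of runs are placeholders.

```python
import numpy as np

# Minimal sketch of the procedure above (NOT the exact code).
# ASSUMPTION: target f(x) = sin(pi * x) on [-1, 1]; swap in the actual target.
rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x)

n_runs = 3000
coeffs = np.empty((n_runs, 2))                # one (a, b) pair per run

for i in range(n_runs):
    x = rng.uniform(-1, 1, size=2)            # pick two points at random
    A = np.column_stack([x, np.ones(2)])      # [[x1, 1], [x2, 1]]
    coeffs[i] = np.linalg.solve(A, f(x))      # exact fit of h(x) = a*x + b

a_bar, b_bar = coeffs.mean(axis=0)            # g2 bar: mean of a and mean of b

# bias = E_x[(g_bar(x) - f(x))^2], estimated on a dense grid of test points
x_test = np.linspace(-1, 1, 1001)
bias = np.mean((a_bar * x_test + b_bar - f(x_test)) ** 2)
print("a_bar =", a_bar, "b_bar =", b_bar, "bias =", bias)
```

Solving the 2x2 system with np.linalg.solve is the "solved using matrix" step; fitting a degree-1 polynomial through the two points would give the same a and b.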
Now I have two questions:
1. Please let me know whether I am proceeding in the right direction or not.
2. When I try to repeat this process with a polynomial model instead of the linear model, my calculated bias for the polynomial model varies by a large margin, even when the sample data points don't change. For the polynomial I also took the mean of the coefficients, but my answer (both g bar and the bias) still varies greatly from run to run. What am I missing here? (A sketch of what I mean by the polynomial variant is included below.)
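
Here is the polynomial variant as I understand it, again only as a sketch under the same assumed target; the degree and the number of points per run are placeholders I am making up for illustration, not my actual settings.

```python
import numpy as np

# Sketch of the polynomial variant (degree and points per run are placeholders).
rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x)

degree = 2
n_points = 3                                   # points sampled per run (placeholder)
n_runs = 3000

coeffs = np.empty((n_runs, degree + 1))
for i in range(n_runs):
    x = rng.uniform(-1, 1, size=n_points)
    coeffs[i] = np.polyfit(x, f(x), degree)    # highest-degree coefficient first

c_bar = coeffs.mean(axis=0)                    # coefficient-wise average -> g bar

x_test = np.linspace(-1, 1, 1001)
bias = np.mean((np.polyval(c_bar, x_test) - f(x_test)) ** 2)
print("c_bar =", c_bar, "bias =", bias)
```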