Apparently quite a few people got the same wrong answer (g_hat = 1.4), according to this post:
http://book.caltech.edu/bookforum/showthread.php?t=430.
I would really appreciate suggestions to help me figure out where my reasoning goes wrong. My approach is straightforward: generate a two-point sample:
Xsamples = (rand(2, 1) - 0.5)*2;  % two inputs, uniform on [-1, 1]
yn = sin(pi*Xsamples);            % target values f(x) = sin(pi*x)
Then the slope a is obtained from
regress(yn, Xsamples)
which minimizes the squared error with no intercept term, so the hypothesis is h(x) = a*x.
I repeat this 1000 times and average the resulting values of a (since the inputs are uniformly distributed, the average approximates the expected hypothesis), which gives me 1.42.
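For reference, here is a minimal sketch of that same procedure in Python/NumPy, in case the issue is MATLAB-specific. For a single feature with no intercept, the least-squares slope that regress would return is (x.y)/(x.x), so I compute it in closed form; the trial count and seed are my own choices, not from the course:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000
a_vals = np.empty(n_trials)

for i in range(n_trials):
    x = rng.uniform(-1.0, 1.0, size=2)   # two inputs, uniform on [-1, 1]
    y = np.sin(np.pi * x)                # noiseless targets f(x) = sin(pi*x)
    # Least-squares slope for h(x) = a*x (no intercept):
    a_vals[i] = (x @ y) / (x @ x)

a_bar = a_vals.mean()                    # Monte Carlo estimate of the mean slope
print(a_bar)
```

With enough trials this settles a little above 1.4, which matches what my MATLAB run produces, so at least the two implementations agree with each other.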
Any advice is appreciated.