To calculate the average hypothesis $\bar{g}(x)$,
I generated several data sets, each consisting of two points with $y = \sin(\pi x)$. For each data set, I found the hypothesis $h(x) = ax$ (a slope $a$) that minimized the total squared error over the two points. I did this by writing the error for a single data set $\{(x_1, y_1), (x_2, y_2)\}$ as

$$E(a) = (a x_1 - y_1)^2 + (a x_2 - y_2)^2,$$

differentiating with respect to $a$, setting the result to zero, and solving for $a$. This gave me

$$a = \frac{x_1 y_1 + x_2 y_2}{x_1^2 + x_2^2}.$$

If I then average my per-dataset slopes (the $a$'s), I get 1.42.
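
For concreteness, here is a minimal sketch of the procedure in NumPy (assuming the $x$'s are drawn uniformly from $[-1, 1]$, which is the usual setup for this problem):

```python
import numpy as np

rng = np.random.default_rng(0)

n_datasets = 100_000
# Assumption: x is sampled uniformly from [-1, 1]; each data set has
# two points with y = sin(pi * x).
x = rng.uniform(-1.0, 1.0, size=(n_datasets, 2))
y = np.sin(np.pi * x)

# Per-dataset least-squares slope through the origin:
#   a = (x1*y1 + x2*y2) / (x1^2 + x2^2)
a = (x * y).sum(axis=1) / (x**2).sum(axis=1)

print(a.mean())  # average of the per-dataset slopes
```

The printed value should match the 1.42 above up to Monte Carlo noise, provided the $x$'s really are sampled this way.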

This seems wrong, not only because it isn't one of the available answer choices, but also because it doesn't yield a smaller bias than, for example, 0.79.

I've seen suggestions to use linear regression to calculate the $a$'s, but I don't think that's where I'm going wrong (and in any case, I'm not sure how to do the linear regression without an intercept term).
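
For what it's worth, linear regression without an intercept (regression through the origin) just fits $y \approx ax$ and has the same closed form as above, $a = \frac{\sum_i x_i y_i}{\sum_i x_i^2}$. A small sketch (the helper name is mine; `np.linalg.lstsq` with a single-column design matrix gives the same answer):

```python
import numpy as np

def slope_through_origin(x, y):
    """Least-squares slope for the no-intercept model y ~ a*x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return (x @ y) / (x @ x)

# One two-point data set:
x = np.array([0.3, -0.8])
y = np.sin(np.pi * x)

print(slope_through_origin(x, y))

# Same fit via lstsq: a single-column design matrix (no column of
# ones) means there is no intercept term.
print(np.linalg.lstsq(x[:, None], y, rcond=None)[0][0])
```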