Your observations are correct. The bias is only approximately constant; only for a linear model and a linear target function is it exactly constant. In general, the bias converges very quickly to a constant. This is because there is some "best" hypothesis h* in the hypothesis set, and for any N the final output g will be "scattered" around this h*, sometimes predicting above h*(x) on a particular x and sometimes below, on average giving the prediction of h*. This results in the average hypothesis ḡ being approximately h* for any N, and hence in the bias being approximately constant.
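If you would like to see this numerically, here is a minimal sketch (my own illustration, not from the text; the target f(x) = sin(πx), the test grid, and the trial count are arbitrary choices) that estimates the bias of a simple linear fit at several values of N. The estimated bias settles to an essentially constant value once N is moderate.

```python
# Sketch: empirical bias of a linear fit to a nonlinear target, vs. N.
# bias = average over test x of (g_bar(x) - f(x))^2, with g_bar estimated
# by averaging the learned hypothesis over many random datasets.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x)          # assumed target function
x_test = np.linspace(-1, 1, 200)         # points where bias is measured

def bias_linear(N, trials=1000):
    preds = np.empty((trials, x_test.size))
    for t in range(trials):
        x = rng.uniform(-1, 1, N)        # random dataset of size N
        y = f(x)                         # noiseless target for simplicity
        a, b = np.polyfit(x, y, 1)       # final hypothesis g on this dataset
        preds[t] = a * x_test + b
    g_bar = preds.mean(axis=0)           # average hypothesis g-bar
    return np.mean((g_bar - f(x_test)) ** 2)

for N in (2, 5, 10, 50, 200):
    print(N, bias_linear(N))             # bias settles to ~constant quickly
```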
The above discussion does not hold for nonparametric models like Nearest Neighbor, which do not fit the paradigm of a fixed hypothesis set. When N increases, the "hypothesis set" gets more "complex," and so the bias decreases with N (as your very nice experiment verifies). I congratulate you on delving deeper into the bias-variance decomposition and discovering this subtle phenomenon. If you would like to know more about this, you may refer to the section on Parametric versus Nonparametric models in e-Chapter 6, and also the discussion of the self-regularizing property of Nearest Neighbor just before Section 6.2.2, where we show some pictures to illustrate how the Nearest Neighbor hypothesis gets "more complicated" as you increase N.
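For contrast, here is the same bias estimate for a 1-Nearest-Neighbor rule (again an illustrative sketch with the same assumed target, not the experiment from the text). Because the effective hypothesis set grows with N, the estimated bias keeps decreasing as N increases.

```python
# Sketch: empirical bias of 1-Nearest-Neighbor regression vs. N.
# Here the "model" grows with the data, so the bias shrinks with N.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x)          # same assumed target as above
x_test = np.linspace(-1, 1, 200)

def bias_1nn(N, trials=1000):
    preds = np.empty((trials, x_test.size))
    for t in range(trials):
        x = rng.uniform(-1, 1, N)        # random dataset of size N
        y = f(x)
        # 1-NN prediction: value at the nearest training point
        idx = np.abs(x_test[:, None] - x[None, :]).argmin(axis=1)
        preds[t] = y[idx]
    g_bar = preds.mean(axis=0)           # average hypothesis g-bar
    return np.mean((g_bar - f(x_test)) ** 2)

for N in (2, 5, 10, 50, 200):
    print(N, bias_1nn(N))                # bias keeps decreasing with N
```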