I feel like I'm overthinking Exercise 4.7 (b) and I am hoping for a little bit of insight.
My gut instinct says that
$$Var[E_{\text{val}}(g^-)] = \frac{1}{K}P[g^-(x)\neq y]^2$$
I arrived at this idea by noting that a probability behaves somewhat like a standard deviation, which is the square root of the variance. So, since
$$Var[E_{\text{val}}(g^-)] = \frac{1}{K}Var_{x}[e(g^-(x),y)]$$
and
$$P[g^-(x)\neq y] = P[e(g^-(x),y)]$$
does
$$Var_{x}[e(g^-(x),y)] = P[g^-(x)\neq y]^2$$
???
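One quick way to test the conjectured identity in (b) is a small simulation: if $e$ is 0/1 classification error, then $e(g^-(x),y)$ is a Bernoulli indicator with parameter $p = P[g^-(x)\neq y]$, and its empirical variance can be compared against the candidate formulas. (This is my own rough sketch, not part of the exercise; NumPy and the illustrative value of $p$ are my assumptions.)

```python
import numpy as np

# Treat e(g^-(x), y) as a Bernoulli indicator with parameter
# p = P[g^-(x) != y], and compare its empirical variance against
# the conjectured p^2 and the Bernoulli formula p*(1-p).
rng = np.random.default_rng(0)
p = 0.3  # hypothetical error probability, chosen for illustration

errors = rng.random(1_000_000) < p  # one million 0/1 error indicators
empirical_var = errors.var()

print(f"empirical Var[e] = {empirical_var:.4f}")
print(f"p^2              = {p**2:.4f}")
print(f"p*(1-p)          = {p*(1-p):.4f}")
```

Worth noting: for a 0/1 indicator, $E[e^2] = E[e] = p$, so the variance works out to $p - p^2 = p(1-p)$; the two expressions $p^2$ and $p(1-p)$ agree exactly at $p = 0.5$, which is why the bound in part (c) is unaffected either way.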
Then for part (c) on the exercise, assuming that the above is true, I used the notion that
$$P[g^-(x)\neq y] \le 0.5$$
because if the probability of error were greater than 0.5, then the learned $g$ would just flip its classification. Therefore this shows that for any $g^-$ in a classification problem,
$$Var[E_{\text{val}}(g^-)] \le \frac{1}{K}(0.5)^2$$
and therefore:
$$Var[E_{\text{val}}(g^-)] \le \frac{1}{4K}$$
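As a numerical cross-check of this last bound (again my own sketch, assuming NumPy; $K$ and the error probabilities are arbitrary illustrative choices): simulate many validation sets of size $K$, compute $E_{\text{val}}$ on each as the mean of the 0/1 errors, and compare the variance of those estimates against $1/(4K)$.

```python
import numpy as np

# Check Var[E_val(g^-)] <= 1/(4K) by simulation: E_val is the mean of
# K i.i.d. 0/1 error indicators, so its variance should be largest
# when the error probability is 0.5.
rng = np.random.default_rng(1)
K = 100           # validation-set size (illustrative)
trials = 50_000   # number of simulated validation sets

for p in (0.1, 0.3, 0.5):  # hypothetical error probabilities
    E_val = (rng.random((trials, K)) < p).mean(axis=1)
    print(f"p={p}: Var[E_val]={E_val.var():.6f}  bound 1/(4K)={1/(4*K):.6f}")
```

The variance should sit at roughly $p(1-p)/K$ for each $p$, touching the $1/(4K)$ bound only at $p = 0.5$.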
Any indication as to whether I'm working along the correct lines would be appreciated!