Following up on my earlier reply: in my thread regarding Section 1.3, I presented an argument for the feasibility of learning that, if accepted, allows us to promise something a good bit stronger than what I offered earlier, which was based only on Hoeffding's inequality. The stronger promise is this:
(e) Assuming that you are given sufficient data and/or allowed a sufficiently large error tolerance $\epsilon$ so that the Hoeffding probability $2Me^{-2\epsilon^2 N}$ is ultra low, you will either produce a hypothesis $g$ that approximates $f$ well out of sample, or you will declare that you have failed.
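To make the quantitative side of (e) concrete, here is a minimal Python sketch (my own illustration, not from the thread) that evaluates the Hoeffding bound $2Me^{-2\epsilon^2 N}$ and solves for the sample size $N$ that drives it below a target probability $\delta$; the hypothesis-set size $M$, the tolerance $\epsilon$, and $\delta$ are illustrative assumptions.

```python
import math

def hoeffding_bound(M: int, eps: float, N: int) -> float:
    """Upper bound on P[|E_in(g) - E_out(g)| > eps] for a finite
    hypothesis set of size M and N training examples."""
    return 2 * M * math.exp(-2 * eps**2 * N)

def samples_needed(M: int, eps: float, delta: float) -> int:
    """Smallest N for which 2*M*exp(-2*eps^2*N) <= delta,
    obtained by solving the bound for N."""
    return math.ceil(math.log(2 * M / delta) / (2 * eps**2))

if __name__ == "__main__":
    M, eps = 1000, 0.05          # illustrative values
    for N in (1000, 5000, 10000):
        print(f"N={N:6d}: bound = {hoeffding_bound(M, eps, N):.3e}")
    # data needed to push the bound below an "ultra low" 1e-6
    print("N for bound <= 1e-6:", samples_needed(M, eps, 1e-6))
```

Once $N$ clears that threshold, $E_{\text{in}}(g)$ and $E_{\text{out}}(g)$ agree within $\epsilon$ except with probability below $\delta$, which is what licenses the promise: if $E_{\text{in}}(g)$ turns out small I hand over $g$, and if not I declare failure.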
Put another way, given my argument for feasible learning and given the assumption above that the Hoeffding probability is ultra low, I can in good conscience promise that whenever I produce a hypothesis, it is a good approximation to the target. That is, it is reasonable for me to promise that I will never output a poorly approximating hypothesis.