Thread: Exercise 1.12
#4, 08-02-2016, 01:33 PM
jeffjackson
Re: Exercise 1.12

Following up on my earlier reply: in my thread on Section 1.3 I presented an argument for the feasibility of learning that, if accepted, lets us promise something quite a bit stronger than what I offered earlier, which was based only on the Hoeffding bound. The stronger promise is this:
(e) Assuming that you are given enough data and/or allowed a large enough error tolerance that the Hoeffding failure probability \delta is ultra-low, say 10^{-15}, you will either produce a hypothesis g that approximates f well out of sample, or you will declare that you have failed.
Put another way, given my argument for feasible learning and the \delta assumption above, I can in good conscience promise that whenever I do produce a hypothesis, it approximates the target well. That is, it is reasonable for me to promise that I will never output a hypothesis that approximates f poorly.
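For concreteness, here is the arithmetic behind the \delta assumption, as a sketch using the book's Hoeffding bound for a finite hypothesis set of size M (the notation follows the text; the specific numbers below are only an illustration I picked, not part of the exercise):

\Pr\big[\,|E_{\text{in}}(g) - E_{\text{out}}(g)| > \epsilon\,\big] \;\le\; 2M e^{-2\epsilon^2 N} \;=\; \delta
\quad\Longrightarrow\quad
N \;\ge\; \frac{1}{2\epsilon^2}\,\ln\frac{2M}{\delta}.

For example, with M = 1000 hypotheses, tolerance \epsilon = 0.1, and \delta = 10^{-15}, this gives N \ge \frac{1}{0.02}\ln(2\times 10^{18}) \approx 2100 examples. So the "ultra-low \delta" condition is not exotic; a few thousand examples already suffice in this illustrative setting.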