- **Chapter 1 - The Learning Problem**
(*http://book.caltech.edu/bookforum/forumdisplay.php?f=108*)

- - **Exercise 1.10**
(*http://book.caltech.edu/bookforum/showthread.php?t=4676*)

Exercise 1.10

Hi,

Right before Exercise 1.10, the book states, "The next exercise considers a simple coin experiment that further illustrates the difference between a fixed h and the final hypothesis g selected by the learning algorithm." That statement confuses me a bit because:

1. I don't really see any function here (no target function f and no hypothesis h), only the real probability of getting heads with a fair coin. No?
2. c_min illustrates that "if the sample was not randomly selected but picked in a particular way, we would lose the benefit of the probabilistic analysis" (the Hoeffding Inequality?) (quoted from page 20). No?

Last question: although c_min is picked in a particular way, suppose we treat each ν from each set of 10 flips of each coin in each trial as coming from its own bin (so that the ν's from 10 flips of the same coin in 2 different trials come from 2 different bins). Then we can still apply the non-vanilla (union-bound) version of the Hoeffding Inequality, P[|Ein(g) − Eout(g)| > ε] ≤ 2M·e^(−2Nε²). No?

Hope I can get some clarification. Thanks!

Re: Exercise 1.10

Small modification to #1:

1. I don't really see any function here (no target function f and no hypothesis h), only the *expected* probability of getting heads with a fair coin. No?
