- **Chapter 2 - Training versus Testing**
(*http://book.caltech.edu/bookforum/forumdisplay.php?f=109*)

  - **Example 2.2 (.3) - sample randomness**
(*http://book.caltech.edu/bookforum/showthread.php?t=4862*)

**Example 2.2 (.3) - sample randomness**
We stated that for Hoeffding's inequality to be valid, it is important that the sample from the "bin" be random - that is what makes E_in meaningful.
In Example 2.2.3 (convex sets, page 44), the sample points are chosen to lie on the perimeter of a circle (as stated, we need to choose the N points carefully). By choosing the N points that way (or in any other careful way), don't we break the randomness of the sample? Is it possible that we can't use Hoeffding's inequality at all after this process?
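To see concretely why points on the circle's perimeter are the "careful" choice in Example 2.2.3, here is a small Python sketch (not from the book; the function names are my own) that enumerates all 2^N labelings of N points on the unit circle and verifies that every one of them is realized by some convex set - namely the convex hull of the +1 points, which contains no -1 point because points on a circle are in convex position:

```python
import math
from itertools import product

def points_on_circle(n):
    # n sample points on the unit circle, in counterclockwise order
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
            for k in range(n)]

def inside_convex_hull(p, hull):
    # hull vertices are in counterclockwise order; True if p is inside
    # or on the hull (cross-product sign test on every edge)
    if len(hull) < 3:
        return p in hull  # degenerate hull: only its own vertices count
    for (ax, ay), (bx, by) in zip(hull, hull[1:] + hull[:1]):
        if (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax) < -1e-12:
            return False
    return True

def count_dichotomies(n):
    # count how many of the 2^n labelings a convex-set hypothesis realizes
    pts = points_on_circle(n)
    count = 0
    for labels in product([+1, -1], repeat=n):
        plus = [p for p, y in zip(pts, labels) if y == +1]
        minus = [p for p, y in zip(pts, labels) if y == -1]
        # since all points lie on the circle, the hull of the +1 points
        # is just those points in circular order
        realized = (all(inside_convex_hull(p, plus) for p in plus)
                    and not any(inside_convex_hull(p, plus) for p in minus))
        if realized:
            count += 1
    return count

print(count_dichotomies(8))  # 256 = 2^8: every dichotomy is realized
```

This is exactly the worst case: the growth function of convex sets is 2^N, so this hypothesis set is not bounded by a polynomial growth function.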

**Re: Example 2.2 (.3) - sample randomness**
The discussion of the number of dichotomies focuses on the "worst case": choosing the N points carefully is how we find the largest number of dichotomies the hypothesis set can produce, which is what defines the growth function. It does not mean the actual data are chosen that way. When the data are sampled at random (as Hoeffding requires), the number of dichotomies on that sample can be no more than this worst case. So if we can bound the growth function, we have also bounded the actual number of dichotomies on the randomly sampled data.
Hope this helps.
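For hypothesis sets whose growth function can be bounded (finite VC dimension d_vc), the bounded growth function is what replaces the union bound in the Hoeffding-style analysis. A minimal sketch, assuming Sauer's polynomial bound on the growth function and the VC generalization bound in the form the book gives (Eq. 2.12); the function names are mine:

```python
import math

def growth_bound(n, dvc):
    # Sauer's bound: m_H(N) <= sum_{i=0}^{d_vc} C(N, i),
    # a polynomial in N of degree d_vc
    return sum(math.comb(n, i) for i in range(min(dvc, n) + 1))

def vc_generalization_bound(n, dvc, delta=0.05):
    # with probability >= 1 - delta,
    # E_out <= E_in + sqrt( (8/N) * ln(4 * m_H(2N) / delta) )
    return math.sqrt(8.0 / n * math.log(4 * growth_bound(2 * n, dvc) / delta))

# the bound shrinks as N grows, because m_H(2N) is only polynomial in N
for n in (100, 1000, 10000):
    print(n, round(vc_generalization_bound(n, dvc=3), 3))
```

Note this only works when d_vc is finite; for the convex-set hypothesis set of Example 2.2.3 the growth function is 2^N, so no such bound applies.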


The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.