LFD Book Forum  

03-30-2014, 12:17 AM
dvs79
Join Date: Jul 2012
Location: Moscow, Russia
Posts: 24
Why multiply the right side of the Hoeffding Inequality by the number of hypotheses?

I don't quite understand the reasoning here.
I understand that the probability of an event occurring over many attempts increases (the more coins we toss, the higher the probability that at least one comes up tails). And we can loosely bound such a probability by P(A or B) ≤ P(A) + P(B), ignoring the term −P(A and B) for simplicity.
But why should we apply this bound to our final hypothesis g if we chose it as the only result of our learning? We don't ask ourselves, "What is the probability of a bad event (generalization error exceeding the tolerance) among ALL our tested hypotheses?" We are interested in "What is the probability of the bad event for the one particular hypothesis we somehow chose?" I mean, yes, the probability of tossing at least one tails with 10 coins is very close to 1, but nonetheless the probability of tossing tails for each single coin out of those ten is still 0.5, right?
So why lift the threshold on the probability of the bad event for our final hypothesis by multiplying the right side of the inequality by M?
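One way to see why the factor of M is needed: each individual coin, like each individual hypothesis considered in isolation, does obey the single-hypothesis Hoeffding bound. But the final hypothesis g is selected *after* looking at the data — like picking the coin that happened to show the fewest heads — and for that data-dependent choice the single-coin probability no longer applies. Since we don't know in advance which hypothesis the algorithm will pick, the bound must hold for all M candidates simultaneously, which is where P[|E_in(g) − E_out(g)| > ε] ≤ 2M e^(−2ε²N) comes from. A minimal simulation sketch of the coin analogy (not from the thread; the coin and trial counts are arbitrary choices):

```python
import random

random.seed(0)  # reproducible runs

def flip_run(num_coins=1000, flips=10):
    """Flip num_coins fair coins `flips` times each.
    Return (heads fraction of the first coin,
            heads fraction of the coin with the fewest heads)."""
    fracs = [sum(random.random() < 0.5 for _ in range(flips)) / flips
             for _ in range(num_coins)]
    return fracs[0], min(fracs)

trials = 500
first_all_tails = 0  # the *fixed* first coin came up all tails
min_all_tails = 0    # the coin *chosen after looking* came up all tails

for _ in range(trials):
    c_first, c_min = flip_run()
    first_all_tails += (c_first == 0.0)
    min_all_tails += (c_min == 0.0)

print("fixed coin, all tails:   ", first_all_tails / trials)  # ≈ 2**-10 ≈ 0.001
print("selected coin, all tails:", min_all_tails / trials)    # ≈ 0.62
```

The fixed coin behaves exactly as the single-coin probability promises (all tails with probability 2⁻¹⁰ ≈ 0.001), but the after-the-fact "worst" coin shows all tails in roughly 62% of runs (1 − (1 − 2⁻¹⁰)¹⁰⁰⁰). The learning algorithm's g is like that selected coin, which is what the factor of M guards against.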


The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.