LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 1 - The Learning Problem (http://book.caltech.edu/bookforum/forumdisplay.php?f=108)

 henry2015 09-18-2015 10:26 PM

Hoeffding Inequality

Hi,

On page 22, it says, "the hypothesis h is fixed before you generate the data set, and the
probability is with respect to random data sets D; we emphasize that the assumption "h is fixed before you generate the data set" is critical to the validity of this bound".

A few questions:
1. Does the "data set" in "generate the data set" refer to the marbles (which form the data set D) that we pick randomly from the jar? Or does it refer to the set of outputs (red/green) of h(x) on D?
2. It keeps mentioning "h is fixed before you generate the data set". Does it mean that in machine learning, the set of h's should be predefined before seeing any training data, and that no h can be added to the set after seeing the training data?

Thanks!

 yaser 09-18-2015 10:38 PM

Re: Hoeffding Inequality

Quote:
 Originally Posted by henry2015 (Post 12044) 1. Does the "data set" in "generate the data set" refer to the marbles (which form the data set D) that we pick randomly from the jar? Or does it refer to the set of outputs (red/green) of h(x) on D?
The target function f is assumed to be fixed, so since h is also fixed, the colors of all the marbles are fixed, and picking the data set would mean picking the marbles in the sample.
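
To make the bin picture concrete, here is a minimal sketch (not from the book; the values of mu, N, and eps below are assumptions). With f and h fixed, every marble's color is fixed, and generating the data set is just drawing N marbles; the in-sample frequency nu then tracks the bin frequency mu, as Hoeffding guarantees for a fixed h.

Code:

import math, random

mu = 0.1    # assumed bin frequency of red marbles, i.e. P[h(x) != f(x)]
N = 100     # sample size
eps = 0.1   # tolerance on |nu - mu|

# Generating the data set = drawing N marbles; the colors were fixed beforehand.
nu = sum(random.random() < mu for _ in range(N)) / N
bound = 2 * math.exp(-2 * eps**2 * N)   # Hoeffding bound, about 0.27

print("mu =", mu, " nu =", nu)
print("P[|nu - mu| >", eps, "] <=", round(bound, 3))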

Quote:
 2. It keeps mentioning "h is fixed before you generate the data set". Does it mean that in machine learning, the set of h's should be predefined before seeing any training data, and that no h can be added to the set after seeing the training data?
This is the assumption that the theory is based on. If one wants to add hypotheses after seeing the data and still apply the theory, one should take the set of hypotheses to include all potential hypotheses that may be added (whatever the data set may be).
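
For reference, this is where the factor M in equation (1.6) of the book comes from: to cover every hypothesis that might end up being picked, the union bound is applied over the whole hypothesis set,

P[|Ein(g) - Eout(g)| > ε] ≤ P[|Ein(h1) - Eout(h1)| > ε] + ... + P[|Ein(hM) - Eout(hM)| > ε] ≤ 2M e^(-2ε^2 N).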

 henry2015 09-18-2015 11:36 PM

Re: Hoeffding Inequality

Now, I wonder why "we cannot just plug in g for h in the Hoeffding inequality". Given that g is one of the h's, and that for each h the Hoeffding inequality gives a valid upper bound on P[|Ein(h) - Eout(h)| > ε], even if g is picked after we look at the outputs of all the h's, g is still one of the h's. So the Hoeffding inequality should still be valid for g. No?

Thanks!

 yaser 09-19-2015 02:56 AM

Re: Hoeffding Inequality

Quote:
 Originally Posted by henry2015 (Post 12046) Thanks for your quick reply Professor! Now, I wonder why "we cannot just plug in g for h in the Hoeffding inequality". Given that g is one of the h's, and that for each h the Hoeffding inequality gives a valid upper bound on P[|Ein(h) - Eout(h)| > ε], even if g is picked after we look at the outputs of all the h's, g is still one of the h's. So the Hoeffding inequality should still be valid for g. No? Thanks!
This is the main point of this part. Take the coin flipping example, with each of 1000 fair coins flipped 10 times. Hoeffding applies to each coin, right? Now if we pick "g" to be the coin that produced the most heads, we lose the Hoeffding guarantee because the small probability of bad behavior for each coin accumulates into a not-so-small probability of bad behavior of some coin (which we picked deliberately because it behaved badly).
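
A quick simulation makes the accumulation visible (a sketch in the spirit of Exercise 1.10 in the book; the parameters and variable names below are assumptions, not the book's code):

Code:

import math, random

N_COINS, N_FLIPS, N_RUNS = 1000, 10, 1000
eps = 0.3                                    # threshold on |fraction of heads - 0.5|
bound = 2 * math.exp(-2 * eps**2 * N_FLIPS)  # single-coin Hoeffding bound, ~0.33

bad_fixed = bad_max = 0
for _ in range(N_RUNS):
    heads = [sum(random.getrandbits(1) for _ in range(N_FLIPS))
             for _ in range(N_COINS)]
    bad_fixed += abs(heads[0] / N_FLIPS - 0.5) > eps   # a coin fixed in advance
    bad_max += abs(max(heads) / N_FLIPS - 0.5) > eps   # the coin with the most heads

print("single-coin Hoeffding bound:", round(bound, 3))    # ~0.33
print("fixed coin exceeds eps:     ", bad_fixed / N_RUNS) # ~0.02, within the bound
print("max-heads coin exceeds eps: ", bad_max / N_RUNS)   # ~1.0, guarantee is lost

The coin fixed before the flips respects the bound comfortably; the coin picked after the flips, precisely because it behaved extremely, violates it almost every time.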

 henry2015 10-01-2015 04:22 AM

Re: Hoeffding Inequality

Hi Professor,

I just have a hard time understanding how choosing a hypothesis changes a theorem like the Hoeffding inequality.

Let's say P[|Ein(h1) - Eout(h1)| > ε] ≤ P1 and P[|Ein(h2) - Eout(h2)| > ε] ≤ P2. We choose h2 to be g. Then P[|Ein(h2) - Eout(h2)| > ε] ≤ P2 is no longer true?

I sort of understand your example, because we pick the run of coin flipping that produces the most heads, so if we plot the results, the graph indicates that the Hoeffding inequality doesn't apply. But the Hoeffding inequality is talking about probability, so reality might be off a bit.

Maybe I am heading in the wrong direction? :(

 yaser 10-01-2015 09:38 PM

Re: Hoeffding Inequality

It's a subtle point. There is "cherry picking" if we fish for a sample that has certain properties after many trials, instead of having a sample that is fairly drawn from a fixed hypothesis.

Statements involving probability are tricky because they don't guarantee a particular outcome, just the likelihood of getting that outcome. Therefore, changing the game to allow more trials or different conditions would change the probabilities.

 henry2015 10-04-2015 08:15 PM

Re: Hoeffding Inequality

What the book states, "we cannot just plug in g for h in the Hoeffding inequality", means that the Hoeffding inequality is still true for g, as it is one of the h's. But the Hoeffding inequality seems to fail for g because we are cherry picking.

Just like flipping an unbiased coin 1 million times: we should see about 500K heads and 500K tails, but we might have "bad luck" and see 1 million heads, even though P(head) is still 0.5.

Do I interpret correctly?

Thanks a lot!

 yaser 10-05-2015 07:39 PM

Re: Hoeffding Inequality

Let me rephrase it. Let's say (like in Hoeffding) that a rare event has a probability of at most 1% of happening. If we make repeated independent trials looking for that event, each trial still gives a probability of at most 1% for that event to happen. Now, if we actively search for the case when that rare event actually happened among these many trials, we will succeed in finding it with probability much more than 1%.
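
The arithmetic behind this, using the 1% figure: if each of N independent trials has at most a 1% chance of showing the rare event, the chance that it shows up somewhere among the trials is 1 - 0.99^N, which approaches 1 quickly. A one-line check:

Code:

# Chance that a 1%-rare event appears at least once among N independent trials.
for N in [1, 10, 100, 1000]:
    print(N, round(1 - 0.99 ** N, 5))
# prints: 1 0.01 / 10 0.09562 / 100 0.63397 / 1000 0.99996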

 pouramini 03-06-2016 11:19 PM

Re: Hoeffding Inequality

Please consider whether I have the correct conclusions:

1- We cannot plug "g" in for "h" in the inequality, because g depends on the sample we already selected; in other words, we choose it deliberately (as the h with the lowest error inside D), like selecting the bin which has the minimum frequency of heads.

So! What if we select "g" randomly (from a uniform distribution over the h's), or select a bin randomly? Then can we use the Hoeffding inequality for "g", or should we still consider M, the size of H?

2- Which of the following interpretations of equation (1.6) is correct?
• The only function that has zero error inside and outside D is f, so if the number of hypotheses increases, the chance to select f (the correct function, or a better approximation) becomes lower. (However, I feel this is not what you are saying.)
• Or maybe, when we increase the number of hypotheses, we increase the chance that the data behave differently inside and outside D. For example, if we limit the hypothesis set to a single hypothesis, we may have high error, but we lower the difference between Ein and Eout. Likewise, if we use one feature, we have limited the number of hypotheses; then when we evaluate h outside D, it is not flexible enough to show minor errors, so Eout is closer to Ein?!
=====================================

Second question:

In "h is fixed before you generate the data set"
I also can't understand your emphasis on "before".

Do you want to say that h shouldn't change?
I feel h is independent of D, so "before" or "after" doesn't mean much. We don't need to have an h in mind to be able to generate D: we can select D, then decide which h to use, then evaluate h over D, but we should use the same h for the test set, right? Or maybe h is used somehow in generating D?! Anyway, I think you mean it should be selected independently of D.

 ntvy95 03-07-2016 02:28 AM

Re: Hoeffding Inequality

I think you can take a look at MaciekLeks' post for the experimental results of Exercise 1.10 (in the book).

In my understanding: g is the final hypothesis, which is known only after the data set is generated (because the choice of the final hypothesis is based on the specific data set). Before the data set is generated, all we know about g is that it is one of the hypotheses in H (hence the factor M). h is a specific hypothesis that is an element of H, and I don't think we are selecting h; I think we are selecting g instead.

Quote:
 Originally Posted by pouramini (Post 12287)
 I also have the same questions, and I read your replies. Please consider whether I have the correct conclusions:

 1- We cannot plug "g" in for "h" in the inequality, because g depends on the sample we already selected; in other words, we choose it deliberately (as the h with the lowest error inside D), like selecting the bin which has the minimum frequency of heads.

 So! What if we select "g" randomly (from a uniform distribution over the h's), or select a bin randomly? Then can we use the Hoeffding inequality for "g", or should we still consider M, the size of H?

 2- Which of the following interpretations of equation (1.6) is correct?
 • The only function that has zero error inside and outside D is f, so if the number of hypotheses increases, the chance to select f (the correct function, or a better approximation) becomes lower. (However, I feel this is not what you are saying.)
 • Or maybe, when we increase the number of hypotheses, we increase the chance that the data behave differently inside and outside D. For example, if we limit the hypothesis set to a single hypothesis, we may have high error, but we lower the difference between Ein and Eout. Likewise, if we use one feature, we have limited the number of hypotheses; then when we evaluate h outside D, it is not flexible enough to show minor errors, so Eout is closer to Ein?!

 =====================================

 Second question:

 In "h is fixed before you generate the data set", I also can't understand your emphasis on "before". Do you want to say that h shouldn't change? I feel h is independent of D, so "before" or "after" doesn't mean much. We don't need to have an h in mind to be able to generate D: we can select D, then decide which h to use, then evaluate h over D, but we should use the same h for the test set, right? Or maybe h is used somehow in generating D?! Anyway, I think you mean it should be selected independently of D.
