LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 1 - The Learning Problem (http://book.caltech.edu/bookforum/forumdisplay.php?f=108)
-   -   Hoeffding inequality for multiple hypothesis (http://book.caltech.edu/bookforum/showthread.php?t=4582)

kostya3312 01-26-2015 12:20 PM

Hoeffding inequality for multiple hypothesis
 
It's clear to me how the inequality works for each hypothesis separately, but I don't understand why we need the Hoeffding inequality for multiple hypotheses. If I have a training data set of size N, then (for a fixed tolerance \epsilon) the Hoeffding upper bound is determined for each hypothesis. The only thing that remains is to find the hypothesis with the minimal in-sample error. Why do we need to consider all hypotheses simultaneously? What information does the Hoeffding inequality with the factor M in it give us? I understand the example with the coins, but I cannot relate it to the learning problem.

Sorry for my English, and thanks.

magdon 01-27-2015 03:46 PM

Re: Hoeffding inequality for multiple hypothesis
 
Hoeffding for a single hypothesis h_1 tells you that, with high probability,

|E_{in}(h_1)-E_{out}(h_1)|<\epsilon.

As you point out, "the only thing that remains is to find the hypothesis with the minimal in-sample error." Why would one do this? Because one is confident that E_{in} is close to E_{out} for every hypothesis, and so if we find the hypothesis with minimum E_{in}, it will likely have minimum E_{out}. So, to be justified in picking the hypothesis with minimum E_{in}, we require that

\forall h_i, |E_{in}(h_i)-E_{out}(h_i)|\le\epsilon.

Equivalently,

for no h_i, |E_{in}(h_i)-E_{out}(h_i)|>\epsilon.

The factor of M comes from using the union bound

P[for\ no\ h_i, |E_{in}(h_i)-E_{out}(h_i)|>\epsilon]\le P[|E_{in}(h_1)-E_{out}(h_1)|>\epsilon]+P[|E_{in}(h_2)-E_{out}(h_2)|>\epsilon]+\cdots.
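
To tie this to the learning problem: here is a minimal simulation sketch (assuming numpy; the parameter values are illustrative) in which M fair coins play the role of M hypotheses, so E_{out}=0.5 for every coin and E_{in} is the observed fraction of heads in N flips.

Code:

import numpy as np

rng = np.random.default_rng(0)
M, N, eps, trials = 10, 1000, 0.05, 100_000

# Each row is one "data set": N flips of each of the M coins.
# E_out = 0.5 for every coin; E_in is the fraction of heads observed.
E_in = rng.binomial(N, 0.5, size=(trials, M)) / N

bad = np.abs(E_in - 0.5) > eps                     # |E_in - E_out| > eps, coin by coin
p_single = bad[:, 0].mean()                        # one fixed coin (single hypothesis)
p_any = bad.any(axis=1).mean()                     # at least one of the M coins deviates
bound = min(1.0, 2 * M * np.exp(-2 * eps**2 * N))  # the union bound 2*M*exp(-2*eps^2*N)

print(f"one fixed coin:    {p_single:.4f}")
print(f"at least one of M: {p_any:.4f}")
print(f"union bound:       {bound:.4f}")

For one fixed coin the deviation probability is tiny, but the probability that at least one of the M coins deviates is roughly M times larger (and still below the union bound). Choosing the coin with the smallest E_{in} is like choosing g: its deviation is governed by the "at least one" event, and that is where the factor M enters.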


kostya3312 01-29-2015 07:18 AM

Re: Hoeffding inequality for multiple hypothesis
 
Thank you, Professor!

I do not quite understand the following:
Quote:

Originally Posted by magdon (Post 11911)
The factor of M comes from using the union bound

P[for\ no\ h_i, |E_{in}(h_i)-E_{out}(h_i)|>\epsilon]\le P[|E_{in}(h_1)-E_{out}(h_1)|>\epsilon]+P[|E_{in}(h_2)-E_{out}(h_2)|>\epsilon]+\cdots.

I thought the goal was to get an upper bound on the probability of the event A = [for\ at\ least\ one\ h_i,\ |E_{in}-E_{out}|>\epsilon]. That is, for learning to be feasible, the probability of this event should be small. In my opinion, the two events A (mine) and B = [for\ no\ h_i,\ |E_{in}-E_{out}|>\epsilon] (yours) are different events. Am I right?

My last question is as follows. The LHS of the Hoeffding inequality for M hypotheses is P[|E_{in}(g)-E_{out}(g)|>\epsilon]. This implies that the event C = [|E_{in}(g)-E_{out}(g)|>\epsilon] and event A (event B, if you are right) are equal. Though I understand the meaning of event A, the meaning of event C isn't so clear to me. What does it literally mean? I think it means [the\ absolute\ difference\ between\ E_{in}\ and\ E_{out}\ for\ the\ final\ hypothesis\ g\ is\ greater\ than\ \epsilon]. Am I right?

magdon 02-19-2015 11:38 AM

Re: Hoeffding inequality for multiple hypothesis
 
Sorry, there was a typo in my previous message. Yes, they are different events, but they are very closely related:

A = [at\ least\ one\ h_i, |E_{in}-E_{out}|>\epsilon]

B = [for\ no\ h_i, |E_{in}-E_{out}|>\epsilon]

P[B] = 1 - P[A] \ge 1 - 2Me^{-2\epsilon^2 N}.
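
Putting the pieces together in the same notation, a sketch of the Chapter 1 argument (using the single-hypothesis Hoeffding bound 2e^{-2\epsilon^2 N}):

P[A] \le \sum_{i=1}^{M} P[|E_{in}(h_i)-E_{out}(h_i)|>\epsilon] \le 2Me^{-2\epsilon^2 N}.

And since g is one of h_1,\dots,h_M, the event C = [|E_{in}(g)-E_{out}(g)|>\epsilon] is contained in A, so

P[C] \le P[A] \le 2Me^{-2\epsilon^2 N}.

So your reading of C is right: it literally says that the in-sample and out-of-sample errors of the final hypothesis g differ by more than \epsilon; C is bounded through A rather than being equal to it.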

