#1
The Hoeffding bound for the model H in Chapter 1 only requires the assumption that the input examples are a random sample from the bin; that alone lets us generalize from the sample error. What role does the distribution on X play? It appears to me that we don't need it (at least in the way the issue of feasibility is set up in Chapter 1), i.e., the true mismatch is approximately the sample mismatch. Thanks.
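Here is a minimal sketch of the bin experiment in the question (Python; the particular f, h, and P(X) below are made-up choices for illustration, not anything from the book). Whatever distribution generates the i.i.d. sample, the observed disagreement frequency ν stays within ε of the true probability μ at least as often as the Hoeffding bound promises, since the bound itself does not reference P(X):

```python
import numpy as np

# Minimal sketch of the bin analogy; f, h, and P(X) are made-up choices.
rng = np.random.default_rng(0)

f = lambda x: np.sign(x)           # the (unknown) target function
h = lambda x: np.sign(x - 0.3)     # one fixed hypothesis

sample_P = lambda n: rng.normal(0.0, 1.0, n)   # P(X): a standard normal

# mu = P[h(x) != f(x)], estimated with a very large sample from P(X).
big = sample_P(10**6)
mu = np.mean(h(big) != f(big))

# Draw many samples of size N and check how often |nu - mu| exceeds eps.
N, eps, trials = 1000, 0.05, 2000
nu = np.array([np.mean(h(x) != f(x)) for x in (sample_P(N) for _ in range(trials))])
bad = np.mean(np.abs(nu - mu) > eps)      # empirical P[|nu - mu| > eps]
bound = 2 * np.exp(-2 * eps**2 * N)       # Hoeffding bound, independent of P(X)
print(f"mu≈{mu:.3f}  P[|nu-mu|>{eps}]≈{bad:.4f}  bound={bound:.4f}")
```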
#2
__________________
Where everyone thinks alike, no one thinks very much
#3
So can you say that P(X) populates the bin and determines μ? In that case we would be sampling from P(X); is this correct?
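As a quick sanity check of the "P(X) populates the bin" picture, here is a sketch (same made-up f and h as above) showing that the same coloring of the points gives different values of μ under different input distributions, because P(X) decides which marbles are likely to be drawn:

```python
import numpy as np

# Sketch: same f and h (same "coloring" of the marbles), but two different
# made-up input distributions give two different values of mu.
rng = np.random.default_rng(1)
f = lambda x: np.sign(x)
h = lambda x: np.sign(x - 0.3)

def mu_under(sampler, n=10**6):
    x = sampler(n)
    return np.mean(h(x) != f(x))    # P[h(x) != f(x)] under that distribution

mu_normal = mu_under(lambda n: rng.normal(0.0, 1.0, n))    # roughly 0.12
mu_skewed = mu_under(lambda n: rng.uniform(0.0, 0.3, n))   # roughly 1.0: all the
print(mu_normal, mu_skewed)                                # mass sits where h != f
```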
#4
I see.

Example: suppose we choose the distribution of X in one of two ways, call them case 1 and case 2. I know from my stat classes that in case 1 a linear model is actually "correct" (which is great, since we usually know nothing about f). So in this case the distribution of X plays a role in selecting H, and hence in reducing the in-sample error (assuming the quadratic loss function).

Questions: In either case 1 or case 2, is the interpretation/computation of the sample error the same? I am a little confused, since the overall true error (which we hope the sample error approximates) is defined via the joint distribution of (X, Y), which depends on the distribution of X.

Thanks. I hope this class/book can clear up some misconceptions about the theoretical framework of the learning problem once and for all.
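On the question of how the true error depends on the joint distribution of (X, Y): the sketch below (made-up linear target, Gaussian noise, Gaussian P(X), quadratic loss) just spells out that the sample error is an average of the loss on points drawn from P(X), while the true error is the expectation of the same loss over the joint distribution, here estimated by Monte Carlo:

```python
import numpy as np

# Sketch: squared-error sample error E_in versus the true error E_out.
# Everything here (target, noise level, P(X), the hypothesis h) is made up.
rng = np.random.default_rng(2)

f = lambda x: 1.0 + 2.0 * x                               # mean of Y given X
sample_x = lambda n: rng.normal(0.0, 1.0, n)              # P(X)
sample_y = lambda x: f(x) + rng.normal(0.0, 0.5, x.size)  # Y given X, with noise

h = lambda x: 0.8 + 2.1 * x   # a fixed hypothesis (pretend it was already learned)

# Sample (in-sample) error: average loss on N points drawn from P(X).
N = 100
x = sample_x(N)
y = sample_y(x)
E_in = np.mean((h(x) - y) ** 2)

# True error: expectation of the same loss over the joint distribution of
# (X, Y), estimated here with a very large independent sample.
xx = sample_x(10**6)
E_out = np.mean((h(xx) - sample_y(xx)) ** 2)
print(E_in, E_out)   # E_in fluctuates around E_out
```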
#5
[This may appear to be a trivial assumption when sampling from some populations, but it is likely to be non-trivial in many cases where we are attempting to infer future behavior from past behavior in a system whose characteristics may change] |
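A small illustration of why that assumption is non-trivial (made-up distributions): if the sample was drawn from one input distribution but future inputs follow a drifted one, ν computed on the past sample no longer tracks the μ that matters, and the Hoeffding guarantee does not apply:

```python
import numpy as np

# Sketch: the past sample comes from one input distribution, but the system
# drifts, so future inputs follow another. nu on the old sample is misleading.
rng = np.random.default_rng(3)
f = lambda x: np.sign(x)
h = lambda x: np.sign(x - 0.3)

past = rng.normal(0.0, 1.0, 1000)         # sample drawn from the old P(X)
nu_past = np.mean(h(past) != f(past))     # what the past data suggests (~0.12)

future = rng.normal(0.15, 0.1, 10**6)     # drifted P(X) for future inputs
mu_future = np.mean(h(future) != f(future))  # the error that actually matters (~0.87)
print(nu_past, mu_future)
```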
#6
__________________
Where everyone thinks alike, no one thinks very much
#7
Thanks for Prof. Yaser's reply. A quick question, as we are sampling according to ...
#8
__________________
Where everyone thinks alike, no one thinks very much
#9
But isn't μ just the fraction of red marbles in the bin? Please clarify.

Thanks, Giridhar.
#10
μ is the probability of picking a red marble, which is not necessarily the same as the fraction of red marbles in the bin. To take a simple example, let's say that there are only two marbles in the bin, one red and one green, but the red marble has a higher probability of being picked than the green marble. In this case, μ is greater than 1/2, even though only half of the marbles are red.
__________________
Where everyone thinks alike, no one thinks very much
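A tiny sketch of the two-marble example (the 0.8 picking probability is a made-up number): μ is the probability of drawing the red marble, and the sample frequency ν concentrates around that probability rather than around the 1/2 fraction of red marbles:

```python
import numpy as np

# Sketch of the two-marble example: one red and one green marble, but red is
# picked with probability 0.8. mu is the picking probability, not the fraction 1/2.
rng = np.random.default_rng(4)

p_red = 0.8                      # P[pick the red marble]
mu = p_red                       # mu = P[red], not 1/2

N = 1000
sample = rng.random(N) < p_red   # True = red, False = green
nu = np.mean(sample)             # sample frequency of red
print(mu, nu)                    # nu concentrates around 0.8, not around 0.5
```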