LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 1 - The Learning Problem (http://book.caltech.edu/bookforum/forumdisplay.php?f=108)
-   Exercise 1.13 noisy targets (http://book.caltech.edu/bookforum/showthread.php?t=4529)

mahaitao 10-21-2014 06:11 PM

Exercise 1.13 noisy targets
 
Exercise 1.13(a): What is the probability of error that h makes in approximating y when we use a noisy version of f? That means we want to compute Pr(h(x) \neq y), and I consider two cases:
(1) h(x) = f(x) and f(x) \neq y, with probability (1-\mu)(1-\lambda);
(2) h(x) \neq f(x) and f(x) = y, with probability \mu\lambda.
I am not sure this solution is right. My questions are as follows:
(i) Does "h makes an error with probability \mu in approximating a deterministic target function f" mean Pr(h(x) \neq f(x)) = \mu?
(ii) Is Pr(h(x) \neq y) equal to the sum of the probabilities of cases (1) and (2)?

Exercise 1.13(b): I am not clear what "the performance of h is independent of \mu" means. Should I consider Pr(h(x) \neq y)?

Thanks!

yaser 10-22-2014 12:20 AM

Re: Exercise 1.13 noisy targets
 
Quote:

Originally Posted by mahaitao (Post 11784)
(i) Does "h makes an error with probability \mu in approximating a deterministic target function f" mean Pr(h(x) \neq f(x)) = \mu?
(ii) Is Pr(h(x) \neq y) equal to the sum of the probabilities of cases (1) and (2)?

Exercise 1.13(b): I am not clear what "the performance of h is independent of \mu" means. Should I consider Pr(h(x) \neq y)?

Answering your questions (i) and (ii): Yes and yes.

In Exercise 1.13(b), "independent of \mu" means that changing the value of \mu does not affect how well h({\bf x}) predicts y; that is, Pr(h({\bf x}) \neq y) should come out the same for every value of \mu.

mahaitao 10-22-2014 05:45 PM

Re: Exercise 1.13 noisy targets
 
Thank you very much, professor.

prithagupta.nsit 08-06-2015 05:36 AM

Re: Exercise 1.13 noisy targets
 
So the final probability of error that h makes in approximating y would be:
Pr(h(x) \neq y) = (1-\mu)(1-\lambda) + \mu\lambda = 1 + 2\mu\lambda - \mu - \lambda.

For this to be independent of \mu, the coefficient of \mu must vanish: 2\lambda - 1 = 0, so \lambda = 1/2. Substituting \lambda = 1/2 gives
1 + 2(1/2)\mu - \mu - \lambda = 1 - \lambda = 1/2.

I think this should be the correct answer.

Is my understanding correct for the second part of the question?
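
As a quick sanity check, here is a minimal Monte Carlo sketch in Python (the values of \mu and \lambda below are arbitrary illustrations, and binary \pm 1 outputs are assumed):

import random

mu, lam = 0.3, 0.8            # hypothetical noise levels, chosen only for illustration
N = 1_000_000                 # number of simulated points
errors = 0
for _ in range(N):
    f = random.choice((-1, 1))                 # deterministic target value f(x)
    h = f if random.random() >= mu else -f     # h(x) disagrees with f(x) with probability mu
    y = f if random.random() < lam else -f     # y agrees with f(x) with probability lambda
    errors += (h != y)
print(errors / N, 1 + 2*mu*lam - mu - lam)     # simulated frequency vs. closed form; the two should be close

Setting lam = 0.5 makes both numbers come out near 0.5 no matter which mu is used, which matches the independence claim.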

yaser 08-06-2015 05:02 PM

Re: Exercise 1.13 noisy targets
 
Correct. :)

elyoum 05-12-2016 03:24 AM

Re: Exercise 1.13 noisy targets
 
Quote:

Originally Posted by yaser (Post 11995)
Correct. :)

Can I ask you some questions, please?

Vladimir 10-09-2017 06:25 PM

Re: Exercise 1.13 noisy targets
 
Dear Professor,

What about the case h(x) \neq f(x) and f(x) \neq y? Does it count toward the probability Pr(h(x) \neq y)?

Thanks.

don slowik 11-14-2017 03:52 PM

Re: Exercise 1.13 noisy targets
 
The case you mention would lead to h(x) = y: with binary outputs, if h(x) differs from f(x) and y also differs from f(x), then h(x) and y must agree.
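
A minimal enumeration sketch in Python (assuming the binary \pm 1 outputs of the book's setup) confirms this:

for f in (-1, 1):
    for h in (-1, 1):
        for y in (-1, 1):
            if h != f and y != f:
                assert h == y   # with only two possible values, disagreeing with f forces agreement with y
print("Whenever h(x) != f(x) and y != f(x), we have h(x) = y.")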

Ulyssesyang 11-09-2018 04:40 AM

Re: Exercise 1.13 noisy targets
 
Quote:

Originally Posted by mahaitao (Post 11784)
... I consider two cases:
(1) h(x) = f(x) and f(x) \neq y, with probability (1-\mu)(1-\lambda);
(2) h(x) \neq f(x) and f(x) = y, with probability \mu\lambda. ...

So why don't you consider h(x) \neq f(x) and f(x) \neq y? Even if in some cases h(x) may equal y, we still have cases where h(x) \neq y.

ckong41 05-11-2021 07:00 PM

Re: Exercise 1.13 noisy targets
 
I was wondering whether the following intuitive approach works for part (b). It's as far as I've gotten so far.

The question is asking: at what noise level \lambda (the probability that the noisy target y agrees with f) does h's accuracy in approximating f become inconsequential to how well h predicts y?

If f agrees with y nowhere, that is, \lambda = 0: h's ability to model f matters. The worse it does on f, the better it does on y.
If f agrees with y everywhere, that is, \lambda = 1: h's ability to model f matters. The better it does on f, the better it does on y.
If f agrees with y half the time, that is, \lambda = 0.5: h's ability to model f does not matter. Two examples:
If h matches f 75% of the time, that's 75% of 50% plus 25% of 50%: (0.75)(0.5) + (0.25)(0.5) = 0.5.
If h matches f 25% of the time, that's 25% of 50% plus 75% of 50%: (0.25)(0.5) + (0.75)(0.5) = 0.5.
General case: if h matches f with probability 1-\mu, then h matches y with probability (1-\mu)(0.5) + \mu(0.5) = (1-\mu+\mu)(0.5) = 0.5.
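
A short Python check of that general case (the \mu values below are arbitrary illustrations):

lam = 0.5
for mu in (0.0, 0.25, 0.5, 0.75, 1.0):
    p_match = (1 - mu) * lam + mu * (1 - lam)   # probability that h(x) = y
    print(mu, p_match)                          # prints 0.5 for every mu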

