LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 1 - The Learning Problem (http://book.caltech.edu/bookforum/forumdisplay.php?f=108)
-   -   Exercise 1.13 noisy targets (http://book.caltech.edu/bookforum/showthread.php?t=4529)

mahaitao 10-21-2014 06:11 PM

Exercise 1.13 noisy targets
 
Exercise 1.13(a): What is the probability of error that h makes in approximating y if we use a noisy version of f? That means we want to compute P[h(x) != y], and I consider two cases:
(1) h(x) = f(x) and f(x) != y, with probability (1-\mu)(1-\lambda);
(2) h(x) != f(x) and f(x) = y, with probability \mu\lambda.
I am not sure this solution is right. My questions are as follows:
(i) Does "h makes an error with probability \mu in approximating a deterministic target function f" mean P[h(x) != f(x)] = \mu?
(ii) Is P[h(x) != y] equal to the sum of the probabilities of cases (1) and (2)?

Exercise 1.13(b): I am not clear on what "the performance of h is independent of \mu" means. Should I consider P[h(x) != y]?

thanks!

yaser 10-22-2014 12:20 AM

Re: Exercise 1.13 noisy targets
 
Quote:

Originally Posted by mahaitao (Post 11784)
Exercise 1.13(a): What is the probability of error that h makes in approximating y if we use a noisy version of f? That means we want to compute P[h(x) != y], and I consider two cases:
(1) h(x) = f(x) and f(x) != y, with probability (1-\mu)(1-\lambda);
(2) h(x) != f(x) and f(x) = y, with probability \mu\lambda.
I am not sure this solution is right. My questions are as follows:
(i) Does "h makes an error with probability \mu in approximating a deterministic target function f" mean P[h(x) != f(x)] = \mu?
(ii) Is P[h(x) != y] equal to the sum of the probabilities of cases (1) and (2)?

Exercise 1.13(b): I am not clear on what "the performance of h is independent of \mu" means. Should I consider P[h(x) != y]?

thanks!

Answering your questions (i) and (ii): Yes and yes.

In Exercise 1.13(b), independent of \mu means that changing the value of \mu does not affect how well h(x) predicts y.
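
For readers who want to sanity-check the formula confirmed above, here is a minimal Monte Carlo sketch (an editorial addition, not part of the thread), assuming binary labels in {-1, +1}, h(x) != f(x) with probability \mu, and y = f(x) with probability \lambda:

[code]
import numpy as np

# Compare the empirical P[h(x) != y] with (1-mu)*(1-lambda) + mu*lambda.
rng = np.random.default_rng(0)
mu, lam, n = 0.3, 0.8, 1_000_000

f = rng.choice([-1, 1], size=n)           # deterministic target values f(x)
h = np.where(rng.random(n) < mu, -f, f)   # h(x) != f(x) with probability mu
y = np.where(rng.random(n) < lam, f, -f)  # y == f(x) with probability lambda

empirical = np.mean(h != y)
predicted = (1 - mu) * (1 - lam) + mu * lam
print(empirical, predicted)   # the two should agree to about 3 decimal places
[/code]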

mahaitao 10-22-2014 05:45 PM

Re: Exercise 1.13 noisy targets
 
Thank you very much, professor.

prithagupta.nsit 08-06-2015 05:36 AM

Re: Exercise 1.13 noisy targets
 
So the final probability of error that h makes in approximating y would be:
P[h(x) != y] = (1-\mu)(1-\lambda) + \mu\lambda = 1 + 2\lambda\mu - \mu - \lambda.

If this is to be independent of \mu, then \lambda should be 1/2, since the terms in \mu then cancel:
1 + 2(1/2)\mu - \mu - 1/2 = 1 - 1/2 = 1/2.

I think this is the correct answer.

Is my understanding correct for the second part of the question?
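
As a quick numerical check of this cancellation (a sketch added here, using the expression derived above), one can evaluate the error probability at \lambda = 1/2 for several values of \mu:

[code]
# Check that (1-mu)(1-lam) + mu*lam, i.e. 1 + 2*lam*mu - mu - lam,
# collapses to 1/2 for every mu once lam = 1/2.
def error_prob(mu, lam):
    return (1 - mu) * (1 - lam) + mu * lam

for mu in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(mu, error_prob(mu, lam=0.5))   # prints 0.5 for every mu
[/code]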

yaser 08-06-2015 05:02 PM

Re: Exercise 1.13 noisy targets
 
Correct. :)

elyoum 05-12-2016 03:24 AM

Re: Exercise 1.13 noisy targets
 
Quote:

Originally Posted by yaser (Post 11995)
Correct. :)

Can I ask you some questions, please?

Vladimir 10-09-2017 06:25 PM

Re: Exercise 1.13 noisy targets
 
Dear Professor,

What about the case h(x) != f(x) and f(x) != y? Does it count toward the probability P[h(x) != y]?

Thanks.

don slowik 11-14-2017 03:52 PM

Re: Exercise 1.13 noisy targets
 
The case you mention would lead to h(x) = y: with binary labels, if h(x) != f(x) and f(x) != y, then h(x) and y are both the opposite of f(x), so they agree and the case does not contribute to the error.
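
A small enumeration sketch (added for illustration, assuming binary ±1 values as in the exercise) makes the cancellation explicit:

[code]
# Enumerate the four combinations of h(x) and y for a fixed f(x):
# whenever h != f and f != y both hold, h == y follows.
from itertools import product

f_x = +1   # fix f(x); by symmetry the choice does not matter
for h_x, y in product([-1, +1], repeat=2):
    print(f"h={h_x:+d} f={f_x:+d} y={y:+d}  "
          f"h!=f: {h_x != f_x}  f!=y: {f_x != y}  h!=y: {h_x != y}")
[/code]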

Ulyssesyang 11-09-2018 04:40 AM

Re: Exercise 1.13 noisy targets
 
Quote:

Originally Posted by mahaitao (Post 11784)
[...] I consider two cases:
(1) h(x) = f(x) and f(x) != y, with probability (1-\mu)(1-\lambda);
(2) h(x) != f(x) and f(x) = y, with probability \mu\lambda. [...]

So why don't you consider the case h(x) != f(x) and f(x) != y? Even if h(x) may equal y in that case, couldn't we still have h(x) != y?

