LFD Book Forum  

#1  10-21-2014, 05:11 PM
mahaitao (Junior Member)
Exercise 1.13 noisy targets

Exercise 1.13(a): what is the probability of error that h makes in approximating y if we use a noisy version of f? That means we want to compute \Pr[h({\bf x}) \neq y], and I consider two cases:
(1) h({\bf x}) = f({\bf x}) and f({\bf x}) \neq y, with probability (1-\mu)(1-\lambda);
(2) h({\bf x}) \neq f({\bf x}) and f({\bf x}) = y, with probability \mu\lambda.
I am not sure this solution is right. My questions are as follows:
(i) Does "h makes an error with probability \mu in approximating a deterministic target function f" mean \Pr[h({\bf x}) \neq f({\bf x})] = \mu?
(ii) Is \Pr[h({\bf x}) \neq y] the sum of the probabilities of cases (1) and (2)?

Exercise 1.13(b): I am not clear about what "the performance of h being independent of \mu" means. Should I still be considering \Pr[h({\bf x}) \neq y]?

thanks!
#2  10-21-2014, 11:20 PM
yaser (Caltech)
Re: Exercise 1.13 noisy targets

Quote:
Originally Posted by mahaitao
(i) Does "h makes an error with probability \mu in approximating a deterministic target function f" mean \Pr[h({\bf x}) \neq f({\bf x})] = \mu?
(ii) Is \Pr[h({\bf x}) \neq y] the sum of the probabilities of cases (1) and (2)?
Exercise 1.13(b): I am not clear about what "the performance of h being independent of \mu" means.
Answering your questions (i) and (ii): Yes and yes.

In Exercise 1.13(b): Independent of \mu means that changing the value of \mu does not affect how well h({\bf x}) predicts y.
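
For reference, writing the two confirmed cases as a single expression (this is just the sum from (ii) above, in the notation of the exercise):

\Pr[h({\bf x}) \neq y] = \mu\lambda + (1-\mu)(1-\lambda)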
__________________
Where everyone thinks alike, no one thinks very much
#3  10-22-2014, 04:45 PM
mahaitao (Junior Member)
Re: Exercise 1.13 noisy targets

Thank you very much, Professor.
#4  08-06-2015, 04:36 AM
prithagupta.nsit (Junior Member)
Re: Exercise 1.13 noisy targets

So the final probability of error that h makes in approximating y would be
\Pr[h({\bf x}) \neq y] = \mu\lambda + (1-\mu)(1-\lambda) = 1 + 2\lambda\mu - \mu - \lambda.

For this to be independent of \mu, the coefficient of \mu must vanish, i.e. 2\lambda - 1 = 0, so \lambda = 1/2. Then
1 + 2(1/2)\mu - \mu - \lambda = 1 - \lambda = 1/2.

I think this should be the correct answer. Is my understanding correct for the second part of the question?
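
As a quick numerical sanity check (not part of the exercise; the parameter values below are arbitrary and the helper name error_rate is just illustrative), one could simulate the noisy target and compare the estimated error rate with the closed form. With \lambda = 0.5 the estimate stays near 1/2 no matter what \mu is:

Code:
# Monte Carlo estimate of P[h(x) != y] for binary +/-1 values.
# mu  = probability that h(x) disagrees with f(x)
# lam = probability that the noisy label y agrees with f(x)
import random

def error_rate(mu, lam, n=200_000):
    errors = 0
    for _ in range(n):
        f = random.choice([-1, 1])              # target value f(x)
        h = -f if random.random() < mu else f   # h(x) != f(x) with probability mu
        y = f if random.random() < lam else -f  # y = f(x) with probability lam
        if h != y:
            errors += 1
    return errors / n

lam = 0.8
for mu in (0.1, 0.3, 0.5):
    # simulated error rate vs. closed form 1 + 2*lam*mu - mu - lam
    print(mu, error_rate(mu, lam), 1 + 2*lam*mu - mu - lam)

# With lam = 0.5, the error rate is about 0.5 regardless of mu:
print(error_rate(0.2, 0.5), error_rate(0.9, 0.5))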
Reply With Quote
#5  08-06-2015, 04:02 PM
yaser (Caltech)
Re: Exercise 1.13 noisy targets

Correct.
#6  05-12-2016, 02:24 AM
elyoum (Junior Member)
Re: Exercise 1.13 noisy targets

Quote:
Originally Posted by yaser
Correct.
Can I ask you some questions, please?
#7  10-09-2017, 05:25 PM
Vladimir (Junior Member)
Re: Exercise 1.13 noisy targets

Dear Professor,

What about the case h({\bf x}) \neq f({\bf x}) and f({\bf x}) \neq y? Does it count toward \Pr[h({\bf x}) \neq y]?

Thanks.