LFD Book Forum  

#1  06-06-2016, 04:00 AM
pouramini
P(x) vs. P(y|x)

The book says:

Quote:
While both distributions model probabilistic aspects of x and y, the target distribution P(y | x) is what we are trying to learn, while the input distribution P(x) only quantifies the relative importance of the point x in gauging how well we have learned.

I didn't get it! What actually is the difference, and especially what is the usage of P(x)? My English is not very good!

Does it mean P(x) is only used in creating the training and test sets, and that it is used in the estimate of E_out provided by the test set?

#2  06-06-2016, 05:35 AM
henry2015
Re: P(x) vs. P(y|x)

P(x) = probability of x
P(y|x) = probability of y given that x has already happened
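
For example, with made-up numbers just to illustrate the definition: if P(x) = 0.30 and P(x and y) = 0.24, then P(y|x) = P(x and y) / P(x) = 0.24 / 0.30 = 0.8.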

#3  06-06-2016, 09:16 AM
pouramini
Re: P(x) vs. P(y|x)

Quote:
Originally Posted by henry2015
P(x) = probability of x
P(y|x) = probability of y given that x has already happened

I know, but what do they mean in learning? Is my conclusion in the initial post correct?

#4  06-06-2016, 09:20 PM
henry2015
Re: P(x) vs. P(y|x)

My understanding is that the text you quoted is talking about learning when the target function has noise.

Because the target function has noise, given an input x, f(x) doesn't always give the same y.

Hence, in this case, if we want to apply machine learning, we want to learn the probability of y given x as the input -- i.e., P(y|x).

Hence, P(x) isn't used for creating the training set; P(x) just describes the distribution of x.

"P(x) only quantifies the relative importance of the point x in gauging how well we have learned"

For instance, if P(x1) is very small, we can't say that we have learned very well just because P(y1|x1) is close to 1, because there are x2, x3, ... that might appear more frequently than x1 (e.g., P(x2) is much greater than P(x1)). When P(y1|x1) is close to 1, we can only say that we have learned well how x1 is used to predict y1. But we can't say anything about x2, x3, ..., given that P(x1) is relatively small.
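
To put rough numbers on that, here is a toy sketch in Python (the probabilities and the per-point error rates are made up; only the idea that P(x) weights each point comes from the book's sentence):

Code:
# Toy finite input space: P(x) for three points, and the chance that our
# hypothesis disagrees with y ~ P(y|x) at each point (all numbers made up).
P_x      = {"x1": 0.01, "x2": 0.69, "x3": 0.30}
err_at_x = {"x1": 0.05, "x2": 0.40, "x3": 0.10}

# Each point's error is weighted by P(x), so even a perfect answer at the
# rare point x1 barely moves the overall (out-of-sample) error.
E_out = sum(P_x[x] * err_at_x[x] for x in P_x)
print(E_out)  # 0.01*0.05 + 0.69*0.40 + 0.30*0.10 = 0.3065

Driving the x1 error all the way down to 0 only moves E_out from 0.3065 to 0.3060, which is what I mean by x1 mattering little when P(x1) is small.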

Hope I haven't confused you more.

If any of my statements are flawed, I'd appreciate anyone's correction.

#5  06-08-2016, 12:34 PM
pouramini
Re: P(x) vs. P(y|x)

Thank you! However, I still need the author or someone else to clarify the sentence more... when he says "gauging how well we have learned", I think he is speaking about the test set.

We also know that we should avoid sampling bias. So in my opinion P(x) is used in the training set to make it unbiased, no?

And we know that whatever distribution we use for the training set we should also use for the test set, so P(x) is used in the test set too.

However, I still don't know whether we know P(x) or not -- is it known?!
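
Here is how I picture my own point about using the same distribution, as a toy sketch I made up (the Gaussian P(x) and the noisy target below are arbitrary choices, not from the book):

Code:
import numpy as np

rng = np.random.default_rng(0)

def sample_xy(n):
    # Training and test inputs are drawn from the same P(x)
    # (an arbitrary Gaussian here); y is drawn from a noisy P(y|x).
    x = rng.normal(0.0, 1.0, n)
    y = np.sign(x + rng.normal(0.0, 0.5, n))
    return x, y

x_train, y_train = sample_xy(100)      # what we learn from
x_test,  y_test  = sample_xy(1000)     # what gauges how well we have learned

h = np.sign                            # some fixed hypothesis, just for illustration
E_test = np.mean(h(x_test) != y_test)  # test-set estimate of E_out
print(E_test)

Here the Gaussian plays the role of P(x) for both sets; whether we are supposed to know P(x) explicitly like this in practice is exactly what I am unsure about.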