My understanding is that the text you quoted is talking about learning when the target function has noise.

Because the target function is noisy, a given input x doesn't always produce the same y -- f(x) is not deterministic.

Hence, in this case, if we want to apply machine learning, what we want to learn is the probability of y given x as the input -- i.e. P(y|x).

Hence, P(x) isn't what we are trying to learn. It just describes the distribution of the inputs x -- how often each x shows up.
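One way to picture the setup: each training example is generated by first drawing x from P(x) and then drawing y from the noisy target P(y|x). The distributions below are made up purely for illustration:

```python
import random

random.seed(0)

# Hypothetical P(x): x=1 is rare, x=2 is common.
p_x = {1: 0.05, 2: 0.95}
# Hypothetical noisy target P(y=1|x): the same x can yield different y.
p_y1_given_x = {1: 0.99, 2: 0.60}

def sample_point():
    # Draw x according to P(x), then y according to P(y|x).
    x = random.choices(list(p_x), weights=list(p_x.values()))[0]
    y = 1 if random.random() < p_y1_given_x[x] else 0
    return x, y

training_set = [sample_point() for _ in range(10)]
print(training_set)
```

Note that P(x) only controls which inputs appear; the thing the learner has to model is P(y|x).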

"P(x) only quantifies the relative importance of the point x in gauging how well we have learned"

For instance, if P(x1) is very small, we can't say that we have learned very well just because P(y1|x1) is close to 1. There may be other points x2, x3, ... that appear much more frequently than x1 (e.g. P(x2) is much greater than P(x1)). When P(y1|x1) is close to 1, we can only say that we have learned well how to predict y1 from x1; we can't say anything about x2, x3, .... And since P(x1) is relatively small, doing well at x1 contributes little to the overall performance.

Hope I don't confuse you more.

If any of my statements are flawed, I'd appreciate anyone's correction.