LFD Book Forum

-   Chapter 2 - Training versus Testing (http://book.caltech.edu/bookforum/forumdisplay.php?f=109)
-   -   Questions on Problem 2.24 (http://book.caltech.edu/bookforum/showthread.php?t=1877)

mileschen 10-01-2012 12:17 AM

Questions on Problem 2.24
 
Though I have solved this problem, I am still a little bit confused.
(a) For Eout: is it the test error Etest, computed on a test data set T of size N, for a particular hypothesis g learnt from a particular training data set D (two points)?
(b) Should the bias be computed on the same test data set T? That is, bias = E_x[bias(x)] = (1/N) sum_i (\bar g(x_i) - f(x_i))^2 over the x_i in T, where \bar g is the average function.
(c) Should the var be computed from the K data sets used to learn the average function \bar g, evaluated on the test data set T? That is, var = E_x[var(x)] = (1/N) sum_i (1/K) sum_k (g_k(x_i) - \bar g(x_i))^2.

In short: should Eout, bias, and var all be computed on the same test data set?

magdon 10-01-2012 06:10 AM

Re: Questions on Problem 2.24
 
(a) For this problem, if you are given a linear hypothesis, it should be possible to compute E_{out} analytically. However, computing it on a test set T is also fine.

(b) Yes. It is also true that Etest = bias + var. Why? Because we showed this for every x.
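Spelling that out at a single point x (the cross term vanishes because E_D[g_k(x)] = \bar g(x)):

E_D[(g_k(x) - f(x))^2] = E_D[(g_k(x) - \bar g(x) + \bar g(x) - f(x))^2]
                       = E_D[(g_k(x) - \bar g(x))^2] + (\bar g(x) - f(x))^2
                       = var(x) + bias(x)

Averaging both sides over the test points x_i then gives Etest = bias + var.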

(c) The var is computed using the same data sets on which you learned and computed the average function. The average variance is computed over the distribution of the inputs; in the case you use a test set, the average is taken over the test set. Just like bias(x), var(x) is a function of x that captures how variable your prediction is at the point x. You take all your predictions at x learned from the different data sets and compute the variance of those (just like you take the average of those to get the average function).


Remember that the only purpose of the test set or the input distribution P(x) is to compute an average over x of all these quantities. If you had a single test point, as discussed in class, everything works there too.
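To make this concrete, here is a minimal numerical sketch of the whole computation. It assumes the setup discussed in this thread (two-point data sets, fitted lines) and uses f(x) = x^2 on [-1, 1] as the target; the constants K and N_test are arbitrary illustrative choices, not part of the problem:

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x**2                  # target function
K, N_test = 10000, 1000             # K data sets, test set of size N_test

x_test = rng.uniform(-1, 1, N_test)        # test set T drawn from P(x)
preds = np.empty((K, N_test))              # preds[k, i] = g_k(x_i)
for k in range(K):
    x1, x2 = rng.uniform(-1, 1, 2)         # D_k: two training points
    a = (f(x2) - f(x1)) / (x2 - x1)        # line through the two points
    b = f(x1) - a * x1
    preds[k] = a * x_test + b

g_bar = preds.mean(axis=0)                 # average function \bar g on T
bias = np.mean((g_bar - f(x_test))**2)     # E_x[bias(x)] over T
var = np.mean(preds.var(axis=0))           # E_x[var(x)] over T
e_out = np.mean((preds - f(x_test))**2)    # E_{x,D}[(g_k(x) - f(x))^2]
print("Eout:", e_out, " bias + var:", bias + var)   # the two should agree closely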



mileschen 10-01-2012 07:34 AM

Re: Questions on Problem 2.24
 
I still have some questions.
var = E_x[var(x)], but var(x) = E_D[(g_k(x) - \bar g(x))^2], where var(x) is computed from the K data sets that learnt the average function \bar g. Then how do I compute var, the expected value of var(x) over x?

If var is computed on the same data sets that learnt the average function, then how do I compute bias = E_x[bias(x)]? Should it also be computed on the same data sets that learnt the average function?

magdon 10-01-2012 08:40 AM

Re: Questions on Problem 2.24
 
The point x has nothing to do with the data sets on which you learn. Fix any point x.

You can now compute M1 = E_D[g_k(x)].

You can also compute M2 = E_D[g_k(x)^2].

M1 and M2 are just two numbers which apply to the point x. Clearly M1 and M2 will change if you change x, so M1 and M2 are functions of x:

\bar g(x) = M1

var(x) = M2 - M1^2

Now, for example, if you have many x's (e.g. a test set), you can compute the average of \bar g(x) and var(x) over those x's. This means you have to compute M1 and M2 for each of those x's. You can use the same learning data sets to do so.
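In code, this recipe looks something like the sketch below (illustrative only; the random lines are hypothetical stand-ins for the g_k learned from K data sets):

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-ins for the K learned hypotheses g_k: here, random lines.
hypotheses = [(lambda x, a=a, b=b: a * x + b) for a, b in rng.normal(size=(5, 2))]

def gbar_and_var(x):
    vals = np.array([g(x) for g in hypotheses])  # all K predictions at the fixed point x
    M1, M2 = vals.mean(), (vals**2).mean()       # M1 = E_D[g_k(x)], M2 = E_D[g_k(x)^2]
    return M1, M2 - M1**2                        # \bar g(x) and var(x)

# To average over a test set, compute M1 and M2 at each test point,
# reusing the same learned hypotheses every time:
x_test = rng.uniform(-1, 1, 100)
var = np.mean([gbar_and_var(x)[1] for x in x_test])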


