LFD Book Forum  

LFD Book Forum > Book Feedback - Learning From Data > Chapter 2 - Training versus Testing

#1  09-30-2012, 11:17 PM
mileschen (Member, Join Date: Sep 2012, Posts: 11)
Questions on Problem 2.24

Though I have solved this problem, I am still a little confused.
(a) Is E_out the test error E_test computed on a test data set T of size N for a particular hypothesis g learned from a particular training data set D (of two points)?
(b) Should the bias be computed on the same test data set T? That is, bias = E_x[bias(x)] = (1/N) * sum_i (g_bar(x_i) - f(x_i))^2 over the x_i in T, where g_bar() is the average function.
(c) Should the var be computed from the K data sets used to learn the average function g_bar, and evaluated on the test data set T? That is, var = E_x[var(x)] = (1/N) * sum_i [(1/K) * sum_k (g_k(x_i) - g_bar(x_i))^2].

Should E_out, bias, and var all be computed on the same test data set?
#2  10-01-2012, 05:10 AM
magdon (RPI, Join Date: Aug 2009, Location: Troy, NY, USA, Posts: 595)
Re: Questions on Problem 2.24

(a) For this problem, if you are given a linear hypothesis, it should be possible to compute E_{out} analytically. However, if you computed it on a test set T, that is fine.

(b) Yes. It is also true that E_test = bias + var. Why? (Because we showed this for every x.)

(c) The var is computed using the same data sets on which you learned and computed the average function. The average variance is computed over the distribution of the inputs; in the case where you use a test set, the average is taken over the test set. Just like bias(x), var(x) is a function of x that captures how variable your prediction is at the point x. You take all your predictions at x learned from different data sets and compute the variance of those (just as you take the average of those to get the average function).

Remember that the only purpose of the test set or the input distribution P(x) is to compute an average over x of all these quantities. If you had a single test point, as discussed in class, everything works there too.
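As an illustration of the whole computation, here is a minimal Monte Carlo sketch in Python. It assumes a particular setup for Problem 2.24 (target f(x) = x^2, inputs uniform on [-1, 1], two-point data sets, and the line through the two points as the learned hypothesis); if your setup differs, swap in your own f and learning step. The test set here plays the role of P(x), exactly as described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # assumed target; replace if your Problem 2.24 setup differs
    return x ** 2

K = 10000                             # number of training data sets D
x_test = np.linspace(-1, 1, 1001)     # test set standing in for P(x)

# Learn g_k on each two-point data set and record its predictions on x_test.
preds = np.empty((K, x_test.size))
for k in range(K):
    x1, x2 = rng.uniform(-1, 1, size=2)
    a = (f(x2) - f(x1)) / (x2 - x1)   # slope of the line through the two points
    b = f(x1) - a * x1                # intercept
    preds[k] = a * x_test + b

g_bar = preds.mean(axis=0)                    # average function g_bar(x)
bias = np.mean((g_bar - f(x_test)) ** 2)      # E_x[(g_bar(x) - f(x))^2]
var = np.mean(preds.var(axis=0))              # E_x[E_D[(g_k(x) - g_bar(x))^2]]
e_out = np.mean((preds - f(x_test)) ** 2)     # E_D[E_x[(g_k(x) - f(x))^2]]

print(bias, var, e_out)   # e_out equals bias + var up to floating-point error
```

Note that e_out = bias + var holds here as an exact identity of the sample averages, which is the "we showed this for every x" point in (b); under the assumed f(x) = x^2 setup the estimates should also land near the analytic values bias = 0.2 and var = 1/3.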


Quote:
Originally Posted by mileschen
Should E_out, bias, and var all be computed on the same test data set?
__________________
Have faith in probability
#3  10-01-2012, 06:34 AM
mileschen (Member, Join Date: Sep 2012, Posts: 11)
Re: Questions on Problem 2.24

I still have some questions.
var = E_x[var(x)], where var(x) = E_D[(g_k(x) - g_bar(x))^2] is computed from the K data sets used to learn the average function g_bar. How, then, do I compute var, the expected value of var(x) over x?

If var is computed on the same data sets used to learn the average function, how do I compute bias = E_x[bias(x)]? Should it also be computed on the data sets used to learn the average function?
#4  10-01-2012, 07:40 AM
magdon (RPI, Join Date: Aug 2009, Location: Troy, NY, USA, Posts: 595)
Re: Questions on Problem 2.24

The point x has nothing to do with the data sets on which you learn. Fix any point x.

You can now compute M1 = E_D[g_k(x)].

You can also compute M2 = E_D[g_k(x)^2].

M1 and M2 are just two numbers which apply to the point x. Clearly M1 and M2 will change if you change x, so M1 and M2 are functions of x:

\bar g(x) = M1

var(x) = M2 - M1^2

Now, for example, if you have many x's (e.g. a test set), you can compute the average of \bar g(x) and var(x) over those x's. This means you have to compute M1 and M2 for each of those x's. You can use the same learning data sets to do so.
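To make this recipe concrete, here is a small numeric sketch in Python (again assuming, purely for illustration, the f(x) = x^2 line-through-two-points setup of Problem 2.24): fix one point x, learn on many data sets, and compute M1 and M2 there.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return x ** 2   # assumed target; replace with the problem's f

x0 = 0.5        # fix any point x
K = 100000      # number of data sets D

g_at_x0 = np.empty(K)
for k in range(K):
    x1, x2 = rng.uniform(-1, 1, size=2)
    a = (f(x2) - f(x1)) / (x2 - x1)   # line through the two sampled points
    b = f(x1) - a * x1
    g_at_x0[k] = a * x0 + b           # g_k evaluated at the fixed point x0

M1 = g_at_x0.mean()            # E_D[g_k(x0)]  ->  \bar g(x0)
M2 = np.mean(g_at_x0 ** 2)     # E_D[g_k(x0)^2]
var_x0 = M2 - M1 ** 2          # var(x0) = M2 - M1^2

print(M1, var_x0)
```

Sweeping x0 over a test set and averaging the resulting var(x0) values gives the overall var, reusing the same K data sets for every x0.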

Quote:
Originally Posted by mileschen
var(x) = E_D[(g_k(x) - g_bar(x))^2] is computed from the K data sets used to learn the average function. How do I then compute var = E_x[var(x)] and bias = E_x[bias(x)]?
The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.