LFD Book Forum  

  #1  
08-12-2012, 12:45 PM
munchkin
Member
 
Join Date: Jul 2012
Posts: 38
How To Interpret Cross Entropy Error Training Versus Test?

A likelihood analysis for logistic regression yields an expression involving (1/probability), so the terminology of entropy can be applied. This I understand. What I'm not clear on is what the calculated number means for in-sample versus out-of-sample performance. For the training data, the calculated weights are supposed to minimize Ein(w), in accord with the likelihood viewpoint (with these weights, the training data is the most likely given the final hypothesis). OK. But does it make any sense to compare the calculated Ein for the final weights with the Eout calculated by applying those weights to the test data set?
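
To spell out the expression I mean, in the book's notation (labels y_n \in \{-1,+1\}), maximizing the likelihood is equivalent to minimizing

E_{\rm in}(\mathbf{w}) = \frac{1}{N}\sum_{n=1}^{N} \ln\frac{1}{P(y_n \mid \mathbf{x}_n)} = \frac{1}{N}\sum_{n=1}^{N} \ln\!\left(1 + e^{-y_n \mathbf{w}^{\rm T}\mathbf{x}_n}\right)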

I appreciate any clarification that may be provided. Thanks.
  #2  
08-12-2012, 03:51 PM
yaser
Caltech
 
Join Date: Aug 2009
Location: Pasadena, California, USA
Posts: 1,477
Re: How To Interpret Cross Entropy Error Training Versus Test?

Quote:
Originally Posted by munchkin View Post
[...] But does it make any sense to compare the calculated Ein for the final weights with the Eout calculated by applying those weights to the test data set?
It makes sense because of generalization. For any error measure, E_{\rm out} tends to be close to E_{\rm in} under conditions similar to those in the VC analysis. For non-binary error measures, the analysis is more involved and the variance of the error measure plays a role, but the same principle applies.
__________________
Where everyone thinks alike, no one thinks very much
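
A minimal sketch of the comparison being discussed, assuming NumPy, a weight vector w already found by logistic regression, and hypothetical arrays X_train/y_train and X_test/y_test with labels in {-1,+1}:

import numpy as np

def cross_entropy_error(w, X, y):
    """Cross-entropy error (1/N) * sum_n ln(1 + exp(-y_n w.x_n)).

    The same formula is used for E_in (training set) and E_out
    (test set), which is what makes the two numbers comparable.
    Labels y_n must be +/-1.
    """
    # logaddexp(0, z) = ln(1 + e^z), computed in a numerically stable way
    return np.mean(np.logaddexp(0.0, -y * (X @ w)))

# E_in  = cross_entropy_error(w, X_train, y_train)
# E_out = cross_entropy_error(w, X_test,  y_test)

Generalization, as described above, is what makes E_out computed this way tend to land close to E_in.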