Thread: Hw5 Q8 E_out
05-06-2013, 11:14 AM
yaser
Default Re: Hw5 Q8 E_out

Originally Posted by arcticblue:
I am also a little unsure about exactly how this equation works:
E_{out} = \frac{1}{M} \sum_{i=1}^M \ln (1+e^{-Y_i w^\top X_i})

Obviously, the more negative -Y_i w^\top X_i is, the closer E_{out} is to zero, which is good. So is w supposed to be normalized? I presume so, because otherwise I could just scale w up and make E_{out} very small. And if it is normalized, then the values I'm getting for E_{in} and E_{out} are both much greater than any of the options. (Maybe it's meant to be like that; if so, it's quite unnerving.)
No normalization. The value of {\bf w} is determined iteratively by the specific algorithm given in the lecture. If {\bf w} 'agrees' with all the training examples, then indeed the algorithm will try to scale it up, to push the value of the logistic function closer to a hard threshold. When you evaluate the quoted formula on a test set, {\bf w} is frozen; no scaling or any other change to it is allowed.
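To make the point concrete, here is a minimal sketch (assuming NumPy, with made-up data and a made-up frozen weight vector, not the homework's actual dataset) of evaluating the quoted formula on a test set. The weight vector w is used exactly as it comes out of training, with no normalization or rescaling:

```python
import numpy as np

def cross_entropy_error(w, X, Y):
    """Average cross-entropy error: (1/M) * sum_i ln(1 + exp(-Y_i w^T X_i))."""
    margins = Y * (X @ w)              # signed agreement Y_i * w^T X_i per point
    return np.mean(np.log1p(np.exp(-margins)))

# Hypothetical test set: 4 points in 2-D, each with a leading bias coordinate.
X = np.array([[1.0,  0.5,  0.2],
              [1.0, -1.0,  0.4],
              [1.0,  0.3, -0.7],
              [1.0, -0.2, -0.1]])
Y = np.array([1.0, -1.0, 1.0, -1.0])

# Frozen weights, as produced by the training algorithm (values made up here).
w = np.array([0.1, 1.2, -0.5])

print(cross_entropy_error(w, X, Y))
```

Note that with w = 0 every margin is zero and the error is exactly ln 2, and scaling up a w that classifies every point correctly does drive the error toward zero; the point of the answer is simply that at evaluation time no such rescaling is permitted.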
Where everyone thinks alike, no one thinks very much