#3, 10-20-2018, 09:31 AM
htlin
NTU
 
Join Date: Aug 2009
Location: Taipei, Taiwan
Posts: 601
Re: Important details missing

Quote:
Originally Posted by stnvntngrn
No one seems to have answered this. I figured I would clear this up a bit so that possible future readers do not get the wrong impression.

Let me skip the first point for now.

Regarding getting (3.8) out of the information on page 91, this comes about exactly as the authors pretty much spell out. When $y=+1$, we want to get $h(x) = \theta(w^T x) = \theta(y\,w^T x)$, since $y=+1$. When $y=-1$, we want to get $1-h(x) = 1-\theta(w^T x) = \theta(-w^T x)$, using the property $\theta(-s) = 1-\theta(s)$ of the function $\theta$, and finally we can write this as $\theta(y\,w^T x)$ since $y=-1$. This is nice because we can now combine the two cases into one formula (a numerical check of this appears after the quote).

Regarding the second point, the authors write that they can "equivalently minimize a more convenient quantity", taking a logarithm and putting a $1/N$ in front. Neither step moves the minimizer: the logarithm is monotone, and if I want to minimize $A$, I might as well minimize $2A$ or $\frac{1}{2}A$, hence the $1/N$ is not a problem. It is just there for "convenience" (e.g. the typical scale of the resulting numbers), as the authors state (see the second sketch after the quote).

Regarding the first point, I have nothing meaningful to say, as I know next to nothing about the subject, but based on the other two points I would take the criticism expressed here by TLMFQS with a rather large grain of salt.
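
To make the case-combining step in the quote concrete, here is a minimal numerical check in Python; the weight vector and input point are made up purely for illustration:

Code:
import numpy as np

def theta(s):
    # logistic function: theta(s) = 1 / (1 + exp(-s))
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(0)
w = rng.normal(size=3)   # hypothetical weight vector (illustration only)
x = rng.normal(size=3)   # hypothetical input point (illustration only)
s = w @ x                # the signal w^T x
h = theta(s)             # h(x) = theta(w^T x)

# y = +1: theta(y * s) should equal h(x)
assert np.isclose(theta(+1 * s), h)
# y = -1: theta(y * s) should equal 1 - h(x), since theta(-s) = 1 - theta(s)
assert np.isclose(theta(-1 * s), 1.0 - h)
print("theta(y w^T x) reproduces both cases of (3.8)")

Both assertions pass, which is exactly why the two cases can be written as one formula in $y$.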
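Likewise, a minimal sketch of the second point, assuming a tiny made-up one-dimensional data set: since the logarithm is monotone and $1/N$ is a positive constant, maximizing the likelihood and minimizing the scaled negative log-likelihood pick out the same weight.

Code:
import numpy as np

def theta(s):
    return 1.0 / (1.0 + np.exp(-s))

# tiny made-up 1-D data set (assumption for illustration)
x = np.array([0.5, -1.2, 2.0, -0.3])
y = np.array([+1, -1, +1, -1])
N = len(x)

ws = np.linspace(-5.0, 5.0, 1001)  # candidate weights to sweep over
# likelihood of the data at each candidate w: prod_n theta(y_n * w * x_n)
lik = np.array([np.prod(theta(y * w * x)) for w in ws])
# the "more convenient quantity": take the log, negate, put 1/N in front
err = -np.log(lik) / N

# the monotone log and the positive constant 1/N do not move the optimum
assert np.argmax(lik) == np.argmin(err)
print("best w either way:", ws[np.argmax(lik)])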
Thanks for the clarification. Regarding the first point, my personal opinion is that regression in the statistics community (which does focus on double-checking the validity of the assumptions with residual analysis) is not fully the same as regression in the machine learning community. Given the difference in focus, it is possible that the material each community needs is different.
__________________
When one teaches, two learn.