LFD Book Forum  

LFD Book Forum > Book Feedback - Learning From Data > Chapter 3 - The Linear Model

  #1  
Old 10-11-2017, 02:42 AM
TLMFQS
Junior Member
 
Join Date: Oct 2017
Posts: 2
Important details missing

This book is missing a lot of very important details.

First of all, it doesn't even mention the necessity of residual analysis of the errors, or that regression is useless unless that fundamental check is carried out.

Second, on page 91, how does equation (3.8) come about from the information provided?

And right below that, why does the quantity that is minimized for maximum likelihood have a -(1/N) in front? That is, where did this (1/N) come from? We can go from a product to a sum of the logs, but we can't just insert a 1/N and use the words "We can equivalently ..." to describe the transition.

Can someone clarify these issues?
  #2  
Old 10-15-2018, 05:39 AM
stnvntngrn
Junior Member
 
Join Date: Sep 2018
Posts: 6
Re: Important details missing

No one seems to have answered this. I figured I would clear this up a bit so that possible future readers do not get the wrong impression.

Let me skip the first point for now.

Regarding getting (3.8) out of the information on page 91, this comes about exactly as the authors pretty much spell out. When y = +1, we want P(y|x) = h(x) = theta(w^T x) = theta(y w^T x), since y = +1. When y = -1, we want P(y|x) = 1 - h(x) = 1 - theta(w^T x) = theta(-w^T x), using the property theta(-s) = 1 - theta(s) of the logistic function, and since y = -1 this is again theta(y w^T x). This is nice because the two cases now combine into the single formula P(y|x) = theta(y w^T x), which is (3.8).
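
In case it helps, here is a quick numerical check of that identity. This is just a small Python sketch of my own (the numbers for w and x are made up for illustration), not something from the book:

[CODE]
import numpy as np

def theta(s):
    # logistic function: theta(s) = e^s / (1 + e^s) = 1 / (1 + e^{-s})
    return 1.0 / (1.0 + np.exp(-s))

# made-up weight vector and input, purely for illustration
w = np.array([0.3, -1.2, 0.7])
x = np.array([1.0, 0.5, -2.0])
s = np.dot(w, x)          # w^T x
h = theta(s)              # h(x), the estimate of P(y = +1 | x)

# case y = +1: P(y|x) = h(x) = theta(w^T x) = theta(y * w^T x)
print(np.isclose(h, theta(+1 * s)))        # True
# case y = -1: P(y|x) = 1 - h(x) = theta(-w^T x) = theta(y * w^T x)
print(np.isclose(1.0 - h, theta(-1 * s)))  # True, since 1 - theta(s) = theta(-s)
[/CODE]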

Regarding the second point, the authors write that they can "equivalently minimize a more convenient quantity", taking a logarithm and putting a 1/N in front. Maximizing the likelihood is the same as minimizing its negative logarithm, and multiplying by a positive constant does not move the minimum: if I want to minimize A, I might as well minimize 2A or (1/2)A. So the 1/N is not a problem; it is just there for "convenience" (it turns the sum into a per-example average, keeping the numbers on a sensible scale), as the authors state.
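
Again just a sketch, using a toy data set of my own (nothing here comes from the book): it checks numerically that the quantity being minimized is exactly -(1/N) times the log of the likelihood, so the w that maximizes one minimizes the other, and the 1/N only rescales.

[CODE]
import numpy as np

def likelihood(w, X, y):
    # product over the data set of P(y_n | x_n) = theta(y_n * w^T x_n)
    return np.prod(1.0 / (1.0 + np.exp(-y * (X @ w))))

def e_in(w, X, y):
    # the "more convenient quantity": (1/N) * sum_n ln(1 + exp(-y_n * w^T x_n))
    return np.mean(np.log(1.0 + np.exp(-y * (X @ w))))

# toy data, purely illustrative
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = np.sign(X @ np.array([1.0, -2.0, 0.5]))
w = np.array([0.8, -1.5, 0.4])

N = len(y)
# e_in(w) equals -(1/N) * ln(likelihood(w)): a strictly decreasing
# transformation of the likelihood, scaled by a positive constant,
# so maximizing the likelihood and minimizing e_in pick the same w.
print(np.isclose(e_in(w, X, y), -np.log(likelihood(w, X, y)) / N))  # True
[/CODE]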

Regarding the first point, I have nothing meaningful to say, as I know next to nothing about the subject, but based on the other two points I would take the criticism expressed here by TLMFQS with a rather large grain of salt.
  #3  
Old 10-20-2018, 09:31 AM
htlin
NTU
 
Join Date: Aug 2009
Location: Taipei, Taiwan
Posts: 587
Re: Important details missing

Quote:
Originally Posted by stnvntngrn
Thanks for the clarification. Regarding the first point, my personal opinion is that regression in the statistics community (which does focus on double-checking the validity of the assumptions with residual analysis) is not fully the same as regression in the machine learning community. Given the difference in focus, it is possible that the material each community needs is different.
__________________
When one teaches, two learn.