LFD Book Forum

Chapter 3 - The Linear Model > Important details missing

TLMFQS 10-11-2017 01:42 AM

Important details missing
 
This book is missing a lot of very important details.

First of all, it doesn't even mention the necessity of residual analysis of the errors, and how regression is useless unless that fundamental check is carried out.

Second, on page 91, how does equation (3.8) come about from the information provided?

And right below that, why does the quantity that is minimized for maximum likelihood have a
-(1/N) in front? That is, where did this 1/N come from? We can go from a product to a sum of the logs, but we can't just insert a 1/N and use the words "We can equivalently ..." to describe the transition.

Can someone clarify these issues?

stnvntngrn 10-15-2018 04:39 AM

Re: Important details missing
 
No one seems to have answered this. I figured I would clear this up a bit so that possible future readers do not get the wrong impression.

Let me skip the first point for now.

Regarding getting (3.8) out of the information on page 91, this comes about exactly as the authors spell out. When y = +1, we want P(y|x) = h(x) = theta(w^T x) = theta(y w^T x), since y = +1. When y = -1, we want P(y|x) = 1 - h(x) = 1 - theta(w^T x) = theta(-w^T x) by the properties of the function theta, and finally we can write this as theta(y w^T x) since y = -1. This is nice because we can now combine the two cases in one formula.
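For concreteness, here is the one property of theta that does the work, written out under the book's definition theta(s) = e^s / (1 + e^s) (just a sketch of the algebra, nothing beyond what page 91 already assumes):

\[
1 - \theta(s) = 1 - \frac{e^{s}}{1+e^{s}} = \frac{1}{1+e^{s}} = \frac{e^{-s}}{1+e^{-s}} = \theta(-s),
\]

so both cases collapse into the single expression P(y|x) = theta(y w^T x), which is exactly (3.8).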

Regarding the second point, the authors write that they can "equivalently minimize a more convenient quantity", taking a logarithm (which is monotone, so it preserves the optimizer) and putting a -(1/N) in front (the minus sign turns the maximization into a minimization). If I want to minimize A, I might as well minimize 2A or (1/2)A, hence the 1/N is not a problem. It is just there for "convenience" (e.g. the typical scale of the resulting numbers), as the authors state.
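Spelled out as a chain of equivalent problems (again only assuming theta(s) = e^s / (1 + e^s), so that 1/theta(s) = 1 + e^{-s}):

\[
\max_{\mathbf{w}} \prod_{n=1}^{N} P(y_n \mid \mathbf{x}_n)
\iff
\min_{\mathbf{w}} \, -\frac{1}{N} \sum_{n=1}^{N} \ln P(y_n \mid \mathbf{x}_n)
= \min_{\mathbf{w}} \frac{1}{N} \sum_{n=1}^{N} \ln\!\left(1 + e^{-y_n \mathbf{w}^{T} \mathbf{x}_n}\right).
\]

The first step uses that ln is increasing and that negating flips max into min; multiplying by the positive constant 1/N changes the value of the objective but not the minimizing w, which is all "equivalently" needs to mean here.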

Regarding the first point I have nothing meaningful to say, as I know next to nothing about the subject, but based on the other two points I would take the criticism expressed here by TLMFQS with a rather large grain of salt.

htlin 10-20-2018 08:31 AM

Re: Important details missing
 
Quote:

Originally Posted by stnvntngrn (Post 13150)
No one seems to have answered this. I figured I would clear this up a bit so that possible future readers do not get the wrong impression. [...]

Thanks for the clarification. Regarding the first point, my personal opinion is that regression in the statistics community (which does focus on double-checking the validity of the assumptions with residual analysis) is not fully the same as regression in the machine learning community. Given the difference in focus, it is possible that the necessary materials for different communities are different.

