LFD Book Forum: Important details missing
#1
10-11-2017, 02:42 AM
 TLMFQS Junior Member Join Date: Oct 2017 Posts: 2
Important details missing

This book is missing a lot of very important details.

First of all, it doesn't even mention the necessity of residual analysis of the errors, or how regression is useless unless that fundamental check is carried out.

Second, on page 91, how does equation (3.8) come about from the information provided?

And right below that, why does the quantity minimized for maximum likelihood have a
-(1/N)? That is, where did this (1/N) come from? We can go from a product to a sum of the logs, but we can't just insert a 1/N and use the words "We can equivalently ..." to describe the transition.

Can someone clarify these issues?
#2
10-15-2018, 05:39 AM
 stnvntngrn Junior Member Join Date: Sep 2018 Posts: 6
Re: Important details missing

No one seems to have answered this. I figured I would clear this up a bit so that possible future readers do not get the wrong impression.

Let me skip the first point for now.

Regarding getting (3.8) out of the information on page 91, it comes about exactly as the authors spell out. When y = +1, we want P(y|x) = h(x) = theta(w^T x) = theta(y w^T x), since y = +1. When y = -1, we want P(y|x) = 1 - h(x) = 1 - theta(w^T x) = theta(-w^T x), using the property theta(-s) = 1 - theta(s) of the logistic function, and finally we can write this as theta(y w^T x) since y = -1. This is nice because the two cases now combine into the single formula P(y|x) = theta(y w^T x).
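If it helps, here is a quick numerical sanity check of that identity (just a sketch; theta below is the logistic function e^s/(1+e^s), and w, x are arbitrary made-up vectors, not anything from the book):

[code]
import numpy as np

def theta(s):
    # logistic function: theta(s) = e^s / (1 + e^s)
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(0)
w = rng.normal(size=3)   # made-up weight vector
x = rng.normal(size=3)   # made-up input
s = np.dot(w, x)

# case y = +1: P(y|x) = h(x) = theta(w^T x) = theta(y w^T x)
print(np.isclose(theta(s), theta(+1 * s)))        # True
# case y = -1: P(y|x) = 1 - h(x) = 1 - theta(w^T x) = theta(-w^T x) = theta(y w^T x)
print(np.isclose(1.0 - theta(s), theta(-1 * s)))  # True
[/code]

The second check is exactly the property theta(-s) = 1 - theta(s) doing the work.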

Regarding the second point, the authors write that they can "equivalently minimize a more convenient quantity", taking a logarithm and putting a 1/N in front. Taking the log turns the product into a sum without changing where the maximum is, and multiplying by a positive constant does not change where the minimum is either: if I want to minimize A, I might as well minimize 2A or (1/2)A. Hence the 1/N is not a problem; it is just there for "convenience" (e.g. the typical scale of the resulting numbers), as the authors state.
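To make that concrete, here is a small sketch (made-up 1-D data, nothing from the book): whether or not you divide the negative log-likelihood by N, the minimizer is the same; only the value at the minimum changes.

[code]
import numpy as np

rng = np.random.default_rng(1)
N = 100
x = rng.normal(size=N)
y = np.sign(x + 0.8 * rng.normal(size=N))    # noisy labels in {-1, +1}

def neg_log_likelihood(w):
    # -sum_n ln theta(y_n w x_n) = sum_n ln(1 + exp(-y_n w x_n))
    return np.sum(np.log(1.0 + np.exp(-y * w * x)))

ws = np.linspace(-5, 5, 2001)                      # candidate weights
A = np.array([neg_log_likelihood(w) for w in ws])  # quantity without the 1/N
B = A / N                                          # same quantity with the 1/N

print(np.argmin(A) == np.argmin(B))  # True: the 1/N does not move the minimizer
[/code]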

Regarding the first point I have nothing meaningful to say, as I know next to nothing about the subject, but based on the other two points I would take the criticism expressed here by TLMFQS with a rather large grain of salt.
#3
10-20-2018, 09:31 AM
 htlin NTU Join Date: Aug 2009 Location: Taipei, Taiwan Posts: 610
Re: Important details missing

Quote:
Originally Posted by stnvntngrn (post #2 above)
Thanks for the clarification. Regarding the first point, my personal opinion is that regression in the statistics community (which does focus on double-checking the validity of the assumptions with residual analysis) is not quite the same as regression in the machine learning community. Given the difference in focus, the material that is necessary for one community may not be necessary for the other.
__________________
When one teaches, two learn.
#4
09-07-2020, 09:41 PM
 LoordEgy Junior Member Join Date: Jul 2018 Posts: 7
Re: Important details missing

Quote:
Originally Posted by stnvntngrn (post #2 above)
Thank you for the clarification.




