This book is missing a lot of very important details.
First of all, it doesn't even mention the necessity of residual analysis of the errors, and that regression is useless unless that fundamental check is performed. Second, on page 91, how is equation 3.8 derived from the information provided? And right below that, why does the function that is minimized for ML have a -(1/N)? That is, where did this (1/N) come from? We can go from a product to a sum of logs, but we can't just add a 1/N and use the words "We can equivalently ..." to describe the transition. Can someone clarify these issues?
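On the 1/N point, my guess is that "equivalently" refers to the fact that multiplying an objective by a positive constant (here 1/N, turning a sum into an average) does not change where its minimum lies. A quick numeric sanity check (toy data and a grid search, not from the book):

```python
import math

# Toy data and candidate means for a Gaussian with known sigma = 1.
data = [1.2, 0.8, 1.5, 1.1, 0.9]
grid = [i / 100 for i in range(-300, 301)]

def nll(mu, xs):
    # Negative log-likelihood of xs under N(mu, 1), dropping constants:
    # the product of densities becomes a sum of squared residuals.
    return sum(0.5 * (x - mu) ** 2 for x in xs)

N = len(data)
best_sum = min(grid, key=lambda mu: nll(mu, data))       # minimize the sum
best_avg = min(grid, key=lambda mu: nll(mu, data) / N)   # minimize the average

# Scaling by the positive constant 1/N cannot change the minimizer,
# so both searches land on the same mu (the sample mean here).
assert best_sum == best_avg
print(best_sum)
```

So the 1/N is cosmetic for the purpose of finding the ML estimate; it only normalizes the objective to a per-sample scale.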