#1
When applying the one-step equation for linear regression, the vector of weights is obtained directly, with all its components.

What if we impose from the beginning a restriction on the form of the hypothesis, say h(x) = 3 + w1*x1 instead of the full linear form h(x) = w0 + w1*x1? In other words, we want w0 to be 3, no matter what. Is the one-step equation still applicable somehow? For comparison, if we were to apply gradient descent with the same constraint, we could do it very easily, just by keeping w0 fixed at its initial value (3).
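To make the comparison concrete, here is a minimal numpy sketch of that gradient-descent variant (the dataset and the learning rate are made up for illustration); only w1 is updated, while w0 stays pinned at 3:

```python
import numpy as np

# Toy data, invented for illustration: y is roughly 3 + 2*x1
rng = np.random.default_rng(0)
x1 = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 + 2.0 * x1 + rng.normal(scale=0.1, size=100)

w0 = 3.0    # fixed by the constraint; never updated
w1 = 0.0    # the only free weight
eta = 0.1   # learning rate (arbitrary choice)

for _ in range(1000):
    err = (w0 + w1 * x1) - y             # residuals of h(x) = w0 + w1*x1
    w1 -= eta * 2.0 * np.mean(err * x1)  # gradient step on w1 only

print(w1)   # converges near 2
```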
#2
Yes, the one-step equation is still applicable. Since w0 is fixed at 3, it is not a parameter to be learned, so move its contribution to the other side: replace each target value y_n by y_n - 3, drop the column of ones from the input matrix, and apply the one-step equation to solve for w1 alone. The same trick works for any weight you fix in advance: its known contribution is a constant that you subtract from y before solving for the remaining weights.
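Here is a minimal numpy sketch of that computation (the dataset is invented for illustration): subtract the fixed w0 = 3 from y, drop the column of ones, and apply the one-step equation to what remains:

```python
import numpy as np

# Toy data, invented for illustration: y is roughly 3 + 2*x1
rng = np.random.default_rng(0)
x1 = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 + 2.0 * x1 + rng.normal(scale=0.1, size=100)

w0 = 3.0               # fixed in advance by the constraint
y_adj = y - w0         # move the known term to the other side
X = x1.reshape(-1, 1)  # no column of ones: w0 is not learned

# One-step equation (pseudo-inverse) on the reduced problem
w1 = np.linalg.pinv(X) @ y_adj
print(w1)  # close to [2]
```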
__________________
Where everyone thinks alike, no one thinks very much
#3
Thank you, sir, I think I finally got it. So if I want to pin down M of the weights, I just move the M constant terms to the y vector (subtracting them from it), and I am left with a matrix X with d+1-M columns (the column of ones may also be gone, if w0 is among the fixed weights). The result is a vector of d+1-M weights.
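For anyone reading later, here is a sketch of that general case in numpy (the function name constrained_regression and the toy data are my own invention): the M fixed weights contribute a known constant that gets subtracted from y, and the one-step equation is applied to the remaining d+1-M columns:

```python
import numpy as np

def constrained_regression(X, y, fixed):
    """One-step linear regression with some weights pinned down.

    X     : (N, d+1) matrix, including the column of ones
    y     : (N,) target vector
    fixed : dict {column index: fixed weight value}
    Returns the full (d+1,) weight vector with fixed entries in place.
    """
    fixed_idx = sorted(fixed)
    free_idx = [j for j in range(X.shape[1]) if j not in fixed]
    w_fixed = np.array([fixed[j] for j in fixed_idx])

    # Subtract the known contribution of the M fixed weights from y,
    # then solve for the remaining d+1-M weights in one step.
    y_adj = y - X[:, fixed_idx] @ w_fixed
    w_free = np.linalg.pinv(X[:, free_idx]) @ y_adj

    w = np.empty(X.shape[1])
    w[fixed_idx] = w_fixed
    w[free_idx] = w_free
    return w

# Example: fix w0 = 3, learn w1 and w2 (toy data)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.uniform(-1, 1, (100, 2))])
y = X @ np.array([3.0, 2.0, -1.0]) + rng.normal(scale=0.1, size=100)
print(constrained_regression(X, y, {0: 3.0}))  # close to [3, 2, -1]
```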
Tags: regression constraint