#1
Hello, I am having trouble understanding the procedure for binary classification using linear regression.
For ordinary linear regression, as I understand it, we can compute the weights by multiplying the pseudo-inverse of the data matrix by the y-vector. In 2D, the line obtained by linear regression is then y = w0 + w1 * x. Now for the binary case, instead of using the actual y-coordinate of a data point, we use its binary classification relative to the target function. In that case, the only difference would be that the y-vector multiplied by the pseudo-inverse consists only of +1 and -1 values. However, when I try this I get a hypothesis line nearly perpendicular to the target function. Could someone please clarify? Thanks
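To make the procedure concrete, here is a minimal sketch of the pseudo-inverse step as described, in Python with NumPy. It assumes the standard classification setup, where each point contributes a row (1, x1, x2) to the design matrix and the +1/-1 classification plays the role of y; the names (pts, labels, X, w) and the toy target line are only illustrative, not taken from the thread.

Code:

import numpy as np

def linear_regression_weights(X, y):
    """One-shot linear regression fit: w = pseudo-inverse(X) @ y.

    X -- N x 3 design matrix with rows (1, x1, x2)
    y -- length-N vector of +1 / -1 classifications
    """
    return np.linalg.pinv(X) @ y

# Toy usage with made-up points in [-1, 1]^2 and an arbitrary target line:
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(100, 2))
labels = np.sign(pts[:, 1] - (0.2 + 0.5 * pts[:, 0]))  # which side of the line each point is on
X = np.column_stack([np.ones(len(pts)), pts])
w = linear_regression_weights(X, labels)
# Under this reading, the hypothesis classifies a point as sign(w[0] + w[1]*x1 + w[2]*x2),
# so the learned w describes the decision boundary w[0] + w[1]*x1 + w[2]*x2 = 0
# rather than a fit of the form y = w[0] + w[1]*x.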
#2
On further investigation, I now think that the y-vector should be the vector whose elements are sign(wf[0] + wf[1] * x), where the target function is y = wf[0] + wf[1] * x. I now get an E_in of about 0.13.
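One common way to turn the target line into +1/-1 labels is to take the sign of y - (wf[0] + wf[1] * x), i.e. which side of the line a point falls on, and then measure E_in as the fraction of training points the learned hypothesis misclassifies. A sketch of that labelling and the E_in computation is below; the names (wf, pts, y, w) are purely illustrative and not from the thread.

Code:

import numpy as np

def e_in(X, y, w):
    """Fraction of training points misclassified by the hypothesis sign(X @ w)."""
    return np.mean(np.sign(X @ w) != y)

# Illustrative setup: random target line and points in [-1, 1]^2.
rng = np.random.default_rng(1)
wf = rng.uniform(-1, 1, size=2)                       # target line y = wf[0] + wf[1] * x
pts = rng.uniform(-1, 1, size=(100, 2))
y = np.sign(pts[:, 1] - (wf[0] + wf[1] * pts[:, 0]))  # +1 above the line, -1 below

X = np.column_stack([np.ones(len(pts)), pts])         # rows (1, x1, x2)
w = np.linalg.pinv(X) @ y                             # linear regression weights
print(e_in(X, y, w))                                  # in-sample classification error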
#3
I get an average E_in of ~0.13; however, the answer is shown as [Answer edited out by admin].
What I have done:
What have I done wrong?
#4
Quote:
BTW, if you want to discuss specific answers (chosen or excluded), you need to do so in a thread whose title starts with the warning *ANSWER*, per the announcement above.
__________________
Where everyone thinks alike, no one thinks very much
#5
Sorry for posting the answer in the comment.
I had a bug in the target function that was causing erroneous partitioning. Once I fixed it, things started working as expected. Thank you!