
#1




Linear Regression Classification HW2 Q5/6
Hello, I am having trouble understanding the procedure for binary classification using linear regression.
For ordinary linear regression, as I understand it, we compute the weights by multiplying the pseudoinverse of the data matrix by the y-vector. In 2D, the line obtained by linear regression is then y = w0 + w1 * x. For the binary case, instead of using a data point's actual y-coordinate, we use its binary classification relative to the target function. The only difference, then, would be that the y-vector multiplied by the pseudoinverse matrix consists only of +1 and -1 values. However, when I try this I get a hypothesis line nearly perpendicular to the target function. Could someone please clarify? Thanks 
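For what it's worth, here is a minimal NumPy sketch of the procedure as I understand it (the target-line coefficients wf0, wf1 and the sample size are arbitrary placeholders, not anything from the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target line y = wf0 + wf1 * x; points are labeled by
# which side of the line they fall on, not by their y-coordinate.
wf0, wf1 = 0.2, -0.5

N = 100
pts = rng.uniform(-1, 1, size=(N, 2))                     # points (x, y) in [-1, 1]^2
labels = np.sign(pts[:, 1] - (wf0 + wf1 * pts[:, 0]))     # +1 / -1 side of the line

# Design matrix with a bias column; weights via the pseudoinverse.
X = np.column_stack([np.ones(N), pts])
w = np.linalg.pinv(X) @ labels

# In-sample error: fraction of points the linear hypothesis misclassifies.
E_in = np.mean(np.sign(X @ w) != labels)
```

Note that the label here depends on both coordinates of the point (its position relative to the line), which is where I suspect it is easy to go wrong.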
#2




Re: Linear Regression Classification HW2 Q5/6
On further investigation, I now think that the y-vector should be the vector whose elements are sign(wf[0] + wf[1] * x), where the target function is y = wf[0] + wf[1] * x. I now have an E_in of about 0.13.

#3




Re: Linear Regression Classification HW2 Q5/6
I get an average E_in of ~0.13; however, the answer is shown as [Answer edited out by admin].
What I have done:
What have I done wrong? 
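In case it helps with comparing numbers: here is a rough sketch of how one might estimate the average E_in over many independent runs, with a fresh random target line each time (the line-through-two-random-points construction and the run count are my own assumptions, not taken from the assignment):

```python
import numpy as np

rng = np.random.default_rng(1)

def run_experiment(N=100):
    """One run: random target line, N labeled points, regression E_in."""
    # Assumed target: the line through two random points in [-1, 1]^2.
    p, q = rng.uniform(-1, 1, size=(2, 2))
    slope = (q[1] - p[1]) / (q[0] - p[0])
    intercept = p[1] - slope * p[0]

    pts = rng.uniform(-1, 1, size=(N, 2))
    y = np.sign(pts[:, 1] - (intercept + slope * pts[:, 0]))

    X = np.column_stack([np.ones(N), pts])
    w = np.linalg.pinv(X) @ y
    return np.mean(np.sign(X @ w) != y)

# Average in-sample error over many independent runs.
avg_E_in = np.mean([run_experiment() for _ in range(200)])
```

With this setup the average E_in comes out well below 0.13, which suggests the labeling step is where the discrepancy lies.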
#4




Re: Linear Regression Classification HW2 Q5/6
Quote:
BTW, if you want to discuss specific answers (chosen or excluded), you need to do so in a thread whose title starts with the warning *ANSWER* per the announcement above.
__________________
Where everyone thinks alike, no one thinks very much 
#5




Re: Linear Regression Classification HW2 Q5/6
Sorry for posting the answer in the comment.
I had a bug in the target function that caused erroneous partitioning of the data points. Once I fixed it, things started working as expected. Thank you! 