
LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 3 - The Linear Model (http://book.caltech.edu/bookforum/forumdisplay.php?f=110)
-   -   PLA vs Linear Regression (http://book.caltech.edu/bookforum/showthread.php?t=310)

jg2012 04-10-2012 04:18 PM

PLA vs Linear Regression
 
Can we ask week 2 lecture questions on the Linear Model here?

Building a Perceptron classifier in the week 1 homework helped me see how the weight vector, w, defines the decision boundary (or line). For two-dimensional input, w0 + w1*x1 + w2*x2 = 0 is the equation for the decision line.
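To make that concrete for myself (made-up numbers, NumPy -- not from the homework), solving w0 + w1*x1 + w2*x2 = 0 for x2 traces out the boundary:

Code:

import numpy as np

# Made-up weight vector [w0, w1, w2], just for illustration; any w with w2 != 0 works.
w = np.array([-1.0, 2.0, 3.0])

def boundary_x2(x1, w):
    """Solve w0 + w1*x1 + w2*x2 = 0 for x2: the decision line at each x1."""
    w0, w1, w2 = w
    return -(w0 + w1 * x1) / w2

x1 = np.linspace(-1.0, 1.0, 5)
print(np.column_stack([x1, boundary_x2(x1, w)]))  # (x1, x2) points on the line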

But linear regression is more like fitting a (hyper)plane to data points, which seems like an entirely different idea from a decision boundary. I'm confused about how to reconcile these two points of view. Does training w with PLA yield roughly the same result as training w with linear regression in which the y values are set to -1 and +1 for the two classes?

Here's what I'm thinking so far: maybe linear regression with x1, x2, and y = {-1, +1} finds the equation of a plane that 'passes through' _all_ of the data best (in the least-squares sense), _not_ a plane that separates the two classes. Then the set of points where that plane passes through y = 0 might be similar to the line found by PLA. But if this interpretation is right, wouldn't the w vector for linear regression have four numbers in this example (since it defines a plane), whereas w for PLA had only three?
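If I write out the algebra (assuming my setup above is right): the regression surface in (x1, x2, y)-space is the plane y = w0 + w1*x1 + w2*x2, i.e. w0 + w1*x1 + w2*x2 - y = 0. That does have four coefficients, but the coefficient on y is pinned to -1, so only three parameters are free -- the same three numbers as PLA's w. And setting y = 0 (halfway between the -1 and +1 targets) leaves w0 + w1*x1 + w2*x2 = 0, exactly the form of the PLA decision line.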

(If this view is right, then I can see how, as mentioned in class, training examples that lie further away pull the plane toward their class, moving the decision line further into that class in an unwanted way...)
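Here is a quick numerical check of that pulling effect (made-up data; not from the homework):

Code:

import numpy as np

rng = np.random.default_rng(0)

# Made-up 2-d data: class -1 centered at x1 = -1, class +1 split into a
# nearby group at x1 = +1 and a far-away group at x1 = +8.
X_neg = rng.normal([-1.0, 0.0], 0.3, size=(50, 2))
X_pos = rng.normal([+1.0, 0.0], 0.3, size=(50, 2))
X_far = rng.normal([+8.0, 0.0], 0.3, size=(50, 2))

X = np.vstack([X_neg, X_pos, X_far])
y = np.array([-1.0] * 50 + [+1.0] * 100)

Z = np.column_stack([np.ones(len(X)), X])    # prepend constant coordinate x0 = 1
w = np.linalg.lstsq(Z, y, rcond=None)[0]     # least-squares fit to the +/-1 targets

# Zero crossing of the fitted plane along the x1 axis (at x2 = 0).
# A balanced fit would cross near x1 = 0; the far +1 group drags the
# crossing toward the +1 side, i.e. into the +1 class.
print("plane crosses zero at x1 =", -w[0] / w[1])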

magdon 04-10-2012 06:15 PM

Re: PLA vs Linear Regression
 
Yes, questions relating to the book/course can be asked here at any time. This is a thought-provoking question.

Indeed, there are several things to think about.

1) There is a difference between the classification function learned by PLA and the classification boundary (a line) which separates the +1 region from the -1 region. The classification function attaches a value (±1) to every point in the input space. The classification function learned by PLA is +1 on one halfspace and -1 on the complementary halfspace. It is this classification function that is analogous to the learned linear regression function, which also attaches a value to every point in the input space -- except that the linear regression function attaches not just ±1 but any real value.

2) Yes, setting the linear regression function to 0 in some sense generates a linear boundary that 'separates' the region where the regression function is positive from the region where it is negative. In fact, one use of regression is to classify the space into +1 and -1 in exactly this way.
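As a minimal sketch of this (made-up separable data; not code from the book): fit least squares to the ±1 targets, then classify by the sign of the regression output.

Code:

import numpy as np

rng = np.random.default_rng(1)

# Made-up data with targets in {-1, +1}; the true boundary is x1 + x2 = 0.
X = rng.uniform(-1.0, 1.0, size=(100, 2))
y = np.sign(X[:, 0] + X[:, 1])

Z = np.column_stack([np.ones(len(X)), X])   # constant coordinate x0 = 1
w = np.linalg.lstsq(Z, y, rcond=None)[0]    # real-valued regression function Z @ w

g = np.sign(Z @ w)                          # threshold at 0: +1 vs -1 halfspaces
print("training accuracy:", np.mean(g == y))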

Hope this helps. You may also find Problem 3.13 in the book interesting: it provides a link between classification in 2-d and regression in 1-d.



