PLA vs Linear Regression
Posted 04-10-2012, 04:18 PM by jg2012

Can we ask Week 2 lecture questions about the Linear Model here?

Building a Perceptron classifier in the Week 1 homework helped me see how the weight vector w defines the decision boundary (a line, in two dimensions). For two-dimensional input, w0 + w1*x1 + w2*x2 = 0 is the equation of the decision line.
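
To make that concrete, here's a minimal sketch of what I mean (the function names and example weights are my own, not from the homework):

[code]
import numpy as np

# For 2-D inputs, w = (w0, w1, w2) defines the decision line
# w0 + w1*x1 + w2*x2 = 0.

def perceptron_classify(w, x1, x2):
    """Predicted class, +1 or -1: the sign of w0 + w1*x1 + w2*x2."""
    return np.sign(w[0] + w[1] * x1 + w[2] * x2)

def decision_line_x2(w, x1):
    """x2 on the decision line at a given x1 (assumes w2 != 0)."""
    return -(w[0] + w[1] * x1) / w[2]

w = np.array([-1.0, 2.0, 3.0])             # example weight vector
print(perceptron_classify(w, 1.0, 1.0))    # 1.0: point is on the + side
print(decision_line_x2(w, 1.0))            # -0.333...: the line at x1 = 1
[/code]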

But linear regression feels more like fitting a line (or hyperplane) to data points, which seems orthogonal to finding a decision boundary, and I'm not sure how to reconcile the two points of view. Does training w with PLA yield roughly the same result as training w with linear regression where the y values are set to -1 and +1 for the two classes? A sketch of the comparison I have in mind is below.
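
Here's the kind of experiment I mean (the toy data and code are my own setup, not the homework's):

[code]
import numpy as np

rng = np.random.default_rng(0)

# Toy, linearly separable 2-D data: each row of X is [1, x1, x2];
# the labels come from a "true" line.
X = np.c_[np.ones(100), rng.uniform(-1, 1, (100, 2))]
w_true = np.array([0.1, -1.0, 1.0])
y = np.sign(X @ w_true)

# PLA: repeatedly pick a misclassified point and update w += y_i * x_i.
w_pla = np.zeros(3)
while True:
    mis = np.flatnonzero(np.sign(X @ w_pla) != y)
    if mis.size == 0:
        break
    i = rng.choice(mis)
    w_pla += y[i] * X[i]

# Linear regression on the same +-1 labels: w = pinv(X) @ y.
w_lin = np.linalg.pinv(X) @ y

# The two w vectors differ (even in scale), so compare the decision
# lines they induce: w0 + w1*x1 + w2*x2 = 0.
for name, w in (("PLA   ", w_pla), ("linreg", w_lin)):
    print(name, "slope:", -w[1] / w[2], "intercept:", -w[0] / w[2])
[/code]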

Here's what I'm thinking so far: maybe linear regression with x1, x2, and y in {-1, +1} finds the equation of a plane that best 'passes through' _all_ of the data (in the least-squares sense), _not_ a plane that separates the two classes. Then the line where that plane crosses y = 0 might be similar to the line found by PLA. But if this interpretation is right, wouldn't the w vector for linear regression need 4 numbers in this example (since it defines a plane), whereas w for PLA had only three?
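
And here is a quick sketch of the "plane through the data" picture I'm describing (again, toy data of my own choosing):

[code]
import numpy as np

rng = np.random.default_rng(1)
X = np.c_[np.ones(50), rng.uniform(-1, 1, (50, 2))]   # rows: [1, x1, x2]
y = np.sign(X @ np.array([0.2, 1.0, -1.0]))           # labels in {-1, +1}

# Least-squares plane over the (x1, x2) input space:
#   y_hat(x1, x2) = w0 + w1*x1 + w2*x2
w = np.linalg.pinv(X) @ y
y_hat = X @ w

# The plane should sit above 0 over one class and below 0 over the other.
print("mean y_hat on +1 class:", y_hat[y == 1].mean())
print("mean y_hat on -1 class:", y_hat[y == -1].mean())

# Where the plane crosses y_hat = 0: w0 + w1*x1 + w2*x2 = 0,
# the same form as the PLA decision line.
print("zero cross-section: x2 =", -w[1] / w[2], "* x1 +", -w[0] / w[2])
[/code]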

(If this view is right, then I can see how, as mentioned in class, training examples that lie farther out in one class pull the plane toward them, moving the decision line farther into that class in an unwanted way...)