#1
I don't quite understand the first classification method given by the problem: "Linear Regression for classification followed by pocket for improvement". Since the weight vector returned by linear regression is an analytically optimal result, how can the pocket algorithm improve it?
#2
It is only analytically optimal for regression. It can be suboptimal for classification.
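For concreteness, here is a minimal sketch of what "linear regression for classification" usually looks like in NumPy (the function names are just illustrative): solve the least-squares problem with the pseudo-inverse, then classify each point by the sign of the linear signal.
Code:
import numpy as np

def linear_regression_weights(X, y):
    """One-shot least squares: w = pinv(X) . y.
    X is the N x (d+1) data matrix with a leading column of 1s, y holds +/-1 labels."""
    return np.linalg.pinv(X) @ y

def classification_error(w, X, y):
    """Fraction of points where sign(w . x) disagrees with the label."""
    return np.mean(np.sign(X @ w) != y)
The pseudo-inverse minimizes the squared error of the real-valued output, not the number of sign disagreements, which is why the resulting weights need not be the best linear classifier.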
__________________
Have faith in probability
#3
Hi Professor, you said that the weight vector w learned from Linear Regression could be suboptimal for classification. However, after running the pocket algorithm for 1,000,000 iterations, w still does not change, which would suggest that the w learned from Linear Regression is already optimal. Is that true? Maybe I made a mistake somewhere.
#4
The pocket algorithm is indeed able to improve on the linear regression weights. Mine decreased the in-sample error from 0.8% to around 0.4%.
#5
Did you set the w learned from Linear Regression as the initial w for the pocket algorithm? I did it that way, but got no improvement. Maybe I made some mistakes.
#6
Yes, I did. You should probably take a look at your implementation of the pocket algorithm. I also got no improvement at first, but after I tweaked the code a little, it worked.
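In case it helps with debugging, here is a rough sketch of one possible way to code the pocket algorithm starting from the regression weights (not necessarily how anyone in this thread did it; the names, stopping rule, and iteration count are just illustrative).
Code:
import numpy as np

def pocket(X, y, w_init, max_iters=1000, rng=None):
    """PLA updates, but keep ("pocket") the best weights seen so far.
    X has a leading column of 1s, y holds +/-1 labels, and w_init can be
    the linear regression weights."""
    rng = np.random.default_rng() if rng is None else rng

    def in_sample_error(w):
        return np.mean(np.sign(X @ w) != y)

    w = w_init.copy()
    best_w, best_err = w.copy(), in_sample_error(w)
    for _ in range(max_iters):
        mistakes = np.flatnonzero(np.sign(X @ w) != y)
        if mistakes.size == 0:
            break                         # data perfectly separated; nothing left to fix
        i = rng.choice(mistakes)          # pick a random misclassified point
        w = w + y[i] * X[i]               # standard PLA update
        err = in_sample_error(w)
        if err < best_err:                # pocket weights only change when the error improves
            best_w, best_err = w.copy(), err
    return best_w, best_err
A common pitfall is replacing the pocket weights on every PLA update rather than only when the in-sample error strictly improves; that reduces to plain PLA on non-separable data and can easily end up worse than the regression weights you started from.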
#7
Any one of these three can happen:
1) The linear regression weights are optimal.
2) The linear regression weights are not optimal, and the PLA/pocket algorithm can improve the weights.
3) The linear regression weights are not optimal, and the PLA/pocket algorithm cannot improve the weights.
In practice, we will not know which case we are in, because actually finding the optimal weights is an NP-hard combinatorial optimization problem. However, no matter which case we are in, other than some extra CPU cycles there is no harm done in running the pocket algorithm on the regression weights to see if they can be improved.
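As a concrete usage pattern, one can always try both and keep whichever weights do better in sample. The snippet below is a hypothetical example reusing the illustrative helpers sketched earlier in the thread, on made-up toy data.
Code:
import numpy as np

# Toy data: two noisy Gaussian blobs with +/-1 labels (purely illustrative).
rng = np.random.default_rng(0)
X_raw = np.vstack([rng.normal(+1.0, 1.5, size=(100, 2)),
                   rng.normal(-1.0, 1.5, size=(100, 2))])
y = np.hstack([np.ones(100), -np.ones(100)])
X = np.hstack([np.ones((200, 1)), X_raw])   # add the constant coordinate x0 = 1

# Hypothetical helpers from the sketches earlier in the thread.
w_lin = linear_regression_weights(X, y)
w_pocket, err_pocket = pocket(X, y, w_init=w_lin, max_iters=1000)

err_lin = np.mean(np.sign(X @ w_lin) != y)
# err_pocket <= err_lin by construction: the pocket starts at w_lin and only
# swaps in new weights when the in-sample error strictly improves.
print(f"in-sample error: regression {err_lin:.3f}, pocket {err_pocket:.3f}")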
__________________
Have faith in probability
#8
#9
You can use the weights produced by logistic regression for classification.
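A minimal sketch of that idea, assuming logistic regression is trained with batch gradient descent on the cross-entropy error (the learning rate and iteration count here are arbitrary): the classification is then just the sign of the signal, i.e. predict +1 whenever the estimated probability exceeds 1/2.
Code:
import numpy as np

def logistic_regression_weights(X, y, eta=0.1, iters=1000):
    """Batch gradient descent on the cross-entropy error
    E(w) = (1/N) * sum_n ln(1 + exp(-y_n * w.x_n)).
    X has a leading column of 1s, y holds +/-1 labels."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        s = y * (X @ w)                                          # y_n * w.x_n for every point
        grad = -np.mean((y[:, None] * X) / (1 + np.exp(s))[:, None], axis=0)
        w -= eta * grad
    return w

def classify(w, X):
    """Use the logistic regression weights as a linear classifier."""
    return np.sign(X @ w)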
__________________
Have faith in probability