#1
If I take the output of linear regression as the initial weights, I usually have only a few (1-5 out of 100) misclassified points. But it does not help PLA, because the first addition of x_i to w destabilizes the "almost good" value of w. I can even invert the sign of the linear regression output and it does not change the number of iterations (even though the number of misclassified points on the first iteration goes from 1-5 to 95-99; I verified this).
But if I use (linear regression output) * N, it does help PLA and greatly reduces the number of iterations. Is it expected that using the linear regression output as w should greatly change the number of iterations? If yes, does it mean I should look for other bugs in my code? Should the output of linear regression be used as is, or should it be multiplied to make it "stronger"?
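For concreteness, here is a minimal sketch in Python of the comparison I mean (this is not the homework code itself; the toy data set, the helper names, and N = 100 are my own assumptions for illustration):

```python
import numpy as np

def linear_regression_weights(X, y):
    # One-shot least squares fit: w = pinv(X) @ y
    return np.linalg.pinv(X) @ y

def pla(X, y, w_init, rng):
    # Perceptron learning algorithm starting from w_init;
    # returns the final weights and the number of updates performed.
    w = w_init.astype(float).copy()
    updates = 0
    while True:
        misclassified = np.where(np.sign(X @ w) != y)[0]
        if misclassified.size == 0:
            return w, updates
        i = rng.choice(misclassified)
        w += y[i] * X[i]  # single PLA correction step
        updates += 1

# Toy linearly separable data: N points in [-1, 1]^2 plus a bias coordinate.
rng = np.random.default_rng(0)
N = 100
X = np.hstack([np.ones((N, 1)), rng.uniform(-1, 1, size=(N, 2))])
w_target = rng.standard_normal(3)
y = np.sign(X @ w_target)

w_lr = linear_regression_weights(X, y)
_, iters_as_is = pla(X, y, w_lr, rng)        # LR weights used as-is
_, iters_scaled = pla(X, y, N * w_lr, rng)   # LR weights scaled by N
_, iters_zero = pla(X, y, np.zeros(3), rng)  # plain PLA from w = 0, for comparison
print("as-is:", iters_as_is, "scaled by N:", iters_scaled, "from zero:", iters_zero)
```

The intuition seems to be that when the starting weights are large compared to a single x_i, one PLA update moves the decision boundary only slightly, so a nearly correct starting point is not destroyed by the first few corrections.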
#2
#3
This issue is discussed in another thread. Short answer: scaling the weights by N might make a difference, but for the homework question, we should follow the directions as given and not scale.
#4
Anne!
#5
Hi Michael!
My post was from the last iteration of the course (look at the date). I'm not taking it this time. I only happened to see your post because I was randomly surfing around. Good luck to all the DA students taking this course now.
#6
Oh, my bad. I did see your exalted Senior Member status and I thought that meant T.A. or such.