
LR and PLA with scaled input space
A post at another forum:
Quote:
Put these together and you conclude that, as the input scale α goes up and down, the impact of the LR solution vector w_lin on PLA goes down and up, respectively, and significantly so. At the large extreme, the LR solution behaves like the zero vector, so you get the original PLA iterations. As α gets smaller, w_lin kicks in as a good initial condition (with nontrivial size) and you save some PLA iterations. As α diminishes further, PLA will take longer to correct the misclassified points that LR didn't get, simply because each PLA update produces a relatively smaller movement of the weight vector.
__________________
Where everyone thinks alike, no one thinks very much 
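The scaling argument in the quote can be checked numerically. Below is a minimal sketch, not from the original thread: it builds separable 2D data with an enforced margin, computes the linear-regression (pseudoinverse) weights, and runs PLA both from the zero vector and warm-started at w_lin for several values of the input scale α. The helper names, the margin construction, and the choice to scale every input coordinate (including the constant one) are my own assumptions.

```python
import numpy as np

def pla(X, y, w0, max_iters=100_000):
    """Perceptron learning: fix one misclassified point per update."""
    w = w0.astype(float).copy()
    for t in range(max_iters):
        mis = np.flatnonzero(np.sign(X @ w) != y)
        if mis.size == 0:
            return w, t                      # converged after t updates
        w += y[mis[0]] * X[mis[0]]           # PLA step: w <- w + y_i * x_i
    raise RuntimeError("PLA did not converge")

# Separable toy data with an enforced margin, so PLA is guaranteed to stop.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(400, 2))
w_true = np.array([0.1, 1.0, -1.0])
X_all = np.column_stack([np.ones(len(pts)), pts])
keep = np.abs(X_all @ w_true) > 0.2          # drop points too close to the line
X = X_all[keep][:100]
y = np.sign(X @ w_true)

results = {}
for alpha in (10.0, 1.0, 0.1):
    Xs = alpha * X                           # scale the whole input space
    w_lin = np.linalg.pinv(Xs) @ y           # linear-regression solution
    _, it_zero = pla(Xs, y, np.zeros(3))     # plain PLA, w0 = 0
    _, it_lr = pla(Xs, y, w_lin)             # PLA warm-started at w_lin
    results[alpha] = (it_zero, it_lr)
    print(f"alpha={alpha:>4}: updates from 0 = {it_zero}, from w_lin = {it_lr}")
```

With the whole matrix scaled, PLA from the zero vector takes the same number of updates at every α (each step scales along with the data), so any difference in the printed counts isolates the effect of the w_lin warm start, which is what the quote's argument is about.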

