  #5  
02-12-2013, 03:27 AM
gah44
Invited Guest
 
Join Date: Jul 2012
Location: Seattle, WA
Posts: 153
Re: LRA -> PLA Effect of Alpha

Quote:
Originally Posted by tcristo
I originally had my \alpha set at one. I was surprised that running the LRA first to preset the weights and then running the PLA didn't significantly decrease the number of iterations required. I am getting a 50% reduction or thereabouts and expected an order of magnitude reduction. When you view it graphically, the LRA does what seems like 98+% of the work most of the time.

(snip)
I wondered about this in the class discussion, but I only noticed this thread now.

Since the runs are done with \alpha=1, the effect, as you note, is small. What seems to happen is that if the LRA solution already classifies every point correctly, PLA needs no iterations at all; otherwise it takes roughly as many iterations as it would starting from zero weights. The ~50% reduction comes from the fraction of runs in which no PLA iterations are needed.
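
If anyone wants to check this numerically, here is a minimal sketch of the experiment, not the homework code itself: it generates a random linearly separable target in 2D, counts PLA updates (\alpha=1) starting from zero weights versus weights preset by linear regression, and reports how often the regression fit already separates the data. The function names and the choice of always updating on the first misclassified point are my own; a random pick works just as well.

Code:
import numpy as np

def generate_data(n, rng):
    # Points in [-1, 1]^2 with a bias column; labels come from a random target line.
    X = np.c_[np.ones(n), rng.uniform(-1, 1, (n, 2))]
    p1, p2 = rng.uniform(-1, 1, (2, 2))
    w_target = np.array([p2[0] * p1[1] - p1[0] * p2[1],
                         p2[1] - p1[1],
                         p1[0] - p2[0]])
    return X, np.sign(X @ w_target)

def lra_weights(X, y):
    # Linear regression via the pseudo-inverse, used to preset the PLA weights.
    return np.linalg.pinv(X) @ y

def pla(X, y, w, max_iter=10000):
    # PLA with alpha = 1; returns final weights and the number of updates used.
    for t in range(max_iter):
        mis = np.where(np.sign(X @ w) != y)[0]
        if mis.size == 0:
            return w, t          # data already separated: no (further) updates
        i = mis[0]               # first misclassified point
        w = w + y[i] * X[i]
    return w, max_iter

rng = np.random.default_rng(0)
from_zero, from_lra, lra_separates = [], [], 0
for _ in range(1000):
    X, y = generate_data(10, rng)
    _, t0 = pla(X, y, np.zeros(3))
    _, t1 = pla(X, y, lra_weights(X, y))
    from_zero.append(t0)
    from_lra.append(t1)
    lra_separates += (t1 == 0)

print("mean PLA updates from zero weights:", np.mean(from_zero))
print("mean PLA updates from LRA weights: ", np.mean(from_lra))
print("fraction of runs where LRA already separates:", lra_separates / 1000)

In runs like this you can see the split directly: t1 is either 0 (LRA already separates the data) or of the same order as t0, which is consistent with the ~50% average reduction rather than an order of magnitude.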