LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 1 - The Learning Problem (http://book.caltech.edu/bookforum/forumdisplay.php?f=108)

 don slowik 12-09-2017 10:05 AM

I had bad luck with the ALA: for all but the smallest training data sets and with more than 2 dimensions, the weights would go scooting off to infinity.

I modified the algorithm to treat it as a regression rather than a classification problem; I changed the update criterion to:
Code:

```
s = np.dot(x[i,:], w)
if np.abs(y[i] - s) > 0.01:
    w = w + eta * (y[i] - s) * x[i,:]
    n_updates += 1
```
This worked very well: with eta set to 0.1, training sets of size N=1000 in d=10 dimensions required only 2.7 +/- 1.1 passes through the data to get every training point within the 0.01 tolerance. PLA on the same training data required about 750 iterations.

So rather than choosing a plane that separates the data, this chooses the plane that gets the correct distance (within the 0.01 tolerance) between the plane and each training data point.
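A minimal, self-contained sketch of the loop described above (the names `x`, `y`, `w`, `eta`, `n_updates` follow the snippet; the synthetic noiseless linear data, the epoch cap, and the bias column are my own assumptions for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression-style data: y is a noiseless linear function of x,
# so a weight vector achieving the tolerance exists (assumption for this demo).
N, d = 100, 3
x = np.hstack([np.ones((N, 1)), rng.normal(size=(N, d))])  # bias column + features
w_true = rng.normal(size=d + 1)
y = x @ w_true

w = np.zeros(d + 1)
eta = 0.1
tol = 0.01
n_updates = 0

for epoch in range(1000):
    clean = True
    for i in range(N):
        s = np.dot(x[i, :], w)
        if np.abs(y[i] - s) > tol:               # update only when outside tolerance
            w = w + eta * (y[i] - s) * x[i, :]   # LMS-style step from the post
            n_updates += 1
            clean = False
    if clean:                                    # every point within tolerance
        break

print(epoch + 1, n_updates)
```

On data like this the loop typically stops after a handful of passes, which matches the small iteration counts reported above.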

 don slowik 12-10-2017 06:44 AM

Though this is interesting, on further thought it seems to be quite useless: the y associated with each training point is the distance from that point to the separating plane, so you would have to know the plane to begin with.

 htlin 12-13-2017 01:34 PM

This looks like the Adaline algorithm, by the way.
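For comparison, the textbook Adaline (Widrow-Hoff / LMS) rule applies the same kind of step but uses the class labels y in {-1, +1} as the regression targets, so no distances need to be known in advance. A sketch under that assumption (the data, eta, and pass count here are illustrative choices, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linearly separable classification data with labels in {-1, +1}
N, d = 200, 2
X = np.hstack([np.ones((N, 1)), rng.normal(size=(N, d))])  # bias column + features
w_target = np.array([0.1, 1.0, -1.0])
y = np.sign(X @ w_target)

w = np.zeros(d + 1)
eta = 0.01
for _ in range(100):                     # fixed number of passes for the demo
    for i in range(N):
        s = X[i] @ w                     # linear activation (no thresholding here)
        w = w + eta * (y[i] - s) * X[i]  # Adaline / Widrow-Hoff (LMS) update

acc = np.mean(np.sign(X @ w) == y)
print(acc)
```

The threshold sign(s) is applied only when classifying, not inside the update, which is the key difference from PLA.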

 don slowik 12-18-2017 05:53 PM

Actually, it isn't that useless: if the data happens to be of that form, then Adaline is a quick way of converging to a plane that fits it. Yes, thanks for that Wikipedia reference.

 pdsubraa 12-28-2017 01:28 AM

Well said, Don!

Wikipedia reference was helpful - Thanks Htlin!
