LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 3 - The Linear Model (http://book.caltech.edu/bookforum/forumdisplay.php?f=110)
-   -   Pocket Algorithm and Proof of convergence (http://book.caltech.edu/bookforum/showthread.php?t=4291)

udaykamath 05-16-2013 02:39 PM

Pocket Algorithm and Proof of convergence
 
Prof Yaser

Can you suggest any papers that show the pocket algorithm has better convergence or optimality properties?

Keeping an independent validation set, verifying on it at each iteration, and retaining the best iteration so far seems to have some convergence property, since pushing the weights above or below changes the dynamics. But I wanted to know whether anyone has already done a rigorous analysis of this.

Thanks a ton!
Uday Kamath
PhD Candidate

htlin 05-16-2013 04:32 PM

Re: Pocket Algorithm and Proof of convergence
 
Quote:

Originally Posted by udaykamath (Post 10850)
Prof Yaser

Can you suggest any papers that show the pocket algorithm has better convergence or optimality properties? [...] I wanted to know whether anyone has already done a rigorous analysis of this.

The following paper studies algorithms for the perceptron model from an optimization perspective:

http://www.csie.ntu.edu.tw/~htlin/pa...ijcnn07rcd.pdf

To understand the pocket algorithm, including variants that work better in practice than the naive one introduced in the book, a good starting point is its original paper:

http://dx.doi.org/10.1109/72.80230
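For reference, the basic pocket idea discussed above (run PLA updates, but keep in your "pocket" the best weight vector seen so far by in-sample error) can be sketched as follows. This is an illustrative sketch, not the exact algorithm from either paper; the function name and signature are my own, and `X` is assumed to carry a leading bias column of ones with labels in {-1, +1}:

```python
import numpy as np

def pocket_algorithm(X, y, max_iters=1000, rng=None):
    """Pocket variant of PLA: perform perceptron updates, but retain
    ("pocket") the weights with the lowest in-sample error seen so far."""
    rng = np.random.default_rng(rng)
    w = np.zeros(X.shape[1])                      # current PLA weights
    best_w = w.copy()
    best_err = np.mean(np.sign(X @ w) != y)       # in-sample error of pocketed weights
    for _ in range(max_iters):
        mis = np.flatnonzero(np.sign(X @ w) != y)
        if mis.size == 0:
            return w                              # linearly separable: PLA converged
        i = rng.choice(mis)                       # pick a random misclassified point
        w = w + y[i] * X[i]                       # standard PLA update
        err = np.mean(np.sign(X @ w) != y)
        if err < best_err:                        # new best: update the pocket
            best_err, best_w = err, w.copy()
    return best_w                                 # best weights, not the last ones
```

The key difference from plain PLA is the final line: on non-separable data, plain PLA returns whatever weights it happens to end on, while the pocket returns the best-performing weights encountered during the run (at the cost of evaluating the in-sample error after each update).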

Hope this helps.

udaykamath 05-16-2013 06:24 PM

Re: Pocket Algorithm and Proof of convergence
 
Prof Lin,
Thanks for the first paper; it is very helpful. I have read the original pocket algorithm paper, but I was more interested in theoretical studies that give the pocket algorithm an edge, along the lines below:
1. By keeping a validation set and tracking the learning rate, the model complexity is controlled, an approximate generalization curve is simulated, and the model that best avoids both overfitting and underfitting is chosen. Is there a paper or theory that has explored this?

2. By keeping a validation set, just as in the book or in Prof Yaser's lectures, there is implicit regularization on the weights to prevent overfitting. Is there a paper or line of research that has explored this?

Thanks a ton!
Uday Kamath



The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.