Rethinking statistical learning theory: learning using statistical invariants
New paper:
Rethinking statistical learning theory: learning using statistical invariants
Vladimir Vapnik and Rauf Izmailov
https://link.springer.com/article/10...99401857420

After doing the edX course and reading the book and e-chapters, I was of course excited to find Vapnik's possibly important new approach to ML. The two main ideas, which I believe are new:

1) vSVM: the SVM, but with the V-matrix
2) Using statistical invariants to improve convergence without extra training samples: LUSI, Learning Using Statistical Invariants

Here is the paper, which I have not read, because I can't get a copy that fits my budget (less than the $39 from Springer):

Vapnik, V. & Izmailov, R. Mach Learn (2018). https://doi.org/10.1007/s10994-018-5742-0

However, Vapnik has given at least three related lectures in late 2018 (including slides with complicated math), one of which is on YouTube here: https://www.youtube.com/watch?v=rNd7PDdhl4c

Most intriguing to me is his comment suggesting that these new techniques are more powerful than deep neural networks. I didn't think that was currently possible in general, or at least not in certain domains, e.g. image recognition. I'm probably missing something.

Can anyone, or the authors :), comment in detail on how this paper fits into the framework of ideas presented in the book?
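Since I haven't read the paper, this is just my rough understanding of idea 2 from the lecture slides: a "statistical invariant" constrains the trained predictor to reproduce the empirical expectation of some chosen predicate psi on the training data, rather than only minimizing training error. A toy sketch of what checking such an invariant might look like (the names `psi_values` and `check_invariant` are mine, not from the paper):

```python
import numpy as np

# Sketch of the invariant condition as I understand it from the talk:
#     (1/n) * sum_i psi(x_i) * f(x_i)  ~=  (1/n) * sum_i psi(x_i) * y_i
# i.e. the predictor f should match the labels' empirical expectation
# weighted by the predicate psi. Names here are illustrative only.

def check_invariant(psi_values, predictions, labels, tol=0.05):
    """Return True if the predictor satisfies the invariant for this psi."""
    lhs = np.mean(psi_values * predictions)  # E_hat[psi(x) f(x)]
    rhs = np.mean(psi_values * labels)       # E_hat[psi(x) y]
    return abs(lhs - rhs) <= tol

# Toy example with predicate psi(x) = x (a first-moment invariant).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.0, 1.0])
f = np.array([0.05, 0.95, 0.05, 0.95])  # some predictor's estimates

print(check_invariant(x, f, y))  # -> True (0.975 vs 1.0, within tol)
```

In the LUSI setting, as far as I can tell, such constraints are imposed during training (with several predicates at once) rather than checked afterwards; this snippet only illustrates the condition itself.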
Re: Rethinking statistical learning theory: learning using statistical invariants
Which textbook does he talk about at 26:28?

The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.