Rethinking statistical learning theory: learning using statistical invariants (Vapnik)

New paper:

Rethinking statistical learning theory: learning using statistical invariants
Vladimir Vapnik and Rauf Izmailov
https://link.springer.com/article/10.1007/s10994-018-5742-0

After doing the edX course and reading the book and e-chapters, I was of course excited to find this possibly important new approach to ML from Vapnik.

The two main ideas, which I believe are new:

1) vSVM: the SVM reformulated with the V-matrix.
2) LUSI (Learning Using Statistical Invariants): using statistical invariants to improve convergence without extra training samples (see the rough sketch after this list).
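Since I can't read the paper itself, here is a rough numpy sketch of how I understand the invariant idea from the lecture slides. To be clear: all the names here (lusi_fit, rbf_kernel, the choice of predicates) and the soft-penalty formulation are mine, not the paper's. As I understand it, the actual method enforces the invariants exactly and also replaces the usual Gram-matrix weighting with the V-matrix, neither of which I attempt here.

[CODE]
# Toy sketch of "learning using statistical invariants" (LUSI),
# based on my reading of the lecture slides, NOT the paper itself.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian kernel matrix between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lusi_fit(X, y, predicates, gamma=1.0, reg=1e-3, invariant_weight=10.0):
    """Kernel ridge regression for f(x) ~ P(y=1|x), with a soft penalty
    pushing the model to reproduce the empirical invariants
        sum_i psi_k(x_i) * f(x_i)  ==  sum_i psi_k(x_i) * y_i
    for each predicate psi_k."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    # Psi[k, i] = psi_k(x_i): each row is one predicate evaluated on the data.
    Psi = np.array([[psi(x) for x in X] for psi in predicates])
    # Objective in alpha: ||K a - y||^2 + reg * a' K a
    #                     + invariant_weight * ||Psi (K a - y)||^2
    # Setting the gradient to zero gives a linear system in alpha.
    M = np.eye(n) + invariant_weight * (Psi.T @ Psi)
    A = K @ M @ K + reg * K
    b = K @ M @ y
    alpha = np.linalg.solve(A + 1e-9 * np.eye(n), b)
    return lambda Xnew: rbf_kernel(Xnew, X, gamma) @ alpha

# Toy usage: 1-D classification with two simple invariants
# (the zeroth and first moments, i.e. psi(x) = 1 and psi(x) = x).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = (X[:, 0] + 0.3 * rng.standard_normal(200) > 0).astype(float)
f = lusi_fit(X, y, predicates=[lambda x: 1.0, lambda x: x[0]])
print(f(np.array([[-1.0], [1.0]])))  # ~P(y=1|x) at x = -1 and x = +1
[/CODE]

The point of the construction, as I understand it, is that each predicate adds information the estimator must respect, so you get faster convergence from the same sample rather than needing more data. Corrections welcome from anyone who has actually read the paper.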

Here is the paper, which I have not read, because I can't get a copy that fits my budget (less than the $39 Springer charges).
Vapnik, V. & Izmailov, R. Machine Learning (2018). https://doi.org/10.1007/s10994-018-5742-0

However, Vapnik gave at least three related lectures in late 2018 (including slides with fairly involved math), one of which is on YouTube here:
https://www.youtube.com/watch?v=rNd7PDdhl4c

Most intriguing to me is his comment suggesting that these new techniques are more powerful than deep neural networks. I didn't think that was currently possible in general, or at least not in certain domains such as image recognition. I'm probably missing something.

Can anyone (or the authors) comment in detail on how this paper fits into the framework of ideas presented in the book?