LFD Book Forum: Out of syllabus question on Regularization vs Priors

#1
11-06-2012, 05:46 PM
 hashable Junior Member Join Date: Jul 2012 Posts: 8
Out of syllabus question on Regularization vs Priors

Since taking this course in Summer 2012, I have tried to read up more about regularization and found that there are different approaches. The most commonly used are L1 and L2 regularization (the latter covered in class under the name 'weight decay').

There appears to be a mathematical equivalence between using regularization and using prior probabilities (in the Bayesian approach). From what I understand, imposing an L2 penalty is the same as imposing a Gaussian prior on the unknown weights. Similarly, L1 corresponds to imposing a Laplacian prior.
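To make that correspondence concrete, here is a rough sketch (assuming linear regression with zero-mean Gaussian noise of variance $\sigma^2$, a zero-mean Gaussian prior of variance $\tau^2$ on each weight, and $N$ data points; the exact constants depend on these choices). MAP estimation maximizes the posterior
$$p(\mathbf{w} \mid \mathcal{D}) \propto p(\mathcal{D} \mid \mathbf{w})\, p(\mathbf{w}).$$
Taking negative logarithms, the Gaussian likelihood contributes $\frac{N}{2\sigma^2} E_{\text{in}}(\mathbf{w})$ (up to a constant) and the Gaussian prior contributes $\frac{1}{2\tau^2}\|\mathbf{w}\|_2^2$, so the MAP weights minimize
$$E_{\text{in}}(\mathbf{w}) + \frac{\sigma^2}{N\tau^2}\,\|\mathbf{w}\|_2^2,$$
which is weight decay with $\lambda = \sigma^2/(N\tau^2)$. Replacing the Gaussian prior with a Laplacian prior $p(\mathbf{w}) \propto e^{-\alpha\|\mathbf{w}\|_1}$ yields the L1 penalty instead.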

In the concluding lectures, Professor Abu-Mostafa mentioned that we have to be careful to verify that our assumptions about priors are valid when going with the Bayesian approach.

If my understanding is correct, the "danger" introduced in choosing priors is mathematically identical to the "danger" introduced by choosing some arbitrary regularization technique. In other words, we have to be as careful about using the right regularization technique as we are about choosing the right prior.

Is my understanding correct? In other words, does the Bayesian approach warrant any more caution, or do both approaches warrant the same amount/kind of caution?

PS: For future versions of the class, it would be great if another lecture were added to introduce the various regularization techniques, since in practice L1 seems to be used everywhere in "big data" applications for its sparsity benefits.
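For reference, here is a minimal sketch of that sparsity effect (my own illustration, assuming scikit-learn and synthetic data, not something from the course):

[code]
# Minimal illustration: L1 (Lasso) drives many coefficients exactly to zero,
# while L2 (Ridge) only shrinks them. Synthetic data, scikit-learn.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
N, d = 100, 20
X = rng.standard_normal((N, d))
true_w = np.zeros(d)
true_w[:3] = [2.0, -3.0, 1.5]            # only 3 of the 20 features matter
y = X @ true_w + 0.1 * rng.standard_normal(N)

lasso = Lasso(alpha=0.1).fit(X, y)       # L1 penalty
ridge = Ridge(alpha=0.1).fit(X, y)       # L2 penalty (weight decay)

print("Lasso non-zero coefficients:", int(np.sum(lasso.coef_ != 0)))
print("Ridge non-zero coefficients:", int(np.sum(ridge.coef_ != 0)))
[/code]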
#2
11-07-2012, 01:29 AM
 yaser Caltech Join Date: Aug 2009 Location: Pasadena, California, USA Posts: 1,478
Re: Out of syllabus question on Regularization vs Priors

Quote:
 Originally Posted by hashable
 Is my understanding correct? In other words, does the Bayesian approach warrant any more caution, or do both approaches warrant the same amount/kind of caution?
Thank you for this important post.

The equivalence you mention would hold if there were no regularization parameter $\lambda$ that is to be determined using validation techniques. The parameter $\lambda$ can be thought of as a reality check (data check) on the assumption that the chosen form of regularization is valid. This parameter can completely overrule the assumption (such that $\lambda = 0$) if need be. The parameter can also be incorporated in the Bayesian analysis as a hyperparameter governed by a "hyperprior."
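As a rough illustration of that data check (a minimal sketch, assuming scikit-learn's Ridge and a synthetic data set; including $\lambda = 0$ in the grid lets validation reject the regularizer entirely):

[code]
# Sketch: choose the weight-decay parameter lambda by cross-validation.
# Including lambda = 0 in the grid lets the data "overrule" the regularizer
# (alpha = 0 reduces Ridge to plain least squares).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
N, d = 50, 10
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.5 * rng.standard_normal(N)

lambdas = [0.0, 0.01, 0.1, 1.0, 10.0]
scores = [cross_val_score(Ridge(alpha=lam), X, y, cv=5).mean() for lam in lambdas]
best = lambdas[int(np.argmax(scores))]
print("lambda chosen by validation:", best)
[/code]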

Certainly more time could have been spent on regularization in the course (as well as on other deserving topics). However, I feel that the time constraint was in fact beneficial in forcing us to focus on the essentials. The main message about regularization is that it is fundamentally a heuristic, albeit one with some mathematical backbone. As you mention, different regularizers are suited to different situations, and this is determined in practice rather than in theory. That, in and of itself, is perhaps the most essential message to convey.
__________________
Where everyone thinks alike, no one thinks very much
