
LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   The Final (http://book.caltech.edu/bookforum/forumdisplay.php?f=138)
-   -   Questions on the Bayesian Prior (http://book.caltech.edu/bookforum/showthread.php?t=642)

sakumar 06-07-2012 11:24 AM

Questions on the Bayesian Prior
 
I asked the TA this during the lecture, but I am not sure I understood his answer.

Say in gradient descent (or the perceptron) we start with an initial guess for w. Then we proceed to modify w as we process the training data until we get a satisfactory w.

I believe our initial w is not the same as the "Bayesian prior". Why is that? Is it because we are guessing a specific value instead of imposing a probability distribution on the values of w? How would gradient descent change if we modeled w in a Bayesian way? In the perceptron, when we change w based on a data point that contradicts w's prediction, is that not the same as "conditioning on data"? Why not?
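
Here is roughly what I have in mind (a minimal sketch, not course code); the initial w is just a single vector, a point guess rather than a distribution over weight vectors:

Code:

import numpy as np

def pla(X, y, w0, max_iters=1000):
    """Perceptron learning algorithm (sketch).

    X  : (N, d) array of inputs (include a column of 1s for the bias term).
    y  : (N,) array of +1/-1 labels.
    w0 : initial weight vector, a single point guess rather than a prior.
    """
    w = w0.copy()
    for _ in range(max_iters):
        wrong = [(x_n, y_n) for x_n, y_n in zip(X, y)
                 if np.sign(w @ x_n) != y_n]
        if not wrong:
            break                    # every training point is classified correctly
        x_n, y_n = wrong[0]          # pick one misclassified point
        w = w + y_n * x_n            # PLA update: nudge w to agree with (x_n, y_n)
    return w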

Also, somewhat related, I think: Prof. Abu-Mostafa explained in the last lecture the second condition under which it would be OK to assume a Bayesian prior. The way I understood it is that if we have enough data points that successive updates would eventually dilute our (perhaps poorly chosen) original prior, then we're OK. Is my understanding correct? And is that similar to saying our E_{\rm out} will be low if N is large enough?

yaser 06-07-2012 11:48 AM

Re: Questions on the Bayesian Prior
 
Quote:

Originally Posted by sakumar (Post 2824)
Say in gradient descent (or the perceptron) we start with an initial guess for w. Then we proceed to modify w as we process the training data until we get a satisfactory w.

I believe our initial w is not the same as the "Bayesian prior". Why is that? Is it because we are guessing a specific value instead of imposing a probability distribution on the values of w? How would gradient descent change if we modeled w in a Bayesian way? In the perceptron, when we change w based on a data point that contradicts w's prediction, is that not the same as "conditioning on data"? Why not?

You are correct. The initial {\bf w}(0) is not the same as a prior. It does affect the outcome by preferring one local minimum over another, but that is often mitigated by picking the best of several runs.
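
To be concrete about "the best of several runs", here is a minimal sketch (an illustration only, assuming the in-sample error E_{\rm in} and its gradient are available as functions):

Code:

import numpy as np

def best_of_several_runs(Ein, grad_Ein, d, runs=10, eta=0.1, steps=1000, seed=0):
    """Restart gradient descent from several random initial weight vectors
    and keep the run that ends with the lowest in-sample error."""
    rng = np.random.default_rng(seed)
    best_w, best_err = None, np.inf
    for _ in range(runs):
        w = rng.normal(scale=0.1, size=d)     # a fresh initial guess w(0)
        for _ in range(steps):
            w = w - eta * grad_Ein(w)         # plain gradient descent step
        if Ein(w) < best_err:
            best_w, best_err = w, Ein(w)
    return best_w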

If we had a prior, and we used gradient descent to pick the hypothesis with the maximum posterior, it would be working on an augmented error of sorts, part of which comes from the prior and plays the role of a regularizer.
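
For concreteness, a minimal sketch of such an augmented error (an illustration with assumed specifics: a linear model with squared error, and a zero-mean Gaussian prior on {\bf w}, whose log-density contributes the weight-decay term):

Code:

import numpy as np

def gradient_descent_map(X, y, lam, eta=0.01, steps=5000):
    """Gradient descent on the augmented error

        E_aug(w) = (1/N) ||Xw - y||^2 + lam * w.w

    where the lam * w.w term comes from the (assumed Gaussian) prior on w
    and plays the role of a regularizer; minimizing E_aug maximizes the posterior.
    """
    N, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = (2.0 / N) * X.T @ (X @ w - y) + 2.0 * lam * w
        w = w - eta * grad
    return w

Under these assumptions, lam is set by the noise variance and the prior variance (roughly \sigma^2 / (N \sigma_{\rm prior}^2)), which is how the prior determines the amount of regularization.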

Quote:

Also, somewhat related, I think: Prof. Abu-Mostafa explained in the last lecture the second condition under which it would be OK to assume a Bayesian prior. The way I understood it is that if we have enough data points that successive updates would eventually dilute our (perhaps poorly chosen) original prior, then we're OK. Is my understanding correct? And is that similar to saying our E_{\rm out} will be low if N is large enough?

Indeed, dilution of the role of the prior is the effect we get when we have sufficient data, but it is not the same as saying that E_{\rm out} is low, as the latter depends on other factors such as noise.
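
A standard example of that dilution (an illustration with assumed specifics): estimating the mean of a Gaussian with known noise variance \sigma^2 from N points, starting from a prior {\cal N}(\mu_0, \sigma_0^2), gives the posterior mean

\mu_{\rm post} = \frac{\sigma_0^2}{\sigma_0^2 + \sigma^2/N}\,\bar{x} + \frac{\sigma^2/N}{\sigma_0^2 + \sigma^2/N}\,\mu_0

so the weight on the prior mean \mu_0 shrinks like 1/N; with enough data, even a poorly chosen prior has little influence, yet E_{\rm out} can still be large if, for instance, \sigma^2 is large.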

sakumar 06-07-2012 01:09 PM

Re: Questions on the Bayesian Prior
 
Thanks for the clarifications, Professor! :bow:

