sakumar, 06-07-2012, 11:24 AM
Questions on the Bayesian Prior

I asked the TA this during the lecture but I am not sure I understood his answer.

Say that in gradient descent (or the perceptron) we start with an initial guess for w. We then modify w as we process the training data until we arrive at a satisfactory w.
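
To make sure we're talking about the same thing, here is a rough sketch of the kind of procedure I have in mind (the toy data and the zero starting point are just made up for illustration):

import numpy as np

# Toy data: first column of X is the constant 1 (bias), y holds +/-1 labels.
X = np.array([[1.0,  2.0,  1.0],
              [1.0, -1.0,  3.0],
              [1.0,  0.5, -2.0],
              [1.0, -3.0, -1.0]])
y = np.array([1, 1, -1, -1])

w = np.zeros(3)              # initial guess for w: a single point, not a distribution

for _ in range(100):         # iterate until w is "satisfactory" (or we give up)
    misclassified = [i for i in range(len(y)) if np.sign(X[i] @ w) != y[i]]
    if not misclassified:
        break
    i = misclassified[0]
    w = w + y[i] * X[i]      # PLA update: nudge w toward classifying point i correctly
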

I believe our initial w is not the same as the "Bayesian prior". Why is that? Is it because we are guessing a single value instead of imposing a probability distribution on the values of w? How would gradient descent change if we modeled w the way a Bayesian would, i.e., as a random variable with a prior? And in the perceptron, when we change w based on a data point that contradicts w's prediction, is that not the same as "conditioning on the data"? Why not?
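
Just to show what I imagine the "Bayesian" alternative might look like (this is only my guess, not something from the lectures): keep a prior over w and do gradient descent on the negative log-posterior instead of the plain error, so the prior shows up as an extra term in every step. Assuming logistic regression with a zero-mean Gaussian prior purely for illustration:

import numpy as np

def map_gradient_step(w, X, y, eta=0.1, prior_var=1.0):
    # One gradient step on the negative log-posterior for logistic regression
    # with +/-1 labels and a zero-mean Gaussian prior on w (MAP estimation).
    margins = y * (X @ w)
    grad_nll = -(X.T @ (y * (1.0 / (1.0 + np.exp(margins)))))   # data term
    grad_prior = w / prior_var                                    # prior term
    return w - eta * (grad_nll + grad_prior)

# tiny usage example with made-up data
X = np.array([[1.0, 2.0, 1.0], [1.0, -1.0, 3.0]])
y = np.array([1.0, -1.0])
w = map_gradient_step(np.zeros(3), X, y)

The non-Bayesian version would simply drop the grad_prior term, which is why I wonder whether the initial w plays the role of a prior at all.
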

Also, somewhat related, I think: in the last lecture Prof. Abu-Mostafa explained the second condition under which it would be OK to assume a Bayesian prior. The way I understood it is that if we have enough data points, successive updates will eventually dilute our (perhaps poor choice of) original prior, so we're OK. Is my understanding correct? And is that similar to saying that our Eout will be low if N (the number of data points) is large enough?
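
Here is a toy example of the "dilution" I mean (my own numbers, not from the lecture): estimating a coin's heads probability (true value 0.7 here) under two very different Beta priors; once there is enough data the posterior means nearly agree.

import numpy as np

rng = np.random.default_rng(0)
true_p = 0.7

for N in (10, 100, 10000):
    heads = rng.binomial(N, true_p)
    # Posterior mean of p under a Beta(a, b) prior is (a + heads) / (a + b + N).
    for a, b in ((1, 1), (50, 5)):        # uniform prior vs. strongly biased prior
        post_mean = (a + heads) / (a + b + N)
        print(N, (a, b), round(post_mean, 3))
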