LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   The Final (http://book.caltech.edu/bookforum/forumdisplay.php?f=138)
-   -   On Bayesian learning (http://book.caltech.edu/bookforum/showthread.php?t=4006)

jheum 02-18-2013 11:40 PM

On Bayesian learning
 
I'd like to open a discussion exploring Bayesian Learning and its relationship to "conventional" techniques a bit further than there was time for in the video lectures.

I find the Cox-Jaynes axioms fairly compelling. Do you disagree? Is there some weakness there that I've overlooked? And if not, isn't that a fairly solid argument that if you want to use real numbers to reason about uncertainty in the form of continuous beliefs, you're pretty much compelled to use probabilities (albeit subjective ones in some cases), and that the only universally consistent way to update them is with Bayes' theorem?
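To make the update rule concrete, here's a toy numerical example (the test numbers are invented for illustration) of Bayes' theorem applied to a single belief: a diagnostic test with 95% sensitivity and 90% specificity for a condition with a 1% base rate.

```python
from fractions import Fraction

def bayes_update(prior, likelihood, evidence):
    """Posterior via Bayes' theorem: P(H|D) = P(D|H) * P(H) / P(D)."""
    return likelihood * prior / evidence

# Hypothetical numbers: 1% base rate, 95% sensitivity, 90% specificity.
p_h = Fraction(1, 100)              # prior P(H)
p_d_given_h = Fraction(95, 100)     # likelihood P(D|H)
p_d_given_not_h = Fraction(10, 100) # false-positive rate P(D|not H)

# Total probability of the data (a positive test result).
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

posterior = bayes_update(p_h, p_d_given_h, p_d)
print(posterior)  # exact rational posterior P(H|D)
```

Using exact fractions makes the point that this is the one update consistent with the sum and product rules, with no rounding ambiguity: the posterior comes out to 19/217, roughly 8.8%, despite the positive test.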

Second, the lecture mentions the example of a non-informative, uniform prior underperforming a model in which there's a discrete but unknown value, when the latter is actually the correct model. That's clearly true, and it bothered me initially because it seemed to violate the assertion I made in the previous paragraph. On reflection, however, I can't help but think that the proper way to handle such a situation in the Bayesian framework would be to place a uniform hyperprior on the hyperparameter describing the location of the delta function. I may be mistaken, but I think this fulfills the prediction that a correctly applied Bayesian framework must always perform at least as well as any other approach [at least any approach formulated using continuous, real "beliefs" or probabilities], in the sense of matching the most accurate possible prediction/estimation. (Of course, there's no assertion that the Bayesian formulation need be computationally efficient or analytically tractable in any particular case.) Do you agree, or have I misunderstood something?
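For concreteness, here's a toy sketch of that scenario (my own construction, not from the lecture): the unknown quantity is the bias of a coin, fixed at a single value (the "delta function" location), while the learner assumes a uniform Beta(1,1) prior. The posterior mean under the misspecified uniform prior still homes in on the true location as data accumulates; all names and numbers are illustrative.

```python
import random

def posterior_mean_uniform_prior(flips):
    """Posterior mean of a Bernoulli bias under a uniform Beta(1,1) prior.

    Beta is conjugate to Bernoulli, so after h heads and t tails the
    posterior is Beta(1 + h, 1 + t), whose mean is (1 + h) / (2 + h + t).
    """
    h = sum(flips)
    t = len(flips) - h
    return (1 + h) / (2 + h + t)

random.seed(0)
theta_true = 0.7  # fixed but unknown to the learner
flips = [1 if random.random() < theta_true else 0 for _ in range(10000)]

for n in (10, 100, 10000):
    print(n, round(posterior_mean_uniform_prior(flips[:n]), 3))
```

This only shows asymptotic recovery, of course; for small samples the learner who knows the correct delta-function model is strictly better, which is the gap the lecture points out.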

Thanks!

yaser 02-18-2013 11:57 PM

Re: On Bayesian learning
 
Thank you for opening this discussion.

The question of using a prior to model an unknown quantity is a key one. Not all situations that involve an unknown quantity are probabilistic. While that statement can be debated both ways in a practical situation, there are instances where it is self-evident. Chaitin's number \Omega, which provably exists and is unique (for a specific universal Turing machine) but also provably cannot be identified, provides a case where the prior suggested in the lecture (and articulated by you in terms of a hyperparameter) is patently the only correct one.

Some may view the hyperparameter approach as a legitimate way of fitting the situation in a probabilistic setup, and some may view it as "passing the buck" of the notion of unknown to the hyperparameter, making the prior itself effectively meaningless. Regardless of one's views in this matter, what is clear is that equating being unknown with having a uniform prior, which seems to be common practice in the Bayesian world, is fundamentally flawed.



Powered by vBulletin® Version 3.8.3
Copyright ©2000 - 2020, Jelsoft Enterprises Ltd.
The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.