
Eliezer_Yudkowsky comments on Bayesian Flame - Less Wrong

37 Post author: cousin_it 26 July 2009 04:49PM




Comment author: Eliezer_Yudkowsky 26 July 2009 05:39:09PM 17 points

a good Bayesian must never be uncertain about the probability of any future event

Who? Whaa? Your probability is your uncertainty.

Comment author: marks 28 July 2009 07:06:50AM 1 point

I think what Shalizi means is that a Bayesian model is never "wrong", in the sense that it is a true description of the current state of the ideal Bayesian agent's knowledge. That is, if agent A says an event X has probability p and agent B says X has probability q, neither is lying even when p != q: each number faithfully reports that agent's state of knowledge. And the ideal Bayesian agent updates that knowledge perfectly by Bayes' rule (where knowledge is represented as probability distributions over states of the world). Of course, if A and B talk with each other then they should probably update on each other's beliefs.

In frequentist statistics the paradigm is that one searches for the 'true' model by looking through a space of 'false' models. In that case, if A says X has probability p and B says X has probability q != p, then at least one of them is wrong.
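The contrast marks draws can be sketched with a conjugate Beta-Bernoulli model (a minimal illustration; the priors and data counts here are made up): two agents see the same evidence, arrive at different probabilities, and neither is "wrong" - each number is just a summary of that agent's prior plus the shared data.

```python
def beta_update(alpha, beta, successes, failures):
    """Conjugate Bayes update for a Bernoulli rate under a Beta prior."""
    return alpha + successes, beta + failures

def predictive_p(alpha, beta):
    """Probability the next trial succeeds: the mean of the Beta posterior."""
    return alpha / (alpha + beta)

data = (7, 3)  # both agents observe 7 successes, 3 failures

# Agent A starts from a uniform Beta(1,1) prior; agent B from Beta(10,10).
p_a = predictive_p(*beta_update(1, 1, *data))    # 8/12  ~ 0.667
p_b = predictive_p(*beta_update(10, 10, *data))  # 17/30 ~ 0.567
```

Both posteriors are exact consequences of Bayes' rule applied to the same data; the disagreement reflects different priors, not an error by either agent.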

Comment author: orthonormal 26 July 2009 08:21:36PM 4 points

Also, didn't we already cover metauncertainty here?

Comment author: Cyan 26 July 2009 09:24:53PM 1 point

Yup. Shalizi's point is that once you've taken meta-uncertainty into account (by marginalizing over it), you have a precise and specific probability distribution over outcomes.
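Cyan's point can be made concrete with a hypothetical two-component hyperprior (the weights and parameters below are made up): however much meta-uncertainty you start with, marginalizing over it leaves one precise predictive probability.

```python
# Meta-uncertainty: 50/50 between a vague Beta(1,1) and a confident
# Beta(20,5) model of the success rate.
hyperprior = [(0.5, 1, 1), (0.5, 20, 5)]  # (weight, alpha, beta)

# Marginalize: the predictive probability of the next success is the
# weight-averaged mean of the component Betas - a single number.
p_next = sum(w * a / (a + b) for w, a, b in hyperprior)  # 0.5*0.5 + 0.5*0.8 = 0.65
```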

Comment author: Eliezer_Yudkowsky 26 July 2009 09:36:14PM 14 points

Well, yes. You have to bet at some odds. You're in some particular state of uncertainty and not a different one. I suppose the game is to make people think that being in some particular state of uncertainty corresponds to claiming to know too much about the problem? The ignorance shows up in the instability of the estimate - the way it reacts strongly to new evidence.
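One way to see the instability point (a toy sketch with made-up prior strengths): two agents can assign the same probability right now while encoding very different amounts of ignorance, and the difference shows up in how far a single new observation moves each of them.

```python
def predictive_p(alpha, beta):
    """Probability of success under a Beta(alpha, beta) state of knowledge."""
    return alpha / (alpha + beta)

ignorant = (1, 1)       # p = 0.5, backed by almost no data
confident = (100, 100)  # p = 0.5, backed by lots of data

# Observe one success and update each (Beta conjugacy: alpha += 1).
p_ignorant = predictive_p(ignorant[0] + 1, ignorant[1])     # 2/3     ~ 0.667
p_confident = predictive_p(confident[0] + 1, confident[1])  # 101/201 ~ 0.502
```

The same 0.5 estimate jumps to about 0.667 for the ignorant agent but barely budges for the confident one: the ignorance was never hidden, it was encoded in how the estimate responds to evidence.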

Comment author: Cyan 26 July 2009 10:35:19PM 6 points

I'm with you on this one. What Shalizi is criticizing is essentially a consequence of the desideratum that a single real number shall represent the plausibility of an event. I don't think the methods he's advocating dispense with the desideratum, so I view this as a delicious bullet-shaped candy that he's convinced is a real bullet and is attempting to dodge.

Comment author: Nick_Tarleton 26 July 2009 08:29:33PM 2 points

Shalizi says "Bayesian agents never have the kind of uncertainty that Rebonato (sensibly) thinks people in finance should have". My guess is that this means (something that could be described as) uncertainty as to how well-calibrated one is, which AFAIK hasn't been explicitly covered here.