Matt_Simpson comments on Open Thread: July 2010 - Less Wrong

Post author: komponisto 01 July 2010 09:20PM


Comment author: cupholder 06 July 2010 09:40:07AM 0 points

My implicit definition of perfect Bayesian is characterized by these propositions:

  1. There is a correct prior probability (as in, a prior before you see any evidence, e.g. an Occam prior) for every proposition
  2. Given a particular set of evidence, there is a correct posterior probability for any proposition

OK, this is interesting: I think our ideas of perfect Bayesians might be quite different. I agree that #1 is part of how a perfect Bayesian thinks, if by 'a correct prior...before you see any evidence' you have the maximum entropy prior in mind.

I'm less sure what 'correct posterior' means in #2. Am I right to interpret it as saying that given a prior and a particular set of evidence for some empirical question, all perfect Bayesians should get the same posterior probability distribution after updating the prior with the evidence?
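To make that interpretation concrete, here is a toy sketch using a conjugate Beta-Binomial model (the Beta(1, 1) prior and the coin-flip data are just illustrative stand-ins, not anything from the discussion above):

```python
from math import isclose

# Toy sketch of "same prior + same evidence => same posterior",
# using a conjugate Beta-Binomial model. The Beta(1, 1) prior is
# an illustrative stand-in for whatever the "correct" prior is.
def update_beta(alpha, beta, heads, tails):
    """Return posterior Beta parameters after coin-flip evidence."""
    return alpha + heads, beta + tails

prior = (1.0, 1.0)   # shared prior
evidence = (7, 3)    # shared evidence: 7 heads, 3 tails

# Two perfect Bayesians updating mechanically...
agent_a = update_beta(*prior, *evidence)
agent_b = update_beta(*prior, *evidence)

# ...cannot disagree: both arrive at Beta(8, 4), posterior mean 2/3.
assert agent_a == agent_b == (8.0, 4.0)
assert isclose(agent_a[0] / sum(agent_a), 2 / 3)
```

On this reading, disagreement between two such agents could only come from different priors or different evidence, never from the updating step itself.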

If we knew exactly what our priors were and how to exactly calculate our posteriors, then your steps 1-6 are exactly how we should operate. There's no model checking because there is no model.

There has to be a model because the model is what we use to calculate likelihoods.
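For instance, even the simplest likelihood calculation presupposes a model; a minimal sketch, assuming i.i.d. Bernoulli coin flips (the model and data here are invented for illustration):

```python
# Sketch: computing a likelihood requires committing to a model.
# Here the assumed model is i.i.d. Bernoulli(theta) coin flips.
def likelihood(theta, flips):
    """P(flips | theta) under the Bernoulli coin-flip model."""
    p = 1.0
    for flip in flips:
        p *= theta if flip else 1.0 - theta
    return p

flips = [1, 1, 0, 1]  # three heads, one tail

# Without the model, "P(data | theta)" is not even defined;
# with it, each theta assigns the data a definite probability.
assert likelihood(0.5, flips) == 0.5 ** 4
assert likelihood(0.75, flips) == 0.75 ** 3 * 0.25
```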

The rationale for model checking should be pretty clear ...

Agree with this whole paragraph. I am in favor of model checking; my beef is with (what I understand to be) Perfect Bayesianism, which doesn't seem to include a step for stepping outside the current model and checking that the model itself - and not just the parameter values - makes sense in light of new data.

I spent a few days wondering how it squared with the Bayesian philosophy of induction, and then what I took to be the obvious answer came to me (while discussing it with my professor, actually): we're modeling our uncertainty.

The catch here (if I'm interpreting Gelman and Shalizi correctly) is that building a sub-model of our uncertainty into our model isn't good enough if that sub-model gets blindsided with unmodeled uncertainty that can't be accounted for just by juggling probability density around in our parameter space.* From page 8 of their preprint:

If nothing else, our own experience suggests that however many different specifications we think of, there are always others which had not occurred to us, but cannot be immediately dismissed a priori, if only because they can be seen as alternative approximations to the ones we made. Yet the Bayesian agent is required to start with a prior distribution whose support covers all alternatives that could be considered.

* This must be one of the most dense/opaque sentences I've posted on Less Wrong. If anyone cares enough about this comment to want me to try and break down what it means with an example, I can give that a shot.
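As a rough sketch of what that sentence means: suppose the model family is Normal(mu, 1) with uncertainty only over mu (the data and numbers here are made up purely for illustration). Data with heavy-tailed surprises can't be absorbed by moving posterior density around over mu:

```python
import random
import statistics

random.seed(0)

# Made-up data: mostly standard normal, plus two huge outliers
# standing in for the "unmodeled uncertainty".
data = [random.gauss(0.0, 1.0) for _ in range(200)] + [50.0, -48.0]

# The agent's model family: Normal(mu, 1), uncertain only about mu.
# Juggling posterior density over mu moves the predicted center,
# but no value of mu makes a ~50-sigma observation anything but
# miraculous.
mu_hat = statistics.fmean(data)  # the best single mu on offer

worst_residual = max(abs(x - mu_hat) for x in data)
# Under Normal(mu, 1), residuals beyond ~5 are effectively
# impossible; a model check flags this, updating over mu cannot.
assert worst_residual > 10
```

The point is that the failure lives outside the parameter space: no reallocation of probability mass over mu rescues the model, which is exactly what an external model check is for.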

Comment author: Matt_Simpson 06 July 2010 04:22:27PM *  1 point

OK, this is interesting: I think our ideas of perfect Bayesians might be quite different.

They most certainly are. But it's semantics.

I agree that #1 is part of how a perfect Bayesian thinks, if by 'a correct prior...before you see any evidence' you have the maximum entropy prior in mind.

Frankly, I'm not informed enough about priors to commit to maxent, Kolmogorov complexity, or anything else.

I'm less sure what 'correct posterior' means in #2. Am I right to interpret it as saying that given a prior and a particular set of evidence for some empirical question, all perfect Bayesians should get the same posterior probability distribution after updating the prior with the evidence?

yes

There has to be a model because the model is what we use to calculate likelihoods.

aaahhh.... I changed the language of that sentence at least three times before settling on what you saw. Here's what I probably should have posted (and what I was going to post until the last minute):

There's no model checking because there is only one model - the correct model.

That is probably intuitively easier to grasp, but I think a bit inconsistent with my language in the rest of the post. The language is somewhat difficult here because our uncertainty is simultaneously a map and a territory.

The catch here (if I'm interpreting Gelman and Shalizi correctly) is that building a sub-model of our uncertainty into our model isn't good enough if that sub-model gets blindsided with unmodeled uncertainty that can't be accounted for just by juggling probability density around in our parameter space.*

For the record, I thought this sentence was perfectly clear. But I am a statistics grad student, so don't consider me representative.

Are you asserting that this is a catch for my position? Or for the "never look back" approach to priors? What you are saying seems to support my argument.

Comment author: cupholder 07 July 2010 07:10:19AM 0 points

yes

OK. I agree with that insofar as agents having the same prior entails them having the same model.

aaahhh.... I changed the language of that sentence at least three times before settling on what you saw. Here's what I probably should have posted (and what I was going to post until the last minute):

There's no model checking because there is only one model - the correct model.

That is probably intuitively easier to grasp, but I think a bit inconsistent with my language in the rest of the post. The language is somewhat difficult here because our uncertainty is simultaneously a map and a territory.

Ah, I think I get you; a PB (perfect Bayesian) doesn't see a need to test their model because whatever specific proposition they're investigating implies a particular correct model.

For the record, I thought this sentence was perfectly clear. But I am a statistics grad student, so don't consider me representative.

Yeah, I figured you wouldn't have trouble with it since you talked about taking classes in this stuff - that footnote was intended for any lurkers who might be reading this. (I expected quite a few lurkers to be reading this given how often the Gelman and Shalizi paper's been linked here.)

Are you asserting that this is a catch for my position? Or for the "never look back" approach to priors? What you are saying seems to support my argument.

It's a catch for the latter, the PB. In reality, most scientists don't have a wholly unambiguous proposition worked out that they're testing - or the proposition they are testing is not a good representation of the real situation.