David_Chapman comments on Probability, knowledge, and meta-probability - Less Wrong

38 Post author: David_Chapman 17 September 2013 12:02AM




Comment author: David_Chapman 16 September 2013 10:40:54PM 1 point

> Are you claiming there's no prior distribution over sequences which reflects our knowledge?

No. Well, not so long as we're allowed to take our own actions into account!

I want to emphasize, since many commenters seem to have misunderstood me on this, that there's an obvious, correct solution to this problem (which I made explicit in the OP). I deliberately made the problem as simple as possible in order to present the A_p framework clearly.

> Are we talking about the Laplace vs. fair coins?

Not sure what you are asking here, sorry...

Comment author: Eliezer_Yudkowsky 16 September 2013 11:03:16PM 5 points

> Are you claiming there's no prior distribution over sequences which reflects our knowledge?

> No. Well, not so long as we're allowed to take our own actions into account!

Heh! Yes, traditional causal models have structure beyond what is present in the corresponding probability distribution over those models, though this has to do with computing counterfactuals rather than meta-probability or estimate instability. Work continues at MIRI decision theory workshops on the search for ways to turn some of this back into probability, but yes, in my world causal models are things we assign probabilities to, over and beyond probabilities we assign to joint collections of events. They are still models of reality to which a probability is assigned, though. (See Judea Pearl's "Why I Am Only A Half-Bayesian".)

Comment author: IlyaShpitser 16 September 2013 11:12:36PM 2 points

I don't really understand what "being Bayesian about causal models" means. What makes the most sense (i.e., what people typically do) is:

(a) "be Bayesian about statistical models", and

(b) Use additional assumptions to interpret the output of (a) causally.


(a) makes sense because I understand how evidence helps me select among sets of statistical alternatives.

(b) also makes sense, but then no one will accept your answer without actually verifying the causal model by experiment -- because your assumptions linking the statistical model to a causal one may not be true. And this game of verifying these assumptions doesn't seem like a Bayesian kind of game at all.

I don't know what it means to use Bayes theorem to select among causal models directly.
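For concreteness, step (a) above is ordinary Bayesian model selection among statistical alternatives. A minimal sketch (the hypotheses, data, and numbers here are hypothetical, not from the thread): three candidate models of a coin's bias, updated on an observed sequence of flips.

```python
# Bayesian selection among statistical models of a coin.
# Each hypothesis is a candidate value for P(heads).
hypotheses = {"fair": 0.5, "biased_heads": 0.9, "biased_tails": 0.1}
prior = {name: 1.0 / 3.0 for name in hypotheses}

flips = [1, 1, 0, 1, 1, 1, 0, 1]  # 1 = heads, 0 = tails

def likelihood(p_heads, flips):
    """Probability of the observed sequence under a given bias."""
    out = 1.0
    for f in flips:
        out *= p_heads if f == 1 else 1.0 - p_heads
    return out

# Bayes' theorem: posterior ∝ prior × likelihood, then normalize.
unnorm = {name: prior[name] * likelihood(p, flips)
          for name, p in hypotheses.items()}
z = sum(unnorm.values())
posterior = {name: w / z for name, w in unnorm.items()}
```

With six heads in eight flips, the posterior shifts toward "biased_heads" over "fair", and "biased_tails" becomes very unlikely; this is the uncontroversial part, before any causal interpretation is layered on in step (b).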

Comment author: Eliezer_Yudkowsky 16 September 2013 11:23:30PM 3 points

It means that you figure out which causal models look more or less like what you observed.

More generally: There's a language of causal models which, we think, allows us to describe the actual universe, and many other universes besides. Some of these models are simpler than others. Any given sequence of experiences has some probability of being encountered in a given causal universe.
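The selection Eliezer describes can be sketched the same way: each causal model assigns a probability to any observed sequence, so Bayes' theorem scores the models against the data. A toy illustration (the two models and the data are hypothetical; note it only separates models that disagree about the observational distribution, which is exactly where IlyaShpitser's objection about interventions bites):

```python
import math

# Two toy "causal universes" over binary (x, y) observations:
#   M1: y is caused by x (y copies x with 10% noise)
#   M2: x and y are independent fair coins
# Each assigns a probability to any observed (x, y) pair.

def p_m1(x, y):
    # P(x) * P(y | x) under M1
    return 0.5 * (0.9 if y == x else 0.1)

def p_m2(x, y):
    # P(x) * P(y) under M2
    return 0.5 * 0.5

data = [(0, 0), (1, 1), (1, 1), (0, 0), (1, 0), (0, 0)]

def log_likelihood(model, data):
    return sum(math.log(model(x, y)) for x, y in data)

# A prior over causal models; a complexity penalty could go here.
prior = {"M1": 0.5, "M2": 0.5}
ll = {"M1": log_likelihood(p_m1, data), "M2": log_likelihood(p_m2, data)}
unnorm = {m: prior[m] * math.exp(ll[m]) for m in prior}
z = sum(unnorm.values())
posterior = {m: unnorm[m] / z for m in unnorm}
```

Because x and y agree in five of six observations, M1 ends up favored over M2. Two causal models that induce the same distribution over observations would stay tied under this procedure no matter how much data arrives, which is why the extra structure in causal models (counterfactuals, interventions) goes beyond the probability distribution itself.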