
dumky comments on Some thoughts on meta-probabilities - Less Wrong Discussion

0 Post author: iarwain1 21 September 2015 05:23PM



Comment author: dumky 21 September 2015 05:48:45PM 1 point [-]

If I remember correctly, Jaynes discusses this in Probability Theory and arrives at the conclusion that if a reasoning robot assigns a probability to changing its mind in a certain way, then it should update its belief now. Of course, the general caveat applies: humans are not robots, and they don't perfectly adhere to either formal logic or plausible reasoning.

Comment author: MrMind 22 September 2015 09:32:49AM *  3 points [-]

> If I remember correctly, Jaynes discusses this in Probability Theory

He does; it's in the chapter on the Ap distribution, which is basically a meta-probability. More precisely, Ap is the proposition that future evidence will put the probability of A at p. Formally, P(A|Ap) = p.
From this you can show that P(A) is the expected value of p under the Ap distribution.
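That last identity is easy to check numerically. A minimal sketch, with a made-up discrete Ap distribution (the particular values and weights are assumptions for illustration, not from Jaynes):

```python
import numpy as np

# Hypothetical meta-probability distribution: our belief about where
# future evidence could settle the probability of A.
p_values = np.array([0.2, 0.5, 0.8])    # candidate values p for P(A)
weights = np.array([0.25, 0.50, 0.25])  # P(Ap): plausibility of each p

assert np.isclose(weights.sum(), 1.0)   # Ap's must form a distribution

# P(A) = sum_p P(A|Ap) * P(Ap) = sum_p p * P(Ap) = E[p]
p_A = float(np.dot(p_values, weights))
print(p_A)  # 0.5
```

Note that very different Ap distributions (e.g. all mass at p = 0.5, versus mass split between 0.2 and 0.8) can yield the same P(A); the extra structure only shows up in how you expect to update on future evidence.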

Comment author: buybuydandavis 23 September 2015 02:37:24AM 2 points [-]

The chapter is "Inner and Outer Robots", available here:

http://www-biba.inrialpes.fr/Jaynes/cc18i.pdf

> The outer robot, thinking about the real world, uses Aristotelian propositions referring to that world. The inner robot, thinking about the activities of the outer robot, uses propositions that are not Aristotelian in reference to the outer world; but they are still Aristotelian in its context, in reference to the thinking of the outer robot; so of course the same rules of probability theory will apply to them. The term 'probability of a probability' misses the point, since the two probabilities are at different levels.

This always seemed like a really promising idea to me. Alas, I have a day job, and it isn't as a prof.

Comment author: Dagon 21 September 2015 09:24:57PM 0 points [-]

Ideally, your current probability should equal the probability-weighted average of your possible posteriors over all possible future evidence. This is required for consistency of probability across those evidence-producing timelines. Collectively, the set of probabilities of future experiences is your prior.
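This consistency requirement (conservation of expected evidence) falls straight out of Bayes' theorem. A sketch with made-up numbers (the prior and likelihoods below are assumptions for illustration):

```python
# Hypothetical prior on hypothesis H, and likelihoods of observing a
# binary piece of future evidence E under H and under not-H.
p_H = 0.3
p_E_given_H = 0.9
p_E_given_notH = 0.2

# Marginal probability of seeing E at all.
p_E = p_E_given_H * p_H + p_E_given_notH * (1 - p_H)

# Posterior in each possible evidence timeline, via Bayes' theorem.
posterior_if_E = p_E_given_H * p_H / p_E
posterior_if_notE = (1 - p_E_given_H) * p_H / (1 - p_E)

# The probability-weighted average of the possible posteriors
# must recover the prior exactly.
avg_posterior = p_E * posterior_if_E + (1 - p_E) * posterior_if_notE
print(round(avg_posterior, 10))  # 0.3, equal to p_H
```

If the average came out different from the prior, you could foresee the direction of your own update and should have updated already, which is exactly the point dumky attributes to Jaynes above.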

But this article isn't talking about belief or decision-making; it's talking about communication (and perhaps encoding in a limited storage medium like a brain). You really don't have the capacity to do that calculation well, nor to communicate at this level of detail. The idea of a probability range or probability curve is one reasonable (IMO) way to summarize a large set of partly-correlated future evidence.