dumky comments on Some thoughts on meta-probabilties - Less Wrong Discussion
If I remember correctly, Jaynes discusses this in Probability Theory and arrives at the conclusion that if a reasoning robot assigns a probability to changing its mind in a certain way, then it should update its belief now. Of course, the usual caveat applies: humans are not robots, and they don't perfectly adhere to either formal logic or plausible reasoning.
He does, it's in the chapter about the Ap distribution, which is basically a meta-probability. More precisely, Ap is the proposition that some future evidence will put the probability of A at p. Formally, P(A|Ap) = p.
From this you can show that P(A) is the expected value of p under the Ap distribution.
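As a quick numerical sketch of that last step, here is the expectation computed over a small discrete Ap distribution. The particular p values and weights are hypothetical, chosen only for illustration:

```python
# A discrete Ap distribution: each key is a value of p, each value is the
# probability mass P(Ap) assigned to ending up at that p. (Hypothetical numbers.)
ap_dist = {0.2: 0.25, 0.5: 0.50, 0.9: 0.25}

# Since P(A|Ap) = p, marginalizing over the Ap distribution gives
# P(A) = sum over p of p * P(Ap), i.e. the expected value of p.
p_a = sum(p * weight for p, weight in ap_dist.items())
print(p_a)  # 0.525
```

So even though the robot entertains a whole distribution over where p might land, the single number it should report for P(A) right now is just that distribution's mean.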
The chapter is "Inner and Outer Robots", available here:
http://www-biba.inrialpes.fr/Jaynes/cc18i.pdf
This always seemed like a real promising idea to me. Alas, I have a day job, and it isn't as a Prof.
Ideally, your current probability should equal the probability-weighted average of your possible future posteriors. This is required for consistency of probability across those evidence-producing timelines. Collectively, the set of probabilities of future experiences is your prior.
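That consistency requirement (sometimes called conservation of expected evidence) can be checked numerically. The prior and likelihoods below are hypothetical, picked only to make the arithmetic concrete:

```python
# Hypothetical numbers: a hypothesis H and one possible future observation E.
prior = 0.3            # P(H)
p_e_given_h = 0.8      # P(E|H)
p_e_given_not_h = 0.4  # P(E|~H)

# Probability of the evidence, by the law of total probability.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Posterior in each of the two possible futures, by Bayes' theorem.
post_if_e = p_e_given_h * prior / p_e
post_if_not_e = (1 - p_e_given_h) * prior / (1 - p_e)

# The probability-weighted average of the future posteriors
# recovers the current probability exactly.
expected_posterior = post_if_e * p_e + post_if_not_e * (1 - p_e)
print(expected_posterior)  # 0.3 == prior
```

If the weighted average came out different from the prior, the agent would be inconsistent: it could predict today which way its belief will drift, and should therefore have updated already, which is Jaynes's point about the reasoning robot above.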
But this article isn't talking about belief or decision-making; it's talking about communication (and perhaps encoding in a limited storage mechanism like a brain). You really don't have the power to do that calculation well, nor to communicate at this level of detail. The idea of a probability range or probability curve is one reasonable (IMO) way to summarize a large set of partly-correlated future evidence.