Dragging up anthropic questions and quantum immortality (QI): suppose I am Schrödinger's cat. I enter the box ten times (each time it has a .5 probability of killing me), and survive every time. If I started with a .5 belief in QI, my belief is now 1024/1025.
But if you are watching, your belief in QI should not change. (If QI is true, the only outcome I can observe is surviving, so P_me(I survive | QI) = 1. But someone else can observe my death even if QI is true, so P_you(I survive | QI) = 1/1024 = P_you(I survive | ~QI).)
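To make the arithmetic explicit, here is a minimal sketch of both updates (my illustration, not from the original post; exact fractions via Python's `fractions` module just keep Bayes' rule readable):

```python
from fractions import Fraction

prior = Fraction(1, 2)                      # starting belief in QI
p_survive_not_qi = Fraction(1, 2) ** 10     # 1/1024: ten 50/50 trials with QI false

# The cat's update: if QI is true, survival is the only outcome the cat can observe.
p_me_survive_qi = Fraction(1)
posterior_me = (prior * p_me_survive_qi) / (
    prior * p_me_survive_qi + (1 - prior) * p_survive_not_qi
)
print(posterior_me)                         # 1024/1025

# The observer's update: QI doesn't change what an outsider can see,
# so the likelihoods are equal and the evidence carries no information.
p_you_survive_qi = p_survive_not_qi
posterior_you = (prior * p_you_survive_qi) / (
    prior * p_you_survive_qi + (1 - prior) * p_survive_not_qi
)
print(posterior_you)                        # 1/2
```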
Aumann's agreement theorem says that if we share priors and our posteriors are common knowledge, we must end up with the same posteriors. Aumann doesn't require that we share observations, but in this case we're doing that too. So, what should we each end up believing? If you update in my direction, then every time anybody does something risky and survives, your belief in QI should go up. But if you don't, then I'm not allowed to update my belief in QI even if I survive the box once a day for a thousand years. Neither of those seems sensible.
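As a concrete sketch of the first horn (my illustration, assuming the observer simply adopts the survivor's likelihood P(survive | QI) = 1 for each survived 50/50 risk):

```python
from fractions import Fraction

# If you update "in my direction", each survived 50/50 risk anyone reports
# multiplies the odds on QI by 2, so your credence climbs toward 1.
posterior = Fraction(1, 2)
for trial in range(1, 11):
    posterior = posterior / (posterior + (1 - posterior) * Fraction(1, 2))
    print(trial, posterior)                 # 2/3, 4/5, 8/9, ..., 1024/1025
```

After ten such reports the observer matches the survivor at 1024/1025, which is exactly the "every survived risk confirms QI" consequence described above.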
Does Aumann make an implicit assumption that we agree on all possible values of P(evidence | model)? If so, is that a safe assumption even in normal applications? (Granted, the assumptions of "common priors" and "Bayesian rationalists" are already unsafe, so this might not cost much.)
I don't think Aumann's agreement theorem is the problem here.
"If QI is true, the only outcome I can observe is surviving."
What does it mean for QI to be true or false? What would you expect to happen differently? Certainly, whether or not QI is true, the only outcome you can observe is surviving, so I don't see how you're updating your belief.