Mallah comments on Avoiding doomsday: a "proof" of the self-indication assumption - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Sure.
Wrong. Your epistemic situation is no longer the same after the announcement.
In a single-run (one-small-world) scenario, the coin has a 50% chance of coming up heads and a 50% chance of coming up tails. (In MWI or a large universe with many similar situations, it would come up both ways, which changes the results. The MWI predictions match yours but don't back the SIA.) Here I assume the single-run case.
The prior for the coin result is 0.5 for heads, 0.5 for tails.
Before the killing, P(red|heads) = P(red|tails) = 0.01 and P(blue|heads) = P(blue|tails) = 0.99. So far we agree.
P(red|before) = 0.5 (0.01) + 0.5 (0.01) = 0.01
Afterwards, P'(red|heads) = 0, P'(red|tails) = 1, P'(blue|heads) = 1, P'(blue|tails) = 0.
P(red|after) = 0.5 (0) + 0.5 (1) = 0.5
So after the killing, you should expect either color door to be 50% likely.
This, of course, is exactly what the SIA denies. The SIA is obviously false.
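The single-run bookkeeping can be sanity-checked with a quick simulation. This is a sketch, assuming the standard setup: 1 observer behind a red door, 99 behind blue doors, heads kills everyone behind red doors (so P'(red|heads) = 0), tails kills everyone behind blue doors.

```python
import random

def sample_survivor_door():
    """One run: 1 person behind a red door, 99 behind blue doors.
    Heads kills those behind red doors; tails kills those behind blue.
    Return the door colour of a survivor chosen uniformly at random."""
    coin = random.choice(["heads", "tails"])
    survivors = ["blue"] * 99 if coin == "heads" else ["red"] * 1
    return random.choice(survivors)

trials = 100_000
red = sum(sample_survivor_door() == "red" for _ in range(trials))
print(red / trials)  # ≈ 0.5: either coin result is equally likely, and in a
                     # single run you are guaranteed to be a survivor either way
```

The point the simulation makes concrete is that in a single run, a surviving observer's door colour is determined entirely by the coin, so survival carries no information that favors "many survivors" worlds.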
So why does the result seem counterintuitive? Because in practice, and certainly in the environment where we evolved and were trained, single-shot situations didn't occur.
So let's look at the MWI case. Heads and tails both occur, but each with 50% of the original measure.
Before the killing, we again have P(heads) = P(tails) = 0.5
and P(red|heads) = P(red|tails) = 0.01 and P(blue|heads) = P(blue|tails) = 0.99.
Afterwards, P'(red|heads) = 0, P'(red|tails) = 1, P'(blue|heads) = 1, P'(blue|tails) = 0.
Huh? Didn't I say it was different? It sure is, because afterwards we no longer have P(heads) = P(tails) = 0.5. On the contrary, most of the conscious measure (i.e., the number of people) now resides behind the blue doors. The effective probabilities are now P(heads) = 0.99 and P(tails) = 0.01.
P(red|after) = 0.99 (0) + 0.01 (1) = 0.01
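The measure-weighted arithmetic can be written out explicitly. A sketch, again assuming 1 red-door and 99 blue-door observers, with heads killing red and tails killing blue:

```python
# Measure-weighted ("both branches occur") version of the calculation.
# Each branch starts with 50% of the original measure; the killing then
# leaves surviving measure proportional to the survivor count in each branch.
measure_heads = 0.5 * 99   # heads branch: the 99 blue-door people survive
measure_tails = 0.5 * 1    # tails branch: the 1 red-door person survives
total = measure_heads + measure_tails

p_heads = measure_heads / total           # effective P(heads)
p_tails = measure_tails / total           # effective P(tails)
p_red_after = p_heads * 0 + p_tails * 1   # all post-killing reds are in the tails branch
print(p_heads, p_red_after)  # 0.99 0.01
```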
No; you need to apply Bayes' theorem here. Intuitively, before the killing you are 99% sure you're behind a blue door, and if you survive you should take that as evidence that, "yay!", the coin in fact did not land tails (which would have killed those behind blue doors). Mathematically, you just have to remember to use your old posteriors as your new priors:
P(red|survival) = P(red)·P(survival|red)/P(survival) = 0.01·0.5/0.5 = 0.01
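The same update written out numerically (a sketch using the numbers above):

```python
# Bayesian update: the old posteriors become the new priors.
p_red = 0.01                      # prior: you're behind the red door
p_surv_given_red = 0.5            # red survives only if the coin lands tails
p_surv = 0.01 * 0.5 + 0.99 * 0.5  # total P(survival): half the measure dies either way
p_red_given_surv = p_red * p_surv_given_red / p_surv
print(p_red_given_surv)  # 0.01
```

The likelihood ratio is 1 (red- and blue-door observers each survive with probability 0.5), so the prior is unchanged by surviving.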
So SIA + Bayesian updating happens to agree with the "quantum measure" heuristic in this case.
However, I am with Nick Bostrom in rejecting the SIA in favor of his "Observation Equation" derived from the SSSA, precisely because that is what maximizes the total wealth of your reference class (at least when you are not choosing whether to exist or to create duplicates).