Let us start with a (non-quantum) logical coinflip - say, look at the heretofore-unknown-to-us-personally 256th binary digit of pi, where the choice of binary digit is itself intended not to be random.
If the result of this logical coinflip is 1 (aka "heads"), we'll create 18 of you in green rooms and 2 of you in red rooms, and if the result is "tails" (0), we'll create 2 of you in green rooms and 18 of you in red rooms.
After going to sleep at the start of the experiment, you wake up in a green room.
With what degree of credence do you believe - what is your posterior probability - that the logical coin came up "heads"?
There are exactly two tenable answers that I can see, "50%" and "90%".
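The "90%" answer comes from ordinary Bayesian counting, if you grant the anthropic step of treating "which copy am I?" as uniform over all twenty copies. A minimal sketch of that arithmetic (the uniformity assumption is exactly the move the "50%" answer refuses to make):

```python
from fractions import Fraction

# Prior over the logical coin: heads (1) and tails (0) equally likely.
prior_heads = Fraction(1, 2)
prior_tails = Fraction(1, 2)

# Copies created in green rooms under each outcome (out of 20 total copies).
green_if_heads = 18
green_if_tails = 2

# Anthropic step: condition on "I woke up in a green room", treating
# "which copy am I?" as uniform over all 20 copies.
p_green_given_heads = Fraction(green_if_heads, 20)
p_green_given_tails = Fraction(green_if_tails, 20)

posterior_heads = (prior_heads * p_green_given_heads) / (
    prior_heads * p_green_given_heads + prior_tails * p_green_given_tails
)
print(posterior_heads)  # 9/10
```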
Suppose you reply 90%.
And suppose you also happen to be "altruistic" enough to care about what happens to all the copies of yourself. (If your current system cares about yourself and your future, but doesn't care about very similar xerox-siblings, then you will tend to self-modify to have future copies of yourself care about each other, as this maximizes your expectation of pleasant experience over future selves.)
Then I attempt to force a reflective inconsistency in your decision system, as follows:
I inform you that, after I look at the unknown binary digit of pi, I will ask all the copies of you in green rooms whether to pay $1 to every version of you in a green room and steal $3 from every version of you in a red room. If they all reply "Yes", I will do so.
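The inconsistency can be seen by running the expected-value arithmetic twice: once ex ante (before waking, at 50/50) and once from the green-roomer's updated 90% perspective, in both cases summing dollars over all twenty copies as the "altruistic" decision system does. A sketch, using exact fractions:

```python
from fractions import Fraction

def total_payoff(greens, reds):
    # +$1 to every green-room copy, -$3 from every red-room copy.
    return 1 * greens - 3 * reds

payoff_heads = total_payoff(18, 2)   # +12 dollars summed over copies
payoff_tails = total_payoff(2, 18)   # -52 dollars summed over copies

# Ex ante, before waking: the coin is 50/50 and the bet looks bad.
ev_before = Fraction(1, 2) * payoff_heads + Fraction(1, 2) * payoff_tails
print(ev_before)  # -20

# After waking in a green room with posterior 90% heads, the same bet looks good.
ev_after = Fraction(9, 10) * payoff_heads + Fraction(1, 10) * payoff_tails
print(ev_after)  # 28/5, i.e. +$5.60
```

So a decision system that updates to 90% on waking in a green room accepts a bet it would have rejected (and would have precommitted to reject) before the experiment began.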
Followup to: Anthropic Reasoning in UDT, by Wei Dai
Suppose that I flip a logical coin - e.g. look at some binary digit of pi unknown to either of us - and depending on the result, either create a billion of you in green rooms and one of you in a red room if the coin came up 1; or, if the coin came up 0, create one of you in a green room and a billion of you in red rooms. You go to sleep at the start of the experiment, and wake up in a red room.
Do you reason that the coin very probably came up 0? Thinking, perhaps: "If the coin came up 1, there'd be a billion of me in green rooms and only one of me in a red room, and in that case, it'd be very surprising that I found myself in a red room."
What is your degree of subjective credence - your posterior probability - that the logical coin came up 1?
There are only two answers I can see that might in principle be coherent, and they are "50%" and "a billion to one against".
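The "billion to one against" answer is the same Bayesian counting as before, scaled up. A sketch, again assuming the anthropic step of conditioning uniformly on "which copy am I?":

```python
from fractions import Fraction

n = 10**9  # a billion copies

# Coin = 1: a billion of you in green rooms, one in a red room.
# Coin = 0: one of you in a green room, a billion in red rooms.
red_if_1 = 1
red_if_0 = n

# Condition on "I woke up in a red room", uniform over the n + 1 copies.
p_red_given_1 = Fraction(red_if_1, n + 1)
p_red_given_0 = Fraction(red_if_0, n + 1)

posterior_1 = (Fraction(1, 2) * p_red_given_1) / (
    Fraction(1, 2) * p_red_given_1 + Fraction(1, 2) * p_red_given_0
)
print(posterior_1)  # 1/1000000001 -- a billion to one against
```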
Tomorrow I'll talk about what sort of trouble you run into if you reply "a billion to one".
But for today, suppose you reply "50%". Thinking, perhaps: "I don't understand this whole consciousness rigamarole, I wouldn't try to program a computer to update on it, and I'm not going to update on it myself."
In that case, why don't you believe you're a Boltzmann brain?