Let us start with a (non-quantum) logical coinflip - say, look at the heretofore-unknown-to-us-personally 256th binary digit of pi, where the choice of binary digit is itself intended not to be random.
If the result of this logical coinflip is 1 (aka "heads"), we'll create 18 of you in green rooms and 2 of you in red rooms, and if the result is "tails" (0), we'll create 2 of you in green rooms and 18 of you in red rooms.
After going to sleep at the start of the experiment, you wake up in a green room.
With what degree of credence do you believe - what is your posterior probability - that the logical coin came up "heads"?
There are exactly two tenable answers that I can see, "50%" and "90%".
Suppose you reply 90%.
And suppose you also happen to be "altruistic" enough to care about what happens to all the copies of yourself. (If your current system cares about yourself and your future, but doesn't care about very similar xerox-siblings, then you will tend to self-modify to have future copies of yourself care about each other, as this maximizes your expectation of pleasant experience over future selves.)
Then I attempt to force a reflective inconsistency in your decision system, as follows:
I inform you that, after I look at the unknown binary digit of pi, I will ask all the copies of you in green rooms whether to pay $1 to every version of you in a green room and steal $3 from every version of you in a red room. If they all reply "Yes", I will do so.
(It will be understood, of course, that $1 represents 1 utilon, with actual monetary amounts rescaled as necessary to make this happen. Very little rescaling should be necessary.)
(Timeless decision agents reply as if controlling all similar decision processes, including all copies of themselves. Classical causal decision agents, to reply "Yes" as a group, will need to somehow work out that other copies of themselves reply "Yes", and then reply "Yes" themselves. We can try to help out the causal decision agents on their coordination problem by supplying rules such as "If conflicting answers are delivered, everyone loses $50". If causal decision agents can win on the problem "If everyone says 'Yes' you all get $10, if everyone says 'No' you all lose $5, if there are conflicting answers you all lose $50" then they can presumably handle this. If not, then ultimately, I decline to be responsible for the stupidity of causal decision agents.)
Suppose that you wake up in a green room. You reason, "With 90% probability, there are 18 of me in green rooms and 2 of me in red rooms; with 10% probability, there are 2 of me in green rooms and 18 of me in red rooms. Since I'm altruistic enough to at least care about my xerox-siblings, I calculate the expected utility of replying 'Yes' as (90% * ((18 * +$1) + (2 * -$3))) + (10% * ((18 * -$3) + (2 * +$1))) = +$5.60." You reply yes.
However, before the experiment, you calculate the general utility of the conditional strategy "Reply 'Yes' to the question if you wake up in a green room" as (50% * ((18 * +$1) + (2 * -$3))) + (50% * ((18 * -$3) + (2 * +$1))) = -$20. You want your future selves to reply 'No' under these conditions.
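To make the clash explicit, here is the same pair of expected-utility calculations written out as a short script (my own illustrative sketch, not part of the original argument; all numbers are the ones given above):

```python
# Sketch: the two calculations above, side by side, so the dynamic inconsistency is visible.

n_green_heads, n_red_heads = 18, 2   # copies created if the logical coin is heads
n_green_tails, n_red_tails = 2, 18   # copies created if it is tails
pay_green, pay_red = +1, -3          # $ per green-room copy / per red-room copy if the bet is taken

# Group payoff under each coin outcome, assuming every green-roomer says "Yes"
payoff_heads = n_green_heads * pay_green + n_red_heads * pay_red    # 18*1 + 2*(-3) = +12
payoff_tails = n_green_tails * pay_green + n_red_tails * pay_red    # 2*1 + 18*(-3) = -52

# After waking in a green room and updating on the anthropic "evidence": P(heads) = 0.9
eu_after_update = 0.9 * payoff_heads + 0.1 * payoff_tails
print(eu_after_update)   # ~ +5.6  -> the awakened copy wants to say "Yes"

# Before the experiment, evaluating the strategy "say Yes if you wake in a green room": P(heads) = 0.5
eu_before = 0.5 * payoff_heads + 0.5 * payoff_tails
print(eu_before)         # -20.0   -> the pre-experiment self wants its copies to say "No"
```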
This is a dynamic inconsistency - different answers at different times - which argues that decision systems which update on anthropic evidence will self-modify not to update probabilities on anthropic evidence.
I originally thought, on first formulating this problem, that it had to do with double-counting the utilons gained by your variable numbers of green friends, and the probability of being one of your green friends.
However, the problem also works if we care about paperclips. No selfishness, no altruism, just paperclips.
Let the dilemma be, "I will ask all people who wake up in green rooms if they are willing to take the bet 'Create 1 paperclip if the logical coinflip came up heads, destroy 3 paperclips if the logical coinflip came up tails'. (Should they disagree on their answers, I will destroy 5 paperclips.)" Then a paperclip maximizer, before the experiment, wants the paperclip maximizers who wake up in green rooms to refuse the bet. But a conscious paperclip maximizer who updates on anthropic evidence, who wakes up in a green room, will want to take the bet, with expected utility ((90% * +1 paperclip) + (10% * -3 paperclips)) = +0.6 paperclips.
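The same arithmetic, for the paperclip version (again just a sketch restating the numbers above):

```python
# Paperclip payoffs if the bet is taken: +1 if heads, -3 if tails.
payoff_heads, payoff_tails = +1, -3

eu_green_roomer    = 0.9 * payoff_heads + 0.1 * payoff_tails   # ~ +0.6 -> wants the bet
eu_pre_experiment  = 0.5 * payoff_heads + 0.5 * payoff_tails   # -1.0   -> wants it refused
print(eu_green_roomer, eu_pre_experiment)
```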
This argues that, in general, decision systems - whether they start out selfish, or start out caring about paperclips - will not want their future versions to update on anthropic "evidence".
Well, that's not too disturbing, is it? I mean, the whole anthropic thing seemed very confused to begin with - full of notions about "consciousness" and "reality" and "identity" and "reference classes" and other poorly defined terms. Just throw out anthropic reasoning, and you won't have to bother.
When I explained this problem to Marcello, he said, "Well, we don't want to build conscious AIs, so of course we don't want them to use anthropic reasoning", which is a fascinating sort of reply. And I responded, "But when you have a problem this confusing, and you find yourself wanting to build an AI that just doesn't use anthropic reasoning to begin with, maybe that implies that the correct resolution involves us not using anthropic reasoning either."
So we can just throw out anthropic reasoning, and relax, and conclude that we are Boltzmann brains. QED.
In general, I find the sort of argument given here - that a certain type of decision system is not reflectively consistent - to be pretty damned compelling. But I also find the Boltzmann conclusion to be, ahem, more than ordinarily unpalatable.
In personal conversation, Nick Bostrom suggested that a division-of-responsibility principle might cancel out the anthropic update - i.e., the paperclip maximizer would have to reason, "If the logical coin came up heads then I am 1/18th responsible for adding +1 paperclip, if the logical coin came up tails then I am 1/2 responsible for destroying 3 paperclips." I confess that my initial reaction to this suggestion was "Ewwww", but I'm not exactly comfortable concluding I'm a Boltzmann brain, either.
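As a worked check of how the suggestion would cancel the update (my own arithmetic; the 1/18 and 1/2 responsibility weights are the ones quoted above):

```python
# Sketch of the division-of-responsibility calculation for the paperclip maximizer.
p_heads = 0.9   # posterior after waking in a green room

# Without responsibility weights: the anthropic update makes the bet look good.
eu_plain = p_heads * (+1) + (1 - p_heads) * (-3)                      # ~ +0.6

# With responsibility weights: each green-roomer claims only its share of the outcome.
eu_divided = p_heads * (1/18) * (+1) + (1 - p_heads) * (1/2) * (-3)   # ~ -0.1
print(eu_plain, eu_divided)   # the weighted version is negative, so the bet is refused,
                              # matching the pre-experiment preference
```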
EDIT: On further reflection, I also wouldn't want to build an AI that concluded it was a Boltzmann brain! Is there a form of inference which rejects this conclusion without relying on any reasoning about subjectivity?
EDIT2: Psy-Kosh has converted this into a non-anthropic problem!
This is my attempt at a pedagogical exposition of “the solution”. It’s overly long, and I've lost perspective completely about what is understood by the group here and what isn't. But since I've written up this solution for myself, I'll go ahead and share it.
The cases I'm describing below are altered from the OP so that they are completely non-metaphysical, in the sense that you could implement them in real life with real people. Thus there is an objective reality regarding whether money is collectively lost or won, so there is finally no ambiguity about what the correct calculation actually is.
Suppose that there are twenty different graduate students {Amy, Betty, Cindy, ..., Tony} and two hotels connected by a breezeway. Hotel Green has 18 green rooms and 2 red rooms. Hotel Red has 18 red rooms and 2 green rooms. Every night for many years, a single coin is flipped and all twenty students are assigned rooms in the corresponding hotel (heads --> Hotel Green for the night, tails --> Hotel Red for the night). Students won't know which hotel they are in, but each can see their own room color. If a student sees a green room, that student correctly deduces they are in Hotel Green with 90% probability.
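For concreteness, here is the Bayes calculation behind that 90% figure (my own sketch; the room counts are from the setup above):

```python
# P(Hotel Green | green room) by Bayes' rule.
p_hotel_green = 0.5                  # the nightly coin flip
p_green_given_hg = 18 / 20           # Hotel Green has 18 green rooms out of 20
p_green_given_hr = 2 / 20            # Hotel Red has 2 green rooms out of 20

p_green_room = p_hotel_green * p_green_given_hg + (1 - p_hotel_green) * p_green_given_hr
posterior = p_hotel_green * p_green_given_hg / p_green_room
print(posterior)   # 0.9
```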
Case 1: Suppose that every morning, Tony is allowed to bet that he is in a green room. If he bets 'yes' and is correct, he pockets $12. If he bets 'yes' and is wrong, he has to pay $52. (In other words, his payoff for a correct vote is $12, and the payoff for a wrong vote is -$52.) What is the expected value of his betting if he always says 'yes' when he is in a green room?
For every 20 times that Tony says 'yes', he wins 18 times (wins $12x18) and he loses twice (loses $52x2), consistent with his posterior. On average he wins $5.60 per bet, or $2.80 per night. (He says "yes" to the bet 1 out of every 2 nights, because that is the frequency with which he finds himself in a green room.) This is a steady money pump in the student's favor.
The correct calculation for Case 1 is:
average payoff per bet = (probability of being right) x (payoff if right) + (probability of being wrong) x (payoff if wrong) = 0.9 x $12 + 0.1 x (-$52) = $5.60.
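If you'd rather check this by brute force, a quick Monte Carlo along the following lines should land near $5.60 per bet and $2.80 per night (an illustrative sketch; the room-assignment rule is the one described above):

```python
import random

# Simulate Tony's nights: one coin flip per night, a random room in the resulting hotel,
# and a bet only on the nights he wakes in a green room.
nights = 100_000
total, bets = 0, 0
for _ in range(nights):
    hotel_green = random.random() < 0.5
    in_green_room = random.random() < (18/20 if hotel_green else 2/20)
    if in_green_room:
        bets += 1
        total += 12 if hotel_green else -52
print(total / bets, total / nights)   # ~5.6 per bet, ~2.8 per night
```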
Case 2: Suppose that Tony doesn’t pocket the money, but instead the money is placed in a tip jar in the breezeway. Tony’s betting contributes $2.80 per night on average to the tip jar.
Case 3: Suppose there is nothing special about Tony, and all the students get to make bets. They will all make bets when they wake in green rooms, and each add $2.80 per night to the tip jar on average. Collectively, the students add $56 per night to the tip jar on average. (If you think about it a minute, you will see that they add $216 to the tip jar on nights that they are assigned to Hotel Green and lose $104 on nights that they are assigned to Hotel Red.) If the money is distributed back to the students, they are each making $2.80 per night, the same steady money pump in their favor that Tony took advantage of in Case 1.
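The per-night breakdown in that parenthetical can be checked directly (my own restatement of the numbers above):

```python
# On a Hotel Green night, 18 students are in green rooms and all bet correctly;
# on a Hotel Red night, 2 students are in green rooms and both bet wrongly.
green_night = 18 * 12         # +$216
red_night   = 2 * (-52)       # -$104
print((green_night + red_night) / 2)        # +$56 per night for the group
print((green_night + red_night) / 2 / 20)   # +$2.80 per student per night
```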
Case 4: Now consider the case described in the OP. We already understand that the students will vote "yes" if they wake in a green room and that they expect to make money doing so. Now the rules are going to change, however, so that when all the green roomers unanimously vote "yes", $12 is added to the tip jar if they are correct and $52 is subtracted if they are wrong. Since the students are assigned to Hotel Green half the time and to Hotel Red half the time, on average the tip jar loses $20 every night. Suddenly, the students are each losing $1 a night!
Each time a student votes correctly, it is because they are all in Hotel Green, as per the initial setup of the problem in the OP. So all 18 green roomer votes are correct and collectively earn $12 for that night. The payoff is $12/18 per correct vote. Likewise, the payoff per wrong vote is -$52/2.
So the correct calculation for Case 4 is as follows:
average payoff per bet = (probability of being right) x (payoff if right) + (probability of being wrong) x (payoff if wrong) = 0.9 x ($12/18) + 0.1 x (-$52/2) = -$2.
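Written out as a couple of lines of arithmetic (my sketch, using the group payoffs of +$12/-$52 from Case 4 above):

```python
# Per night for the whole group: +$12 on Hotel Green nights, -$52 on Hotel Red nights, each with probability 1/2.
group_per_night = 0.5 * 12 + 0.5 * (-52)        # -$20 per night, i.e. -$1 per student per night
# Per individual vote: the group payoff is split over 18 correct (or 2 wrong) voters.
per_vote = 0.9 * (12 / 18) + 0.1 * (-52 / 2)    # -$2 per vote
print(group_per_night, per_vote)
```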
So in conclusion, in the OP problem, the green roomer must recognize that he is dealing with Case 4 and not Case 1: the posterior is the same, but the payoff structure is different.
I believe both of your computations are correct, and the fallacy lies in mixing up the payoff for the group with the payoff for the individual - a mix-up which the framing of the problem, with multiple identities that are actually the same person, does suggest. More precisely, the probabilities for the individual are 90/10, but the probabilities for the group are 50/50, and if you compute payoffs for the group (+$12/-$52), you need to use the group probabilities. (It would be different if the narrator ("I") offered the guinea pig ("you") ...)