The source is here. I'll restate the problem in simpler terms:
You are one of a group of 10 people who care about saving African kids. You will all be put in separate rooms, then I will flip a coin. If the coin comes up heads, a random one of you will be designated as the "decider". If it comes up tails, nine of you will be designated as "deciders". Next, I will tell everyone their status, without telling the status of others. Each decider will be asked to say "yea" or "nay".

- If the coin came up tails and all nine deciders say "yea", I donate $1000 to VillageReach.
- If the coin came up heads and the sole decider says "yea", I donate only $100.
- If all deciders say "nay", I donate $700 regardless of the result of the coin toss.
- If the deciders disagree, I don't donate anything.
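To pin the rules down, here's a small Python sketch of the payoff function (my own encoding of the setup above, nothing beyond it):

```python
def donation(coin, answers):
    """Donation given the coin result and the deciders' answers.

    coin: "heads" (one decider) or "tails" (nine deciders).
    answers: one answer per decider, each "yea" or "nay".
    """
    if len(set(answers)) > 1:
        return 0                    # deciders disagree: no donation
    if answers[0] == "nay":
        return 700                  # unanimous "nay": $700 either way
    return 1000 if coin == "tails" else 100  # unanimous "yea"
```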
First let's work out what joint strategy you should coordinate on beforehand. If everyone pledges to answer "yea" in case they end up as deciders, the expected donation is 0.5*1000 + 0.5*100 = 550. Pledging to say "nay" gives 700 for sure, so "nay" is the better strategy.
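In code, the pre-game comparison is just this arithmetic:

```python
# Expected donation for each joint pledge, with a fair coin:
ev_yea = 0.5 * 1000 + 0.5 * 100   # every decider pledges "yea"
ev_nay = 0.5 * 700 + 0.5 * 700    # every decider pledges "nay"
print(ev_yea, ev_nay)  # 550.0 700.0
```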
But consider what happens when you're already in your room, and I tell you that you're a decider, and you don't know how many other deciders there are. This is genuinely new information - no anthropic funny business, just your regular kind of information - so you should do a Bayesian update: you would be told you're a decider with probability 9/10 if the coin came up tails and 1/10 if it came up heads, so the coin is 90% likely to have come up tails. Saying "yea" then gives an expected donation of 0.9*1000 + 0.1*100 = 910. This looks more attractive than the 700 for "nay", so you decide to go with "yea" after all.
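A quick Monte Carlo sketch of the game from one fixed person's point of view (my own sanity check, using only the probabilities above) shows that both numbers are real: the posterior is 0.9 and the conditional expectation given that you're a decider is 910, yet the unconditional average for the "yea" pledge is still 550:

```python
import random

def run(answer, rounds=1_000_000):
    """Every decider pledges the same answer; watch person 0's view."""
    total = total_as_decider = 0
    n_decider = n_tails_as_decider = 0
    for _ in range(rounds):
        tails = random.random() < 0.5
        # Person 0 is a decider with prob 9/10 on tails, 1/10 on heads.
        decider = random.random() < (0.9 if tails else 0.1)
        donation = (1000 if tails else 100) if answer == "yea" else 700
        total += donation
        if decider:
            n_decider += 1
            n_tails_as_decider += tails
            total_as_decider += donation
    print(answer,
          "overall:", total / rounds,
          "given decider:", total_as_decider / n_decider,
          "P(tails | decider):", n_tails_as_decider / n_decider)

run("yea")  # overall ~550, given decider ~910, posterior ~0.9
run("nay")  # overall =700, given decider =700, posterior ~0.9
```

So the 910 isn't an arithmetic slip; it's a genuine conditional expectation. The question is which average should guide the decision.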
Only one answer can be correct. Which is it and why?
(No points for saying that UDT or reflective consistency forces the first solution. If that's your answer, you must also find the error in the second one.)
I claim that the first is correct.
Reasoning: the Bayesian update is correct, but the computation of expected benefit is incomplete. Among all universes, deciders are "group" deciders nine times as often as they are "individual" deciders. Thus, while being a decider indicates you are more likely to be in a tails-universe, the decision of a group decider is 1/9th as important as the decision of an individual decider.
That is to say, your update should shift probability weight toward being a group decider, but you should also recognize that changing your mind is a mildly good idea 9/10 of the time and a very bad idea 1/10 of the time, and that these considerations balance out in favor of NOT changing your mind. Since half the time the decision is made by a single individual, that individual's decision not to change their mind must be as important as all the decisions of the collective the other half of the time.
If the decision of a group decider is "1/9th as important", then what's the correct way to calculate the expected benefit of saying "yea" in the second case? Do you have in mind something like 0.9*1000/9 + 0.1*100/1 = 110? This doesn't look right :-(
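For what it's worth, one consistent reading of the "1/9th as important" proposal (my assumption about what was meant, not anything stated above) is to apply the same importance weights to "nay" as well, so both answers are scored on the same scale:

```python
# Hypothetical scoring: posterior * importance weight on each branch.
w_tails = 0.9 * (1 / 9)   # a group decider's vote counts 1/9
w_heads = 0.1 * 1         # a solo decider's vote counts in full
yea = w_tails * 1000 + w_heads * 100   # ~110, as in the comment above
nay = w_tails * 700 + w_heads * 700    # ~140
print(yea, nay)  # "nay" still wins
```

If that's the intended reading, then since 0.9*(1/9) = 0.1 = 0.1*1 the two weights are equal, and the comparison is just the pre-game 550-vs-700 one scaled down by a factor of five; the 110 only looks wrong when it's compared against the unweighted 700.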