The source is here. I'll restate the problem in simpler terms:
You are one of a group of 10 people who care about saving African kids. You will all be put in separate rooms, then I will flip a coin. If the coin comes up heads, a random one of you will be designated as the "decider". If it comes up tails, nine of you will be designated as "deciders". Next, I will tell each of you your own status, without telling you the status of the others. Each decider will be asked to say "yea" or "nay". If the coin came up tails and all nine deciders say "yea", I donate $1000 to VillageReach. If the coin came up heads and the sole decider says "yea", I donate only $100. If all deciders say "nay", I donate $700 regardless of the result of the coin toss. If the deciders disagree, I don't donate anything.
First, let's work out what joint strategy you should coordinate on beforehand. If everyone pledges to answer "yea" if they end up as a decider, the expected donation is 0.5*1000 + 0.5*100 = 550. Pledging to say "nay" gives 700 for sure, so that is the better strategy.
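To make the comparison concrete, here is a minimal Python sketch of the setup and the two pledges (the function names and the Monte Carlo check are my own additions, not part of the problem):

```python
import random

def donation(coin, answers):
    """Donation rule from the problem: $1000 if tails and all deciders say
    "yea", $100 if heads and the sole decider says "yea", $700 if all
    deciders say "nay", and $0 if the deciders disagree."""
    if all(a == "nay" for a in answers):
        return 700
    if all(a == "yea" for a in answers):
        return 1000 if coin == "tails" else 100
    return 0  # deciders disagree

def expected_donation(pledge, trials=100_000):
    """Monte Carlo estimate of the ex-ante expected donation when every
    decider sticks to the same pledged answer."""
    total = 0
    for _ in range(trials):
        coin = random.choice(["heads", "tails"])
        n_deciders = 9 if coin == "tails" else 1
        total += donation(coin, [pledge] * n_deciders)
    return total / trials

print(expected_donation("yea"))  # ~550 = 0.5*1000 + 0.5*100
print(expected_donation("nay"))  # exactly 700 in every trial
```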
But consider what happens when you're already in your room, I tell you that you're a decider, and you don't know how many other deciders there are. Being told you're a decider is genuine new information - no anthropic funny business, just the regular kind of information - so you should do a Bayesian update: the coin is 90% likely to have come up tails. Saying "yea" then gives an expected donation of 0.9*1000 + 0.1*100 = 910. This looks more attractive than the 700 for "nay", so you decide to go with "yea" after all.
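That update is just ordinary Bayes on the observation "I was told I'm a decider". A quick sketch of the arithmetic (variable names are mine):

```python
# P(I am a decider | coin): 9 of 10 people decide on tails, 1 of 10 on heads.
p_decider_given_tails = 9 / 10
p_decider_given_heads = 1 / 10
prior_tails = prior_heads = 1 / 2

# Bayes: P(tails | I am a decider)
posterior_tails = (p_decider_given_tails * prior_tails) / (
    p_decider_given_tails * prior_tails + p_decider_given_heads * prior_heads
)
print(posterior_tails)  # 0.9

# Post-update expected donations for each answer
print(posterior_tails * 1000 + (1 - posterior_tails) * 100)  # 910 for "yea"
print(700)                                                   # 700 for "nay"
```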
Only one answer can be correct. Which is it and why?
(No points for saying that UDT or reflective consistency forces the first solution. If that's your answer, you must also find the error in the second one.)
Okay. If that is indeed the intention, then I declare this an anthropic problem, even if it describes itself as non-anthropic. It seems to me that anthropic reasoning was never fundamentally about fuzzy concepts like "updating on consciousness" or "updating on the fact that you exist" in the first place; indeed, I've always suspected that whatever it is that makes anthropic problems interesting and confusing has nothing to do with consciousness. Currently, I think that in essence it's about a decision algorithm locating other decision algorithms correlated with it within the space of possibilities implied by its state of knowledge. In this problem, if we assume that all deciders are perfectly correlated, then (I predict) the solution won't be any easier than just answering it for the case where all the deciders are copies of the same person.
(Though I'm still going to try to solve it.)
Sounds right, if you unpack "implied by its state of knowledge" to not mean "only consider possible worlds consistent with observations". Basically, anthropic reasoning is about logical (agent-provable even) uncertainty...