You wake up in a hospital bed, remembering nothing of your past life. A stranger sits beside the bed, smiling. He says:
"I happen to know an amusing story about you. Many years ago, before you were born, your parents were arguing about how many kids to have. They settled on flipping a coin. If the coin came up heads, they would have one child. If it came up tails, they would have ten."
"I will tell you which way the coin came up in a minute. But first let's play a little game. Would you like a small piece of chocolate, or a big tasty cake? There's a catch though: if you choose the cake, you will only receive it if you're the only child of your parents."
Stuart Armstrong has proposed a solution to this problem (see the fourth model in his post): you switch to caring about the average utility received by all the kids in your branch. This doesn't change the utility any individual kid gets in any possible world, but it makes the problem amenable to UDT, which says all agents would have precommitted to choosing the cake as long as it's worth more than two pieces of chocolate (the first model in Stuart's post).
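To spell out where the "two pieces of chocolate" threshold comes from, here is a minimal sketch of the precommitment arithmetic under that branch-averaged utility. The utility values CAKE and CHOC are made up for illustration; this is my own sketch, not code from Stuart's post.

```python
# Hypothetical utilities, purely for illustration.
CAKE = 3.0   # utility of the big tasty cake
CHOC = 1.0   # utility of a small piece of chocolate

# Policy "choose cake":
#   heads branch (1 kid):   the only child gets the cake -> branch average = CAKE
#   tails branch (10 kids): nobody qualifies for the cake -> branch average = 0
eu_cake = 0.5 * CAKE + 0.5 * 0.0

# Policy "choose chocolate":
#   both branches: every kid gets chocolate -> branch average = CHOC
eu_choc = 0.5 * CHOC + 0.5 * CHOC

# Cake wins exactly when 0.5 * CAKE > CHOC,
# i.e. when the cake is worth more than two pieces of chocolate.
print(eu_cake > eu_choc)
```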
But.
Creating one of two physically separate worlds with 50% probability each should be decision-theoretically equivalent to creating both of them with certainty. In other words, a correct solution should still work if the coin is quantum. So the problem should be equivalent to creating 11 kids, offering each of them chocolate or cake, and giving the cake only if you're the first kid. But would you really choose the cake in this case, knowing that you could get the chocolate for certain? What if there were 1001 kids? This is a hard bullet to swallow, and it seems to suggest that Stuart's analysis of his first model may be incorrect.
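For contrast, here is the same arithmetic for the 11-real-kids version, again with made-up utility values and assuming the eleven positions are weighted equally: the cake now pays off only if it's worth more than eleven pieces of chocolate (or 1001, in the larger version), not two.

```python
CAKE = 3.0   # same hypothetical utilities as above
CHOC = 1.0
N = 11       # all eleven kids exist; only the first one can receive the cake

# Choosing cake gets you the cake with probability 1/N and nothing otherwise;
# equivalently, the average over all N kids is CAKE / N if everyone chooses cake.
eu_cake = CAKE / N
eu_choc = CHOC

print(eu_cake > eu_choc)   # False unless CAKE > N * CHOC
```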
I await comments from Stuart or anyone else who can figure this out.
Correct as in it uniquely fulfills the desiderata of probability theory, on which the whole thing can be based. Ooh, found a link; I didn't know that was online. Particularly important for these purposes is the principle that states with identical information should be assigned identical probabilities. You just know that you are one of the people in the problem. That breaks the symmetry between the two coin-flip outcomes (since the number of people differs depending on the outcome), but it creates a symmetry between all the states specified by "you are one of the people in this problem."
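To spell that out with a rough sketch (assuming each of the eleven "you are one of the kids" states really does get equal probability):

```python
# One person-state if the coin came up heads, ten if it came up tails.
# By the symmetry above, each of the eleven states gets equal probability.
states_heads = 1
states_tails = 10
total = states_heads + states_tails

p_heads = states_heads / total   # 1/11
p_tails = states_tails / total   # 10/11

print(p_heads, p_tails)
```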
It's not that I'm saying it's wrong to approach this as a decision problem, just that ordinary probability applies fine in this case. If you get a different result from a decision theory than from Bayesian probability, though, that is bad: bad in the sense of provably worse by the measure of expected utility, unless circumstances are very extreme.
We're still talking past each other, I'm afraid.
What's "expected utility" in situations with indexical uncertainty? If you take the "expectation" according to an equal weighting of all indistinguishable observer-moments, isn't your reasoning circular?
Also, I'm interested in hearing your response to rwallace's scenario, which seems to show that assigning equal probabilities to indistinguishable observer-moments leads to time-inconsistency.