There seems to be some confusion about how to deal with correlated decision making - such as with absent-minded drivers and multiple copies of yourself; any situation in which many agents will all reach the same decision. Building on Nick Bostrom's division-of-responsibility principle mentioned in Outlawing Anthropics, I propose the following correlated decision principle:
CDP: If you are part of a group of N individuals whose decision is perfectly correlated, then you should reason as if you had a 1/N chance of being the dictator of the group (in which case your decision is applied to all) and an (N-1)/N chance of being a dictatee (in which case your decision is ignored).
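In numerical terms, the decision-relevant expected gain under the CDP is the posterior probability of each possible world, times the 1/N chance of being that world's dictator, times the total gain your decision would produce there; the dictatee branches contribute nothing, since your decision is then ignored. A minimal sketch (the function and its argument structure are my own illustration, not part of the principle):

```python
def cdp_value(worlds):
    """Expected decision-relevant gain under the CDP.

    `worlds` is a list of (posterior, group_size, group_gain) triples:
    the probability of each world given your observations, the number N
    of perfectly correlated deciders in it, and the total gain to the
    group if the decision under consideration is adopted.  You are the
    dictator with probability 1/N; in the other (N-1)/N of cases your
    decision is ignored and contributes nothing.
    """
    return sum(p * gain / n for p, n, gain in worlds)
```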
What justification could there be for this principle? A simple thought experiment: imagine you were one of N individuals who each had to make a decision in secret. One of the decisions is opened at random, the others are discarded, and each person has his mind modified to believe that the decision opened was in fact the one he made. This process is called a "dictator filter".
If you apply this dictator filter to any situation S, then in "S + dictator filter" you should reason as the CDP prescribes. If you apply it to perfectly correlated decision making, however, the dictator filter changes nothing about anyone's decision - hence we should treat "perfectly correlated" as isomorphic to "perfectly correlated + dictator filter", which establishes the CDP.
Used alongside the SIA (Self-Indication Assumption), this solves many puzzles on this blog, without needing advanced decision theory.
For instance, the situation in Outlawing Anthropics is simple: the SIA implies the 90% view, giving you a 90% chance of being in a group of 18 and a 10% chance of being in a group of two. You were then offered a deal in which $3 is taken from each red room and $1 given to each green room. The initial expected gain from accepting the deal was -$20; the problem was that upon waking in a green room, you seemed far more likely to be in the group of 18, giving an expected gain of +$5.60. The CDP cancels out this effect, returning you to an expected individual gain of -$2 and a global expected gain of -$20.
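To make the arithmetic explicit, here is a quick check using the payoffs above (the variable names are mine):

```python
# A fair coin puts either 18 or 2 of the 20 copies in green rooms; the
# deal gives $1 to each green room and takes $3 from each red room.
gain_heads = 18 * 1 - 2 * 3   # +$12: 18 green rooms, 2 red rooms
gain_tails = 2 * 1 - 18 * 3   # -$52: 2 green rooms, 18 red rooms

# Ex ante, the deal is worth -$20:
print(0.5 * gain_heads + 0.5 * gain_tails)            # -20.0

# Waking in a green room, the SIA posterior is 90%/10%, and the naive
# calculation makes the deal look profitable:
print(0.9 * gain_heads + 0.1 * gain_tails)            # 5.6

# The CDP weights each world's group gain by your 1/N chance of being
# the dictator among that world's green-room deciders:
print(0.9 * gain_heads / 18 + 0.1 * gain_tails / 2)   # -2.0
```

The last line is equivalently cdp_value([(0.9, 18, gain_heads), (0.1, 2, gain_tails)]) in the sketch above.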
The Absent-Minded Driver problem is even more interesting, and requires a more subtle argument. The standard payoffs are 0 for exiting at the first intersection, 4 for exiting at the second, and 1 for continuing past both. The SIA implies that if your probability of continuing is p, then the chance that you are at the first intersection is 1/(1+p), while the chance that you are at the second is p/(1+p). Using these numbers, it appears that your expected gain is [p² + 4(1-p)p + p(p + 4(1-p))]/(1+p), which is 2[p² + 4(1-p)p]/(1+p).
If you were the dictator, deciding the behaviour at both intersections, your expected gain would be 1+p times this amount, since the driver at the first intersection exists with probability 1, while the one at the second exists with probability p. Since there are N=2 individuals, the CDP thus cancels both the 2 and the (1+p) factors, returning the situation to the expected gain of p² + 4(1-p)p, maximised at p = 2/3.
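A quick numerical check of this cancellation (a sketch; the grid search is just a simple way to locate the maxima):

```python
# Absent-minded driver payoffs: 0 for exiting at the first intersection,
# 4 for exiting at the second, 1 for continuing past both.
def planning_gain(p):
    """Ex ante expected gain p^2 + 4(1-p)p, which the CDP recovers."""
    return p**2 + 4 * (1 - p) * p

def naive_sia_gain(p):
    """The uncorrected SIA-weighted gain 2[p^2 + 4(1-p)p]/(1+p)."""
    return 2 * planning_gain(p) / (1 + p)

grid = [i / 1000 for i in range(1001)]
print(max(grid, key=naive_sia_gain))  # ~0.528: the naive SIA optimum
print(max(grid, key=planning_gain))   # ~0.667: the CDP optimum, p = 2/3
```

Without the CDP correction, the (1+p) denominator drags the optimal continuing probability down to roughly 0.53; with it, the planning-optimal p = 2/3 is recovered.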
The CDP also solves the issues in my old Sleeping Beauty problem.
Do you mean "1+p" instead of "2-p" at the end there? If not, where does "2-p" come from?
Why do you say that N=2, since the number of individuals is actually random? If you EXIT at X, then the individual at Y doesn't exist, right?
Do you think CDP can be formalized sufficiently so that it can be applied mechanically, after transforming a decision problem into some formal representation (like a decision tree, or a world program as in UDT1)? The way it is stated now, it seems too ambiguous to determine what the solution to a given problem under CDP would be.
I now think I've got a formalisation that works; I'll put it up in a subsequent post.