There seems to be some confusion about how to deal with correlated decision making - such as with absent-minded drivers and multiple copies of yourself: any situation in which many agents will all reach the same decision. Building on Nick Bostrom's division-of-responsibility principle mentioned in Outlawing Anthropics, I propose the following correlated decision principle:
CDP: If you are part of a group of N individuals whose decision is perfectly correlated, then you should reason as if you had a 1/N chance of being the dictator of the group (in which case your decision is applied to all) and a (N-1)/N chance of being a dictatee (in which case your decision is ignored).
What justification could there be for this principle? A simple thought experiment: imagine that you are one of N individuals who each have to make a decision in secret. One of the decisions is opened at random, the others are discarded, and each person's mind is then modified so that they believe the chosen decision was in fact their own. This process is called a "dictator filter".
If you apply this dictator filter to any situation S, then in "S + dictator filter" you should reason as in the CDP. Applied to perfectly correlated decision making, however, the dictator filter changes no one's decision at all - hence we should treat "perfectly correlated" as isomorphic to "perfectly correlated + dictator filter", which establishes the CDP.
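To make the principle concrete, here is a minimal sketch in Python (the function name and signature are mine, not the post's):

```python
def cdp_value(n, gain_if_dictator, gain_if_dictatee):
    """Expected gain under the CDP for a group of n perfectly correlated
    deciders: a 1/n chance your decision binds everyone, and an (n-1)/n
    chance it is ignored."""
    return (1.0 / n) * gain_if_dictator + ((n - 1.0) / n) * gain_if_dictatee
```

Since the dictatee branch does not depend on what you choose, only the dictator term varies between your options; the 1/N weight is what does the work in the examples below.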
Used alongside the SIA, this solves many puzzles on this blog, without needing advanced decision theory.
For instance, the situation in Outlawing Anthropics is simple: the SIA implies the 90% view, giving you a 90% chance of being in a group of 18 and a 10% chance of being in a group of two. You are then offered a deal in which $3 is taken from each red room and $1 is given to each green room. The initial expected gain from accepting the deal was -$20; the problem was that once you woke up in a green room, you were far more likely to be in the group of 18, giving an expected gain of +$5.60. The CDP cancels out this effect, returning you to an expected individual gain of -$2 and a global expected gain of -$20.
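The arithmetic can be checked directly; a quick sketch (the variable names and the 20-person setup are my reading of the problem, with +$1 per green room and -$3 per red room):

```python
# Heads: 18 green, 2 red; tails: 2 green, 18 red (20 people total).
def global_gain(n_green, n_red):
    # The deal pays +$1 per green room and takes $3 per red room.
    return n_green * 1 - n_red * 3

heads, tails = global_gain(18, 2), global_gain(2, 18)   # +12 and -52

# Before waking: 50/50 over the coin flip.
ev_initial = 0.5 * heads + 0.5 * tails

# Awake in a green room: SIA gives P(heads) = 0.9, and the naive
# calculation treats your vote as deciding the whole group's fate.
ev_green_naive = 0.9 * heads + 0.1 * tails

# CDP: you are dictator with probability 1/N, where N is the number
# of correlated green-room deciders (18 under heads, 2 under tails).
ev_green_cdp = 0.9 * (1 / 18) * heads + 0.1 * (1 / 2) * tails
```

This reproduces the figures in the text: -$20 initially, +$5.60 naively on waking in green, and -$2 once the CDP discount is applied.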
The Absent-Minded driver problem is even more interesting, and requires more subtle reasoning. The SIA implies that if your probability of continuing is p, then the chance that you are at the first intersection is 1/(1+p), while the chance that you are at the second is p/(1+p). Using these numbers, your expected gain is [p² + 4(1-p)p + p(p + 4(1-p))]/(1+p), which simplifies to 2[p² + 4(1-p)p]/(1+p).
If you were the dictator, deciding the behaviour at both intersections, your expected gain would be (1+p) times this amount, since the driver at the first intersection exists with probability 1, while the one at the second exists with probability p. Since there are N=2 individuals, the CDP thus cancels both the 2 and the (1+p) factors, returning the situation to the expected gain of p² + 4(1-p)p, maximised at p = 2/3.
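These expressions can be verified numerically; a sketch assuming the standard absent-minded-driver payoffs (exit at X gives 0, exit at Y gives 4, continuing through Y gives 1 - the function names are mine):

```python
def planning_gain(p):
    # Gain as computed once, in advance: p^2 + 4p(1-p).
    return p * p + 4 * p * (1 - p)

def sia_gain(p):
    # Per-driver SIA expectation, weighting being at X by 1/(1+p)
    # and being at Y by p/(1+p).
    at_x = p * p + 4 * p * (1 - p)   # continue with prob p, then face Y
    at_y = p + 4 * (1 - p)           # continue -> 1, exit -> 4
    return (at_x + p * at_y) / (1 + p)

# The SIA expression collapses to 2*planning_gain(p)/(1+p):
assert abs(sia_gain(0.5) - 2 * planning_gain(0.5) / 1.5) < 1e-12

# A grid search confirms planning_gain is maximised near p = 2/3:
best_p = max(range(1001), key=lambda i: planning_gain(i / 1000)) / 1000
```

The grid search lands on p = 0.667, matching the analytic optimum p = 2/3.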
The CDP also solves the issues in my old Sleeping Beauty problem.
I didn't check your application of CDP to the problems, but I think you erred at the beginning:
If your decisions are perfectly correlated (I assume that means they're all the same), then you are deciding for the group, because you make the same decision as everyone else. So you should treat it as a 100% chance of being the dictator of the group.
Wouldn't it also mean that we should treat "perfectly correlated" as isomorphic to "uncorrelated + dictator filter", since you always believe your vote determined the outcome?
(Again, I don't know how this affects your application of it.)
By the way, how would your application of CDP/SIA to the Absent-minded Driver problem take into account additional evidence fed to you about which intersection you're at? (Say, someone shows you something that amplifies the odds of being at X by a factor of L - i.e., it has a Bayes factor/likelihood ratio of L.)
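The update itself is just Bayes' rule in odds form; a hedged sketch of that step alone (the function is mine, and says nothing about how the CDP weights would then change):

```python
def posterior_at_x(p, L):
    # SIA prior: P(at X) = 1/(1+p) and P(at Y) = p/(1+p),
    # so the prior odds for X are 1/p.  Evidence with likelihood
    # ratio L in favour of X multiplies those odds by L.
    post_odds = L * (1.0 / p)
    return post_odds / (1 + post_odds)

# With L = 1 (no evidence) this recovers the SIA prior 1/(1+p).
```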
I think my extended set-up can deal with that, too - I'll write it up in a subsequent post.