A technical report from the Future of Humanity Institute (authored by me) on why anthropic probability isn't enough to reach decisions in anthropic situations. You also have to choose your decision theory and take into account your altruism towards your copies. These components can co-vary while leaving your ultimate decision the same: typically, EDT agents using SSA will reach the same decisions as CDT agents using SIA, and altruistic causal agents may decide the same way as selfish evidential agents.
Anthropics: why probability isn't enough
This paper argues that the current treatment of anthropic and self-locating problems over-emphasises the importance of anthropic probabilities and ignores other relevant factors, such as whether the various copies of the agents in question consider themselves to be acting in a linked fashion and whether they are altruistic towards one another. These issues, generally irrelevant for non-anthropic problems, come to the forefront in anthropic situations and are at least as important as the anthropic probabilities: indeed, they can erase the difference between different theories of anthropic probability, or increase their divergence. These considerations support treating decisions, rather than probabilities, as the fundamental objects of interest in anthropic problems.
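To give a feel for the kind of equivalence claimed above, here is a minimal sketch, assuming a standard Sleeping Beauty style bet (one awakening on heads, two on tails; a coupon costing x is offered at every awakening and pays $1 if the coin came up tails). The specific setup and numbers are illustrative, not reproduced from the report.

```python
# Minimal sketch: Sleeping Beauty style bet (illustrative assumptions).
# 1 awakening on heads, 2 on tails; a coupon costing x pays $1 on tails
# and is offered at every awakening.

def edt_ssa_value(x):
    # SSA probabilities: P(heads) = P(tails) = 1/2. EDT treats the two
    # tails awakenings as making linked decisions, so "buy" yields the
    # total payoff 2*(1 - x) in the tails world.
    return 0.5 * 2 * (1 - x) + 0.5 * (-x)

def cdt_sia_value(x):
    # SIA probabilities: P(tails) = 2/3, P(heads) = 1/3. CDT counts only
    # the causal impact of this copy's own purchase.
    return (2 / 3) * (1 - x) + (1 / 3) * (-x)

for x in [0.5, 0.6, 0.7, 0.8]:
    print(f"price {x:.2f}: EDT+SSA buys? {edt_ssa_value(x) > 0}, "
          f"CDT+SIA buys? {cdt_sia_value(x) > 0}")
```

Both expected values are positive exactly when x < 2/3 (they differ only by a factor of 3/2), so the two agents always make the same choice despite disagreeing about the probabilities.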
Doesn't the isomorphism between them (EDT with SSA and CDT with SIA) hold only if your SSA reference class is exactly the set of agents responsible for your decision?
(This question is also for Stuart -- by the way, thanks for writing this, the exposition of the divided responsibility idea was useful!)
In the anthropic decision theory formalism (see the link I posted in answer to Luke_A_Somers), SSA-like behaviour emerges from average utilitarianism (and also for selfish agents, but that case is more complicated). In this context, the whole reference-class complexity is the complexity of deciding which class of agents you average over.
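As a rough illustration, using the same toy Sleeping Beauty bet as in the post above (my simplification here, not the full formalism): keep the non-anthropic prior of 1/2 on the coin and linked decisions, and change only the utility function.

```python
# Toy illustration (simplified assumptions): same bet as above -- a coupon
# costing x pays $1 on tails, offered at each awakening; 1 awakening on
# heads, 2 on tails. Non-anthropic prior of 1/2 on the coin, linked decisions.

def total_utilitarian_value(x):
    # Sum the winnings of all copies: SIA-like ("thirder") betting,
    # buy iff x < 2/3.
    return 0.5 * 2 * (1 - x) + 0.5 * (-x)

def average_utilitarian_value(x):
    # Average the winnings over the copies averaged over in each world
    # (2 awakened copies on tails, 1 on heads): SSA-like ("halfer") betting,
    # buy iff x < 1/2.
    return 0.5 * (2 * (1 - x) / 2) + 0.5 * (-x / 1)

for x in [0.40, 0.45, 0.55, 0.60]:
    print(f"price {x:.2f}: total buys? {total_utilitarian_value(x) > 0}, "
          f"average buys? {average_utilitarian_value(x) > 0}")
```

Summing gives the thirder threshold (buy below 2/3), while averaging over the awakened copies gives the halfer threshold (buy below 1/2); moving agents in or out of the class you average over shifts that threshold, which is where the reference-class complexity bites.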