I stumbled upon this paper by Andy Egan and thought its main result was worth sharing. We already have Newcomb's problem as a counterexample to CDT, but that can be dismissed as speculative or science-fictiony. In this paper, Egan constructs a smoking lesion counterexample to CDT, and makes the fascinating claim that one can construct counterexamples to CDT by taking any counterexample to EDT and modifying it systematically.
The "smoking lesion" counterexample to EDT goes like this:
- There is a rare gene (G) that both causes people to smoke (S) and causes cancer (C). Susan mildly prefers smoking to not smoking - should she smoke?
EDT implies that she should not smoke (since the likely outcome in a world where she doesn't smoke is better than the likely outcome in a world where she does). CDT correctly allows her to smoke: she shouldn't care about the information her decision reveals about her genes.
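To make the two verdicts concrete, here is a minimal sketch of the expected-utility calculations. The gene frequency, the strength of the smoking-gene correlation, and the utilities are all made-up illustrative numbers, not figures from Egan's paper.

```python
# Illustrative (made-up) numbers for the original smoking lesion,
# where cancer depends on the gene G alone, not on smoking.
P_G = 0.001                # prior probability of the rare gene
P_G_GIVEN_SMOKE = 0.9      # smoking is strong evidence of having G
P_G_GIVEN_ABSTAIN = 0.0001 # not smoking is strong evidence against G
U_SMOKE = 1                # mild pleasure from smoking
U_CANCER = -100            # cancer is much worse

# EDT conditions on the act, treating it as evidence about G.
edt_smoke = U_SMOKE + P_G_GIVEN_SMOKE * U_CANCER      # 1 - 90  = -89
edt_abstain = P_G_GIVEN_ABSTAIN * U_CANCER            #         = -0.01

# CDT uses the prior over G: smoking cannot cause the gene,
# and here the gene alone determines the cancer risk.
cdt_smoke = U_SMOKE + P_G * U_CANCER                  # 1 - 0.1 =  0.9
cdt_abstain = P_G * U_CANCER                          #         = -0.1

print("EDT says:", "smoke" if edt_smoke > edt_abstain else "abstain")  # abstain
print("CDT says:", "smoke" if cdt_smoke > cdt_abstain else "abstain")  # smoke
```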
But we can modify this problem to become a counterexample to CDT, as follows:
- There is a rare gene (G) that both causes people to smoke (S) and makes smokers vulnerable to cancer (C). Susan mildly prefers smoking to not smoking - should she smoke?
Here EDT correctly tells her not to smoke. CDT refuses to treat her decision as evidence that she has the gene, and tells her to smoke. But this makes her very likely to get cancer, since she is very likely to have the gene given that she smokes.
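With the same illustrative numbers as above, the only change is when cancer occurs: it now requires both the gene and smoking, so abstaining guarantees no cancer. Again, a sketch under assumed probabilities and utilities, not anything from the paper.

```python
# Same made-up numbers, but in the modified lesion cancer occurs
# only if Susan has the gene G *and* smokes.
P_G = 0.001
P_G_GIVEN_SMOKE = 0.9
U_SMOKE = 1
U_CANCER = -100

# EDT: choosing to smoke is strong evidence of G, hence of cancer.
edt_smoke = U_SMOKE + P_G_GIVEN_SMOKE * U_CANCER   # 1 - 90  = -89
edt_abstain = 0.0                                  # no smoking, no cancer

# CDT: smoking can't cause the gene, so it weighs the causal risk
# "cancer if I smoke" by the tiny prior P(G).
cdt_smoke = U_SMOKE + P_G * U_CANCER               # 1 - 0.1 =  0.9
cdt_abstain = 0.0

print("EDT says:", "smoke" if edt_smoke > edt_abstain else "abstain")  # abstain
print("CDT says:", "smoke" if cdt_smoke > cdt_abstain else "abstain")  # smoke
```

CDT recommends smoking, yet conditional on Susan actually smoking she very probably has the gene and so very probably gets cancer - that is the paradox.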
The idea behind this new example is that EDT runs into paradoxes whenever there is a common cause (G) of both some action (S) and some undesirable consequence (C). We then modify the problem so that G is a common cause of the action (S) and of a causal link from that action to the undesirable consequence (S→C). The result is then often a paradox for CDT.
It isn't a perfect match - for instance, if the gene G were common, then CDT would say not to smoke in the modified smoking lesion. But it still seems that most EDT paradoxes can be adapted into paradoxes for CDT.
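The caveat is visible in the (still hypothetical) numbers from the sketch above: CDT prefers smoking only while P(G) < U_SMOKE/|U_CANCER|, so a common gene flips its verdict.

```python
# Reusing the illustrative utilities from the sketch above, but with a
# common gene: CDT now agrees with EDT that Susan shouldn't smoke.
U_SMOKE = 1
U_CANCER = -100
P_G_COMMON = 0.5

cdt_smoke = U_SMOKE + P_G_COMMON * U_CANCER   # 1 - 50 = -49
print("CDT says:", "smoke" if cdt_smoke > 0 else "abstain")  # abstain
```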
As for Newcomb's problem: Omega makes his prediction by looking at your state before setting the boxes. Call P the property of your state that is critical for his decision. It may be the whole microscopic state of your brain and environment, or it might be some higher-level property like a firm belief that one-boxing is the correct choice. In any case, there must be such a P, and it is from P, not from your decision, that the causal arrow to the money in the box runs. Both your decision and the money in the box are correlated with P. Likewise, in my version of the smoking problem, both your decision to smoke and your cancer risk are correlated with the genetic lesion. So I think my version of the problem is isomorphic to Newcomb's.