I stumbled upon this paper by Andy Egan and thought that its main result should be shared. We already have Newcomb's problem as a counterexample to CDT, but that can be dismissed as speculative or science-fictiony. In this paper, Egan constructs a smoking lesion counterexample to CDT, and makes the fascinating claim that one can construct counterexamples to CDT by starting from any counterexample to EDT and modifying it systematically.
The "smoking lesion" counterexample to EDT goes like this:
- There is a rare gene (G) that both causes people to smoke (S) and causes cancer (C). Susan mildly prefers smoking to not smoking - should she smoke?
EDT implies that she should not smoke (since the likely outcome in a world where she doesn't smoke is better than the likely outcome in a world where she does). CDT correctly allows her to smoke: since smoking has no causal effect on cancer here, she shouldn't care about the information her decision reveals about her genes.
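To make the disagreement concrete, here is a minimal Python sketch. All of the numbers (gene frequency, smoking rates, cancer risks, utilities) are hypothetical, chosen only to show how the two calculations come apart.

```python
# Classic smoking lesion: gene G causes both smoking S and cancer C;
# smoking itself has no causal effect on cancer.
# All probabilities and utilities are illustrative assumptions.

P_G = 0.01               # prior probability of the rare gene
P_S_GIVEN_G = 0.95       # carriers almost always smoke
P_S_GIVEN_NOT_G = 0.001  # non-carriers almost never smoke
P_C_GIVEN_G = 0.90       # the gene, not smoking, causes cancer
P_C_GIVEN_NOT_G = 0.01

U_SMOKE = 1      # mild utility of smoking
U_CANCER = -100  # disutility of cancer

def p_gene_given(smokes: bool) -> float:
    """Posterior P(G | S) by Bayes' rule."""
    p_s_g = P_S_GIVEN_G if smokes else 1 - P_S_GIVEN_G
    p_s_ng = P_S_GIVEN_NOT_G if smokes else 1 - P_S_GIVEN_NOT_G
    return p_s_g * P_G / (p_s_g * P_G + p_s_ng * (1 - P_G))

def edt_value(smokes: bool) -> float:
    """EDT: condition on the act as evidence about the gene."""
    p_g = p_gene_given(smokes)
    p_c = p_g * P_C_GIVEN_G + (1 - p_g) * P_C_GIVEN_NOT_G
    return (U_SMOKE if smokes else 0) + p_c * U_CANCER

def cdt_value(smokes: bool) -> float:
    """CDT: intervening on S leaves the credence in G at its prior."""
    p_c = P_G * P_C_GIVEN_G + (1 - P_G) * P_C_GIVEN_NOT_G
    return (U_SMOKE if smokes else 0) + p_c * U_CANCER

for smokes in (True, False):
    print(f"smoke={smokes}: EDT={edt_value(smokes):+.2f}  CDT={cdt_value(smokes):+.2f}")
# EDT prefers not smoking (smoking is bad news about G);
# CDT prefers smoking (smoking has no causal effect on cancer here).
```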
But we can modify this problem to become a counterexample to CDT, as follows:
- There is a rare gene (G) that both causes people to smoke (S) and makes smokers vulnerable to cancer (C). Susan mildly prefers smoking to not smoking - should she smoke?
Here EDT correctly tells her not to smoke. CDT refuses to treat her decision as evidence that she has the gene, and tells her to smoke. But this makes her very likely to get cancer, since she is very likely to have the gene given that she smokes.
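The same sketch adapted to the modified problem, with the same made-up numbers: cancer risk now depends on both G and S, and the two theories trade places.

```python
# Modified smoking lesion: gene G causes smoking S, and makes smokers
# (and only smokers) vulnerable to cancer C. Same illustrative numbers.

P_G = 0.01
P_S_GIVEN_G, P_S_GIVEN_NOT_G = 0.95, 0.001
P_C = {  # P(C | S, G): high cancer risk requires the gene AND smoking
    (True, True): 0.90,
    (True, False): 0.01,
    (False, True): 0.01,
    (False, False): 0.01,
}
U_SMOKE, U_CANCER = 1, -100

def p_gene_given(smokes: bool) -> float:
    """Posterior P(G | S) by Bayes' rule."""
    p_s_g = P_S_GIVEN_G if smokes else 1 - P_S_GIVEN_G
    p_s_ng = P_S_GIVEN_NOT_G if smokes else 1 - P_S_GIVEN_NOT_G
    return p_s_g * P_G / (p_s_g * P_G + p_s_ng * (1 - P_G))

def value(smokes: bool, p_g: float) -> float:
    p_c = p_g * P_C[(smokes, True)] + (1 - p_g) * P_C[(smokes, False)]
    return (U_SMOKE if smokes else 0) + p_c * U_CANCER

for smokes in (True, False):
    edt = value(smokes, p_gene_given(smokes))  # the act is evidence about G
    cdt = value(smokes, P_G)                   # intervention keeps the prior on G
    print(f"smoke={smokes}: EDT={edt:+.2f}  CDT={cdt:+.2f}")
# Now EDT says don't smoke, while CDT says smoke: CDT prices the risk at
# the rare prior P(G), even though P(G | S) is about 0.91 on these numbers.
```

With a common gene instead (say P_G = 0.5), CDT also recommends abstaining, which is the imperfection noted below.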
The idea behind this new example is that EDT runs into paradoxes whenever there is a common cause (G) of both some action (S) and some undesirable consequence (C). We then take that problem and modify it so that there is a common cause (G) of both some action (S) and of a causal relationship between that action and the undesirable consequence (S→C). The result is then often a paradox for CDT.
It isn't a perfect match - for instance, if the gene G were common, then CDT would also say not to smoke in the modified smoking lesion. But it still seems that most EDT paradoxes can be adapted into paradoxes of CDT.
A solution to a decision problem has two components. The first component is translating the problem from natural language into math; the second component is running the numbers.
CDT's core is:
$$U(A) = \sum_j P(A > O_j) \, D(O_j)$$

where $A > O_j$ is the counterfactual "if I were to do $A$, then $O_j$ would obtain" and $D(O_j)$ is the desirability of outcome $O_j$.
Thus, when faced with a problem expressed in natural language, a CDTer needs to turn the problem into a causal graph (in order to do counterfactual reasoning correctly), and then use that causal graph to pick the action with the highest expected value.
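That two-step pipeline can itself be sketched, under the same assumptions as above: take the counterfactual probabilities $P(A > O_j)$ as given (in practice they come out of the causal graph), and pick the act that maximizes $U(A)$. The dictionaries below just replay CDT's numbers for the modified smoking lesion.

```python
# A minimal sketch of CDT's core, U(A) = sum_j P(A > O_j) * D(O_j).
# p_counterfactual[o] stands for P(A > o): the probability that outcome o
# would obtain if the agent did A, as read off a causal graph.

def causal_expected_utility(p_counterfactual: dict[str, float],
                            desirability: dict[str, float]) -> float:
    return sum(p * desirability[o] for o, p in p_counterfactual.items())

def cdt_choose(acts: dict[str, dict[str, float]],
               desirability: dict[str, float]) -> str:
    """Return the act A that maximizes U(A)."""
    return max(acts, key=lambda a: causal_expected_utility(acts[a], desirability))

# Counterfactual probabilities from the modified smoking lesion, with the
# gene held at its rare prior under intervention (CDT's calculation):
acts = {
    "smoke":   {"smoke & cancer": 0.0189, "smoke & no cancer": 0.9811},
    "abstain": {"no smoke & cancer": 0.0100, "no smoke & no cancer": 0.9900},
}
desirability = {
    "smoke & cancer": -99, "smoke & no cancer": 1,
    "no smoke & cancer": -100, "no smoke & no cancer": 0,
}
print(cdt_choose(acts, desirability))  # -> "smoke", as in the text above
```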
I'm aware that Newcomb's Problem confuses other people, and so they'll build the wrong causal graph or forget to actually calculate $P(A > O_j)$ when doing their expected value calculation. I make no defense of their mistakes, but it seems to me that giving a special new name to not making mistakes is the wrong way to go about this problem.
That is the math for the notion "Calculate the expected utility of a counterfactual decision". That happens to be the part of the decision theory that is most trivial to formalize as an equation. That doesn't mean you can fundamentally replace all the other parts of the theory---change the actual meaning represented by those letters---and still be talking about the same decision theory.
The possible counterfactual outcomes being multiplied and summed within CDT are just not the same thing that you advocate ...