With all the exotic decision theories floating around here, it doesn't seem like anyone has tried to defend boring old evidential decision theory since AlexMennen last year. So I thought I'd take a crack at it. I might come off a bit more confident than I am, since I'm defending a minority position (I'll leave it to others to bring up objections). But right now, I really do think that naive EDT, the simplest decision theory, is also the best decision theory.
Everyone agrees that Smoker's lesion is a bad counterexample to EDT, since it turns out that smoking actually does cause cancer. But people seem to think that this is just an unfortunate choice of thought experiment, and that the reasoning is sound if we accept its premise. I'm not so convinced. I think that this "bad example" provides a pretty big clue as to what's wrong with the objections to EDT. (After all, does anyone think it would have been irrational to quit smoking, based only on the correlation between smoking and cancer, before randomized controlled trials were conducted?) I'll explain what I mean with the simplest version of this thought experiment I could come up with.
Suppose that I'm a farmer, hoping it will rain today, to water my crops. I know that the probability of it having rained today, given that my lawn is wet, is higher than otherwise. And I know that my lawn will be wet, if I turn my sprinklers on. Of course, though it waters my lawn, running my sprinklers does nothing for my crops out in the field. Making the ground wet doesn't cause rain; it's the other way around. But if I'm an EDT agent, I know nothing of causation, and base my decisions only on conditional probability. According to the standard criticism of EDT, I stupidly turn my sprinklers on, as if that would make it rain.
Here is where I think the criticism of EDT fails: how do I know, in the first place, that the ground being wet doesn't cause it to rain? One obvious answer is that I've tried it, and observed that the probability of it raining on a given day, given that I turned my sprinklers on, isn't any higher than the prior probability. But if I know that, then, as an evidential decision theorist, I have no reason to turn the sprinklers on. However, if all I know about the world I inhabit are the two facts: (1) the probability of rain is higher, given that the ground is wet, and (2) the probability of the ground being wet is higher, given that I turn the sprinklers on - then turning the sprinklers on really is the rational thing to do, if I want it to rain.
This is clearer when written symbolically. If O is the desired Outcome (rain), E is the Evidence (wet ground), and A is the Action (turning on sprinklers), then we have:
- P(O|E) > P(O), and
- P(E|A) > P(E)
(In this case, A implies E, meaning P(E|A) = 1.)
It's still possible that P(O|A) = P(O). Or even that P(O|A) < P(O). (For example, the prior probability of rolling a 4 with a fair die is 1/6, whereas the probability of rolling a 4, given that you rolled an even number, is 1/3. So P(4|even) > P(4). And you'll definitely roll an even number if you roll a 2, since 2 is even. So P(even|2) > P(even). But the probability of rolling a 4, given that you roll a 2, is zero, since a 2 isn't a 4. So P(4|2) < P(4), even though P(4|even) > P(4) and P(even|2) > P(even).) But in this problem, I don't know P(O|A) directly. The best I can do is guess that, since A implies E, P(O|A) = P(O|E) > P(O). So I do A, to make O more likely. But if I happened to know that P(O|A) = P(O), then I'd have no reason to do A.
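For anyone who wants to double-check that arithmetic, here's a quick enumeration in Python (the `prob` helper is just something I made up for this post; it conditions by filtering the outcome set):

```python
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]  # a fair die: every outcome equally likely

def prob(event, given=None):
    """P(event | given), computed by filtering the equiprobable outcome set."""
    pool = [o for o in outcomes if given is None or given(o)]
    return Fraction(sum(1 for o in pool if event(o)), len(pool))

four = lambda o: o == 4
even = lambda o: o % 2 == 0
two = lambda o: o == 2

print(prob(four))              # P(4)      = 1/6
print(prob(four, given=even))  # P(4|even) = 1/3, which is > 1/6
print(prob(even, given=two))   # P(even|2) = 1,   which is > 1/2
print(prob(four, given=two))   # P(4|2)    = 0,   which is < 1/6
```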
Of course, "P(O|A) = P(O)" is basically what we mean, when we say that the ground being wet doesn't cause it to rain. We know that making the ground wet (by means other than rain) doesn't make rain any more likely, either because we've observed this directly, or because we can infer it from our model of the world built up from countless observations. The reason that EDT seems to give the wrong answer to this problem is because we know extra facts about the world, that we haven't stipulated in the problem. But EDT gives the correct answer to the problem as stated. It does the best it can do (the best anyone could do) with limited information.
This is the lesson we should take from Smoker's lesion. Yes, from the perspective of people 60 years ago, it's possible that smoking doesn't cause cancer, and that instead a third factor predisposes people to both smoking and cancer. But it's also possible that there's a third factor which does the opposite: making people smoke and protecting them from cancer - but smokers are still more likely to get cancer, because smoking is so bad that it outweighs this protective effect. In the absence of evidence one way or the other, the prudent choice is to not smoke.
But if we accept the premise of Smoker's lesion: that smokers are more likely to get cancer only because people genetically predisposed to like smoking are also genetically predisposed to develop cancer - then EDT still gives us the right answer. Just as with the Sprinkler problem above, we know that P(O|E) > P(O), and P(E|A) > P(E), where O is the desired outcome of avoiding cancer, E is the evidence of not smoking, and A is the action of deciding to not smoke for the purpose of avoiding cancer. But we also just happen to know, by hypothesis, that P(O|A) = P(O). Recognizing A and E as distinct is key, because one of the implications of the premise is that people who stop smoking, despite enjoying smoking, fare just as badly as life-long smokers. So the reason that you choose to not smoke matters. If you choose to not smoke because you can't stand tobacco, it's good news. But if you choose to not smoke to avoid cancer, it's neutral news. The bottom line is that you, as an evidential decision theorist, should not take cancer into account when deciding whether or not to smoke, because the good news that you decided to not smoke would be cancelled out by the fact that you did it to avoid cancer.
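To see how the premise forces this, here's a toy simulation. Every number in it is invented; the only assumption doing any work is the premise itself, namely that cancer depends on the gene alone, and the gene also determines whether you enjoy smoking:

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Toy model of the Smoker's lesion premise: cancer depends only on the gene,
    never on smoking, and the gene also makes smoking enjoyable."""
    groups = {"smokes": [], "abstains (dislikes it)": [], "quits (to avoid cancer)": []}
    for _ in range(n):
        gene = random.random() < 0.2
        enjoys = random.random() < (0.9 if gene else 0.2)
        cancer = random.random() < (0.6 if gene else 0.1)
        if not enjoys:
            groups["abstains (dislikes it)"].append(cancer)
        elif random.random() < 0.3:  # some who enjoy smoking quit anyway, for their health
            groups["quits (to avoid cancer)"].append(cancer)
        else:
            groups["smokes"].append(cancer)
    for name, members in groups.items():
        print(f"{name:25s} cancer rate: {sum(members) / len(members):.2f}")

simulate()
# The quit-to-avoid-cancer group comes out essentially the same as the smokers,
# while the dislikes-it group comes out far lower: the good news was about taste,
# not about the decision made for the sake of avoiding cancer.
```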
If this is starting to sound like the tickle defense, rest assured that there is no way to use this kind of reasoning to justify defecting on the Prisoner's dilemma or two-boxing on Newcomb's problem. The reason is that, if you're playing against a copy of yourself in Prisoner's dilemma, it doesn't matter why you decide to do what you do: whatever your reasons are, your duplicate will do the same thing for the same reasons. Similarly, in Newcomb's problem, you only need to know that the predictor is accurate in order for one-boxing to be good news. The predictor might have blind spots that you could exploit in order to get all the money. But unless you know about those exceptions, your best bet is to one-box. It's only in special cases that your motivation for making a decision can cancel out the auspiciousness of the decision.
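For what it's worth, here's the one-boxing arithmetic, using the usual payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a predictor accuracy I'm just assuming to be 99%:

```python
def edt_value(one_box, accuracy=0.99, big=1_000_000, small=1_000):
    """Expected payoff conditional on the choice, knowing only the predictor's accuracy."""
    p_big_box_full = accuracy if one_box else 1 - accuracy  # the predictor most likely saw it coming
    return p_big_box_full * big + (0 if one_box else small)

print(edt_value(one_box=True))   # about 990,000
print(edt_value(one_box=False))  # about 11,000
# One-boxing is the better news at any accuracy above roughly 50.05%.
```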
The other objection to EDT is that it's temporally inconsistent. But I don't see why that can't be handled with precommitments, because EDT isn't irreparably broken like CDT is. A CDT agent will one-box on Newcomb's problem only if it has a chance to precommit before the predictor makes its prediction (which could be before the agent is even created). But an EDT agent one-boxes automatically, and pays in Counterfactual Mugging as long as it has a chance to precommit before it finds out whether the coin came up heads. One of the first things we should expect a self-modifying EDT agent to do is to make a blanket precommitment for all such problems. That is, it self-modifies in such a way that the modification itself is "good news", regardless of whether the decisions it's precommitting to will be good or bad news when they are carried out. This self-modification might be equivalent to designing something like an updateless decision theory agent. The upshot, if you're a self-modifying AI designer, is that your AI can do this by itself, along with its other recursive self-improvements.
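Counterfactual Mugging works the same way if you evaluate the policy at the moment of precommitment, before the coin is flipped. With the illustrative stakes usually quoted (pay $100 on tails; receive $10,000 on heads if you're the kind of agent who would have paid):

```python
def policy_value(pays_when_asked, p_heads=0.5, reward=10_000, cost=100):
    """Expected value of a policy, evaluated before the coin is flipped --
    the point at which an EDT agent would precommit."""
    heads_payoff = reward if pays_when_asked else 0  # Omega pays only if it predicts you'd pay
    tails_payoff = -cost if pays_when_asked else 0
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

print(policy_value(True))   # 4950.0 -- committing to pay is good news ahead of time
print(policy_value(False))  # 0.0    -- refusing only looks good after you've seen tails
```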
Ultimately, I think that causation is just a convenient shorthand that we use. In practice, we infer causal relations by observing conditional probabilities, then use those causal relations to inform our decisions. It's a great heuristic, but we shouldn't lose sight of what we're actually trying to do, which is to choose the option conditional on which a good outcome is most probable.
I've found that, in practice, most versions of EDT are underspecified, and people use their intuitions to fill the gaps in one direction or the other.