I recently had a conversation with a staunch defender of EDT who maintained that EDT gives the right answer in the Smoker’s Lesion and even in Evidential Blackmail. I came up with the following, even more counterintuitive, thought experiment:
--
By doing research, you've found out that there is either
(A) only one universe or
(B) a multiverse.
You also found out that which of these holds has a slight influence (via different physics) on how your brain works. If (A) holds, you will likely decide to give away all your money to random strangers on the street; if there is a multiverse, you will most likely not do that. Of course, causality flows in one direction only, i.e. your decision does not determine how many universes there are.
Suppose you have a very strong preference for (A) (e.g. because a multiverse would contain infinite suffering) so that it is more important to you than your money.
Do you give away all your money or not?
--
This is structurally equivalent to the Smoker’s Lesion, but what causes your action is the cosmological theory, not a lesion or a gene. CDT, TDT, and UDT would not give away the money because the decision has no causal (or acausal) influence on the number of universes. EDT would reason that giving the money away is evidence for (A) and therefore choose to do so.
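To put numbers on the divergence, here is a minimal sketch in Python. All probabilities and utilities are made up for illustration; nothing above fixes them. EDT conditions on the action as evidence about the cosmology, while CDT holds the prior fixed because the action cannot cause it.

```python
# Illustrative EDT vs. CDT comparison for the thought experiment above.
# All probabilities and utilities are invented for the sake of the example.

p_single = 0.5                      # prior P(A): only one universe
p_give = {True: 0.9, False: 0.1}    # P(give money away | single universe?)

u_single, u_multi = 100.0, 0.0      # (A) is strongly preferred to (B)
u_keep = 1.0                        # keeping the money matters far less

def posterior_single(give: bool) -> float:
    """P(single universe | action), treating the action as evidence (Bayes)."""
    like_s = p_give[True] if give else 1 - p_give[True]
    like_m = p_give[False] if give else 1 - p_give[False]
    return like_s * p_single / (like_s * p_single + like_m * (1 - p_single))

def edt_value(give: bool) -> float:
    """EDT scores an action by conditioning on it."""
    p = posterior_single(give)
    return p * u_single + (1 - p) * u_multi + (0.0 if give else u_keep)

def cdt_value(give: bool) -> float:
    """CDT holds P(A) fixed: the action cannot cause the cosmology."""
    return p_single * u_single + (1 - p_single) * u_multi + (0.0 if give else u_keep)

print(edt_value(True), edt_value(False))   # 90.0 vs. 11.0 -> EDT gives the money away
print(cdt_value(True), cdt_value(False))   # 50.0 vs. 51.0 -> CDT keeps it
```

With these (arbitrary) numbers, the EDT agent gives away the money purely to make the good cosmological news more likely, which is the point of the next paragraph.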
Apart from the usual “managing the news” point, this highlights another flaw in EDT: its presumptuousness. The EDT agent thinks that her decision spawns or destroys the entire multiverse, or at least reasons as if it did. In other words, EDT acts as if it affects astronomical stakes with a single thought.
I find this highly counterintuitive.
What makes it even worse is that this is not even a contrived thought experiment. Our brains are in fact shaped by physics, and it is plausible that different physical theories or constants both make an agent decide differently and make the world better or worse according to one’s values. So, EDT agents might actually reason in this way in the real world.
I have not seen a valid derivation of smoking from Eliezer's TDT, so I am not saying that TDT is inconsistent. I suspect that TDT actually implies not smoking. The point of the generalization is that any decision theory that answers the question will say either [one-box, don't smoke] or [two-box, smoke]. Causal decision theory does answer the question: it says that you aren't responsible for C or D, and you prefer A to B, so do A. And non-causal decision theories in general will say do B, because you prefer B & D to A & C; I think this is probably true of TDT as well.
I agree with you that the reason some people want different answers is their ideas about the causality there. When we previously had these discussions, people constantly said things like, "if the lesion has a 100% correlation, then you can't make a choice anyway," which is an intuition about the causality. But obviously that is not true, except in the sense that if Omega has a 100% correlation and has already made its prediction, you can no longer make a choice in Newcomb either. In fact, I think that any time you add something relevant to the decision to the general case I presented, you can construct something parallel for the Smoking Lesion and for Newcomb.
What is included in X in the Newcomb case might depend on the particulars: if someone is absolutely determined to one-box no matter what the circumstances, then the state of his brain alone might be X. And this is really no different from the lesion, since if we are to imagine the lesion case working in real life, we need to include the relationship between the physical lesion and the rest of the brain. So the state of the brain might be sufficient for both, at least in some cases.
"The way in which X causes C or D explicitly involves my choosing A or B, whereas in the smoking lesion case neither of those is anything like true." It does matter how the lesion gets correlated with smoking, just as it matters how Newcomb's prediction gets correlated with one or two-boxing. This is why I prefer to discuss the case of 100% correlation first: because in this case, they have to be correlated in the right way in both cases.
Suppose there is some correlation but it is not 100%. There will be parallel cases for Smoking Lesion and for Newcomb where the correlation is not the right kind (a numerical sketch of the screening-off follows the two cases):
Smoking Lesion. Suppose the lesion is correlated with smoking only by causing you to have a desire for smoking. Then someone can say, "I have a strong desire to smoke. That means I probably have the lesion. But if I smoke, it doesn't mean my desire is any stronger, since I already have that desire; so I might as well smoke." Note that even evidential decision theory recommends smoking here, at least given that you can directly take note of the condition of your desire; if you can validly argue, "if I actually smoke, that suggests my desire was a bit stronger than I realized, so I will also be more likely to have the lesion," that may change the situation (I'll discuss this below).
Newcomb. Suppose Omega's prediction is correlated with one-boxing only by taking note of previous statements a person has made and determining whether most of them support one-boxing or two-boxing. Then someone can say, "Most of my statements in the past have supported one-boxing. So the million is probably in the box. So I might as well take both boxes. I will probably still get the million, since this will not affect the past statements that Omega is judging from." Even evidential decision theory will recommend this course of action, and I think even Eliezer would agree that if we know for a fact that Omega is judging in this way, and we directly know the condition of our past statements, two-boxing is appropriate. But again, it is different if one can validly argue, "if I take two boxes now, that will likely mean my promotion of one-boxing in the past wasn't quite as strong as I thought, so the million will be less likely to be there." This kind of uncertainty may again change the situation in the same way as uncertainty about my desire above.
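To make the "wrong kind of correlation" precise, here is a small sketch with invented numbers: once the mediating variable (the desire, or the past statements) is directly observed, the act itself carries no further evidence about the hidden cause.

```python
# Screening-off sketch for the "wrong kind" of correlation, with invented numbers.
# Assumed structure: lesion -> desire -> smoking (the lesion acts only via desire).
# For the Newcomb variant, read: disposition -> past statements -> Omega's prediction.

p_lesion = 0.3
p_desire = {True: 0.9, False: 0.2}   # P(desire | lesion present?)
p_smoke  = {True: 0.8, False: 0.1}   # P(smoke | desire present?) -- not the lesion

def joint(lesion: bool, desire: bool, smoke: bool) -> float:
    pl = p_lesion if lesion else 1 - p_lesion
    pd = p_desire[lesion] if desire else 1 - p_desire[lesion]
    ps = p_smoke[desire] if smoke else 1 - p_smoke[desire]
    return pl * pd * ps

def p_lesion_given(desire: bool, smoke: bool) -> float:
    num = joint(True, desire, smoke)
    return num / (num + joint(False, desire, smoke))

# Once the desire is observed, the act of smoking carries no further news:
print(p_lesion_given(desire=True, smoke=True))    # ~0.659
print(p_lesion_given(desire=True, smoke=False))   # ~0.659, identical
```

The smoking likelihood cancels out of the posterior, which is why even EDT is happy to smoke (or two-box) in these versions.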
Suppose the correlation is not 100%, but we have one of the conditional situations mentioned above: where if I do A, I actually do increase my expectation of C, and if I do B, I actually do increase my expectation of D. This is the right kind of correlation. And in this case, evidential decision theory recommends doing B in both cases, and I think the reasons are parallel for Newcomb and for Smoking lesion. [Edit: obviously if the correlation is not 100% it will depend on the particular correlation and on concrete utilities; I ignored this for simplicity.]
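Concretely, with hypothetical numbers (as the bracketed edit says, the verdict depends on the actual correlation and the utilities), the comparison is just conditional expected utility:

```python
# Conditional expected utility with an imperfect but "right kind" of correlation.
# A = smoke / two-box, B = don't smoke / one-box; C = bad state, D = good state.
# Numbers are invented; the answer depends on them, as noted in the text.

p_D = {"A": 0.1, "B": 0.9}               # doing B genuinely raises P(good state D)

u = {("A", "C"): 1,   ("A", "D"): 101,   # A carries a small sure bonus
     ("B", "C"): 0,   ("B", "D"): 100}   # (the $1000 / the pleasure of smoking)

def edt_eu(act: str) -> float:
    return (1 - p_D[act]) * u[(act, "C")] + p_D[act] * u[(act, "D")]

print(edt_eu("A"))   # 0.9 * 1 + 0.1 * 101 = 11.0
print(edt_eu("B"))   # 0.1 * 0 + 0.9 * 100 = 90.0 -> EDT does B
```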
But let's consider another case where the correlation isn't the right kind. Again, the lesion causes smoking by causing desire. And I am uncertain of exactly how strong my desire is, but I know I have some desire. Then it would appear at first that evidential decision theory recommends not smoking. But the situation will be changed if I can validly argue, "I am going to decide using some rigid decision theory that always recommends the same course of action in this situation. And this decision theory recommends smoking. This will in no way imply that my desire was any stronger, since it wasn't the strength of the desire that led to it, but this rigid decision theory." In that case, choosing to smoke will not increase your expectation that you have the lesion. And therefore even evidential decision theory will recommend smoking here.
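The same kind of sketch works here, again with invented numbers. Note that it builds in the assumption that adopting the rigid theory is independent of the desire, which is exactly what the next paragraphs call into question.

```python
# Sketch: a rigid decision procedure screens the act off from the lesion.
# Invented numbers; key assumption: adopting the rigid theory is independent
# of the desire (the assumption challenged in the paragraphs below).

from itertools import product

p_lesion = 0.3
p_desire = {True: 0.9, False: 0.2}    # P(desire | lesion present?)
p_smoke  = {True: 0.8, False: 0.1}    # P(smoke | desire), when choosing "freely"

def joint(lesion: bool, desire: bool, smoke: bool, rigid: bool) -> float:
    pl = p_lesion if lesion else 1 - p_lesion
    pd = p_desire[lesion] if desire else 1 - p_desire[lesion]
    if rigid:                         # the theory dictates smoking, whatever the desire
        ps = 1.0 if smoke else 0.0
    else:
        ps = p_smoke[desire] if smoke else 1 - p_smoke[desire]
    return pl * pd * ps

def p_lesion_given_smoke(rigid: bool) -> float:
    num = sum(joint(True, d, True, rigid) for d in (True, False))
    den = sum(joint(l, d, True, rigid) for l, d in product((True, False), repeat=2))
    return num / den

print(p_lesion_given_smoke(rigid=True))    # 0.3: smoking is no news at all
print(p_lesion_given_smoke(rigid=False))   # ~0.566: smoking reveals the desire
```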
Now it might seem difficult to construct a parallel for Newcomb here, and this is getting at what appears different to you: if someone says, "I am going to use a decision theory which rigidly recommends two-boxing," that will suggest e.g. that even his previous statements promoting one-boxing were not as strong as they might have been, and therefore he should increase his expectation of not getting the million. In other words, we have the "right" kind of correlation almost by definition, because "the way in which X causes C or D explicitly involves my choosing A or B."
But the same thing can actually happen in the smoking case. If we say, "why are you using a decision theory which rigidly recommends smoking?" the answer might well be the (somewhat uncertain) strength of your desire to smoke. And to the degree that it is, whether you use this decision theory or some other will affect your actual expectation of having the lesion. And in this case, you should choose to use a decision theory which recommends not smoking. If the lesion is allowed to affect how I make my choice -- which is absolutely necessary in the 100% case, and which is possible even in lower correlation cases -- then the parallel between the Smoking Lesion and Newcomb is restored.
How confident are you that you understand TDT better than Eliezer does? Because he seems to think that TDT implies smoking.