When we form hypotheticals, they must use entirely consistent and clear language, and they must avoid hiding complicated operations behind simple assumptions. In particular, with respect to decision theory, hypotheticals must employ a clear and consistent concept of free will, and they must make all information available to the theorizer also available to the decider within the question. Failing at either can make a hypothetical meaningless or self-contradictory once it is properly understood.
Newcomb's problem and the Smoking Lesion problem (TSL) fail on both counts. I will argue that hidden assumptions in each imply internally contradictory concepts of free will, and thus that both hypotheticals are incoherent and irrelevant when used to contradict decision theories.
And I'll do it without math or programming! Metatheory is fun.
Newcomb's problem, insofar as it is used as a refutation of Causal Decision Theory (CDT), relies on convenient ignorance and a paradoxical concept of free will, though it takes some thinking to see why, because naive free will is such an innate part of human thought. In order for Newcomb's problem to work, there must exist some thing or set of things ("A") that very closely (even perfectly) links "Omega predicts Y-boxing" with "Decider takes Y boxes." If there is no A, Omega cannot predict your behaviour. The existence of A is a fact necessary to the hypothetical, and the decider should be aware of it, even if he knows nothing about how A generates a prediction.
Newcomb's problem assumes two contradictory things about A. It assumes that, for the purposes of CDT, A is irrelevant and completely separate from your actual decision process: you have some kind of free will such that you can decide to two-box without that decision having been reflected in A. It also assumes that, for purposes of the actual outcome, A is quite relevant: if you decide to two-box, your decision will have been reflected in A. This contradiction is the reason the problem seems complicated. If CDT were allowed to consider A, as it should be, it would realize:
(B), "I might not understand how it works, but my decision is somehow bound to the prediction in such a way that however I decide will have been predicted. Therefore, for all intents and purposes, even though my decision feels free, it is not, and, insofar as it feels free, deciding to one-box will cause that box to be filled, even if I can't begin to comprehend *how*."
"I should one-box" follows rather clearly from this. If B is false, and your decision is *not* bound to the prediction, then you should two-box. To let the theorizer know that B is true, but to forbid the decider from using such knowledge is what makes Newcomb's being a "problem." Newcomb's assumes that CDT operates with naive free will. It also assumes that naive free will is false and that Omega accurately employs purely deterministic free will. It is this paradox of simultaneously assuming naive free will *and* deterministic will that makes Necomb's problem a problem. CDT does not appear to be bound to assume naive free will, and therefore it seems capable of treating your "free" decision as causal, which it seems that it functionally must be.
The Smoking Lesion problem relies on the same trick in reverse. There is, by necessary assumption, some C such that C causes smoking and cancer, but smoking does not actually cause cancer. The decider is utterly forbidden from thinking about what C is and how C might influence the decision under consideration. The *decision to smoke* very, very strongly predicts *being a smoker*.[1] Indeed, given that there is no question of being able to afford or find cigarettes, the outcome of the decision to smoke is precisely what C predicts. The desire to smoke is essential to the decision to smoke: under the hypothetical, if there were no desire, the decider would always decide not to smoke; if there is a desire and a low enough risk of cancer, the decider will always decide to smoke. Thus, the desire appears to correspond closely (perhaps perfectly) with C, but Evidential Decision Theory (EDT) is arbitrarily prevented from taking this into account. This is despite the fact that C is so well understood that we can say with absolute certainty that the correlation between smoking and cancer is completely explained by it.
The problem forces EDT to assume that C operates deterministically on the decision, and that the decision is naively free. It requires that the decision to smoke both is and is not correlated with the desire to smoke - if the correlation were admitted, EDT would consider it and significantly adjust the odds of getting cancer conditional on deciding to smoke, *given* that there is a desire to smoke. Forcing the decider to assume a paradox proves nothing, so TSL fails to refute an evidential decision theory that actually uses all of the evidence given to it.
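To make the screening-off concrete, here is a toy Bayes calculation; every number in it is invented purely for illustration. The point is that once the desire is conditioned on, deciding to smoke carries no further news about C, and therefore none about cancer:

```python
# Toy model: C causes both the desire to smoke and cancer; the desire
# drives the decision. All probabilities are invented for illustration.

P_C = 0.2                     # prior probability of having C
P_desire_given_C = 0.95       # C almost always produces the desire
P_desire_given_notC = 0.05
P_cancer_given_C = 0.80       # C, not smoking, causes cancer
P_cancer_given_notC = 0.01

# P(C | desire), by Bayes' rule.
p_desire = P_desire_given_C * P_C + P_desire_given_notC * (1 - P_C)
p_C_given_desire = P_desire_given_C * P_C / p_desire

# Having observed the desire, the decision to smoke adds no further
# evidence about C, so the cancer risk conditional on smoking *given*
# the desire is just the risk implied by P(C | desire).
p_cancer_given_desire = (p_C_given_desire * P_cancer_given_C
                         + (1 - p_C_given_desire) * P_cancer_given_notC)

print(f"P(C | desire)      = {p_C_given_desire:.3f}")      # ~0.826
print(f"P(cancer | desire) = {p_cancer_given_desire:.3f}")  # ~0.663
```

An EDT that conditions on the desire it already knows it has sees the same cancer risk whether or not it decides to smoke, so smoking (if desired) becomes the obvious choice.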
Both TSL and Newcomb's exploit our intuitive understanding of free will to assume paradoxes, then use these unrecognized paradoxes to undermine a decision strategy. As these problems force the decider to secretly assume a paradox, it is little surprise that they generate convoluted and problematic outputs. This suggests that the problem lies not in these decision theories, but in the challenge of fully and accurately translating our language into our decision maker's decision theory.
Newcomb's, TSL, Counterfactual Mugging, and the Absent-Minded Driver all have another larger, simpler problem, but it is practical rather than conceptual, so I'll address it in a subsequent post.
[1] In the TSL version EY used in the link I provided, C is assumed to be "a gene that causes a taste for cigarettes." Since the decider already *knows* they have a taste for cigarettes, EDT should take this into account. If it does, it should assume that C is present (or present with high probability), and then the decision to smoke is obvious. Thus, the hypothetical I'm addressing is a more general version of TSL in which C is not specified; only the existence of an acausal correlation is assumed.
Newcomb's problem can be modelled using the correlated decision principle, viewing yourself and Omega's simulation of you as the correlated decision-makers.
This modification may even make the problem amenable to CDT, but I'm not sure about that.
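A minimal sketch of that framing, assuming (my simplification) a perfectly accurate simulation: the prediction and the choice are outputs of one and the same procedure, so they cannot diverge.

```python
# Correlated-decision framing of Newcomb's problem (toy encoding).
# You and Omega's simulation of you run the same decision procedure,
# so the prediction and the actual choice are always identical.

def decision_procedure() -> str:
    # Edit this body to return "two-box" and *both* the prediction and
    # the choice change together - that is the correlation.
    return "one-box"

prediction = decision_procedure()   # Omega runs the simulation
choice = decision_procedure()       # you decide, later

payoff = 1_000_000 if prediction == "one-box" else 0  # opaque box
if choice == "two-box":
    payoff += 1_000                                   # transparent box

print(choice, payoff)   # one-box -> 1,000,000; two-box -> 1,000
```

There is no input on which this procedure one-boxes while its copy two-boxes, which is exactly the binding that B describes.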