My paper with my Ph.D. advisor Vince Conitzer titled "Extracting Money from Causal Decision Theorists" has been formally published (Open Access) in The Philosophical Quarterly. Probably many of you have seen either earlier drafts of this paper or similar arguments that others have independently given on this forum (e.g., Stuart Armstrong posted about an almost identical scenario; Abram Demski's post on Dutch-Booking CDT also has some similar ideas) and elsewhere (e.g., Spencer (forthcoming) and Ahmed (unpublished) both make arguments that resemble some points from our paper).
Our paper focuses on the following simple scenario which can be used to, you guessed it, extract money from causal decision theorists:
Adversarial Offer: Two boxes, B1 and B2, are on offer. A (risk-neutral) buyer may purchase one or none of the boxes but not both. Each of the two boxes costs $1. Yesterday, the seller put $3 in each box that she predicted the buyer not to acquire. Both the seller and the buyer believe the seller's prediction to be accurate with probability 0.75.
At least one of the two boxes contains money, since the buyer acquires at most one box. Therefore, the average box contains at least $1.50 in (unconditional) expectation. In particular, at least one of the two boxes must contain at least $1.50 in expectation. Since CDT doesn't condition on choosing box i when assigning an expected utility to choosing box i, the CDT expected utility of at least one of the two boxes is at least $1.50, which exceeds the $1 price. Thus, CDT agents buy one of the boxes, to the seller's delight.
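To make the arithmetic explicit, here is a minimal sketch. The $1 price and $3 payoff are the figures from the paper's version of the offer; the prediction probabilities p1 and p2 are hypothetical examples.

```python
# CDT evaluates buying box i without conditioning on that very choice:
# the seller's prediction is already fixed, so only the prior probability
# p_i that the seller predicted "buyer takes box i" matters.
PRICE = 1.00   # cost of a box (from the paper's version of the offer)
PAYOFF = 3.00  # amount in each box the seller predicted would NOT be acquired

def cdt_expected_utility(p_predicted_taken):
    """Net CDT expected utility of buying a box that the seller
    predicted the buyer would take with probability p_predicted_taken."""
    return (1 - p_predicted_taken) * PAYOFF - PRICE

# The seller predicts at most one box as taken, so p1 + p2 <= 1 and
# min(p1, p2) <= 0.5; hence the better box nets at least 0.5*3 - 1 = $0.50.
p1, p2 = 0.6, 0.4  # hypothetical prediction probabilities with p1 + p2 <= 1
best = max(cdt_expected_utility(p1), cdt_expected_utility(p2))
assert best >= 0.5 * PAYOFF - PRICE  # gross EV of the better box is >= $1.50
```

The key point the code mirrors: whatever the seller's prediction probabilities are, they must sum to at most 1 across the two boxes, so from CDT's causally-independent perspective one box always looks like a good deal.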
Most people on this forum are probably already convinced that (orthodox, two-boxing) CDT should be rejected. But I think the Adversarial Offer is one of the more convincing "counterexamples" to CDT. So perhaps the scenario is worth posing to your pro-CDT friends, and the paper worth sending to those of your pro-CDT friends who put stock in academic peer review. (Relaying their responses to me would be greatly appreciated – I am quite curious what different causal decision theorists think about this scenario.)
Yes, you are right. Sorry.
Okay, it probably isn't a contradiction, because the situation "Buyer writes his decision, and it is common knowledge that an hour later Seller sneaks a peek at this decision (with probability 0.75) or at a random false decision (0.25). After that, Seller places money according to the decision she saw." seems similar enough and can probably be formalized into a model of the original scenario.
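For what it's worth, that peek model can be simulated directly. A minimal Monte Carlo sketch, with two assumed details not pinned down above: the "false decision" is drawn uniformly from the options the buyer did not write down, and the boxes use a $1 price and $3 payoff as in the Adversarial Offer.

```python
import random

def simulate(trials=100_000, peek_prob=0.75, seed=0):
    """Average net payoff for a buyer who always buys box1, while the
    seller peeks at the written decision with probability peek_prob."""
    rng = random.Random(seed)
    options = ["box1", "box2", "none"]
    total = 0.0
    for _ in range(trials):
        decision = "box1"  # the buyer's written decision
        if rng.random() < peek_prob:
            seen = decision  # seller peeks at the real decision
        else:
            # assumed: the false decision is uniform over the other options
            seen = rng.choice([o for o in options if o != decision])
        # the seller puts $3 only in boxes she saw the buyer not acquire
        payoff = 3.0 if seen != decision else 0.0
        total += payoff - 1.0  # minus the $1 price of the box
    return total / trials
```

Under these assumptions the buyer's average net payoff comes out around 0.25 * 3 - 1 = -$0.25 per round, i.e. the seller profits, matching the paper's conclusion.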
You might wonder why I am spouting a bunch of wrong things in an unsuccessful attempt to attack your paper. I do that because it looks really suspicious to me, for the following reasons:
So, while my previous attempts at finding an error in your paper failed pathetically, I'm still suspicious, so I'll give it another shot.
When you argue that Buyer should buy one of the boxes, you assume that Buyer knows the probabilities that Seller assigned to Buyer's actions. Are those probabilities also part of common knowledge? How is that possible? If you try to do the same in Newcomb's problem, you get something like "Omniscient predictor predicts that player will pick box A (with probability 1); player knows about that; player is free to pick between A and both boxes", which seems to be a paradox.