You should always cooperate with an identical copy of yourself in the prisoner's dilemma. This is obvious, because you and the copy will reach the same decision.

That justification implicitly assumes that you and your copy are (somewhat) antagonistic: that you have opposite aims. But the conclusion doesn't require that at all. Suppose that you and your copy were instead trying to ensure that one of you got maximal reward (it doesn't matter which). Then you should still jointly cooperate, because (C,C) is possible while (C,D) and (D,C) are not (I'm ignoring randomising strategies for the moment).
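To make that concrete, here's a minimal sketch in Python (the payoff numbers are illustrative, not taken from anything above): when both copies are forced onto the diagonal, cooperating is the best reachable choice whether you're maximising your own reward or the reward of whichever of you does best.

```python
# Illustrative PD payoffs (T > R > P > S); entries are (your reward, copy's reward).
PD = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # you cooperate, copy defects
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # mutual defection
}

# An identical copy makes the same choice, so only the diagonal is reachable.
reachable = {a: PD[(a, a)] for a in ("C", "D")}

# Selfish goal: maximise your own reward over reachable outcomes.
best_selfish = max(reachable, key=lambda a: reachable[a][0])

# Joint goal from the post: maximise the reward that either one of you gets.
best_joint = max(reachable, key=lambda a: max(reachable[a]))

print(best_selfish, best_joint)  # C C
```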

Now look at the Newcomb problem. Your decision enters twice: once when you decide how many boxes to take, and once when Omega simulates or estimates you to decide how much money to put in box B. You would dearly like your two "copies" (one of which may just be an estimate) to be out of sync - for the estimate to 1-box while the real you two-boxes. But without any way of distinguishing between the two, you're stuck taking the same action, and the better of the matched options is (1-box, 1-box). Or, seeing it another way, (C,C).
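A quick sketch of the same constraint, assuming the usual $1,000,000 / $1,000 Newcomb amounts (the post doesn't give numbers): once you and the estimate must act alike, one-boxing is the best reachable option.

```python
# Your reward, indexed by (what the estimate does, what the real you does).
# Dollar amounts are the standard illustrative ones, not specified in the post.
payoff = {
    ("1-box", "1-box"): 1_000_000,   # box B is full, you take only B
    ("1-box", "2-box"): 1_001_000,   # the mismatch you'd love - but unreachable
    ("2-box", "1-box"): 0,           # also unreachable
    ("2-box", "2-box"): 1_000,       # box B is empty, you take both
}

# No way to tell yourself apart from the estimate, so only the diagonal is reachable.
reachable = {a: payoff[(a, a)] for a in ("1-box", "2-box")}
print(max(reachable, key=reachable.get))  # 1-box
```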

This also makes the Newcomb problem into an anti-coordination game, where you and your copy/estimate try to pick different options. But, since this is not possible, you have to stick to the diagonal. This is why the Newcomb problem can be seen both as an anti-coordination game and a prisoner's dilemma - the differences only occur in the off-diagonal terms that can't be reached.
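A small check of that claim, using the same illustrative payoffs as above: the table looks like a PD (two-boxing dominates cell by cell) and like an anti-coordination game (your single best cell is a mismatch), but both of those features live off the diagonal, so the constrained decision is the same either way.

```python
# key = (estimate's action, your action); value = your reward (illustrative amounts).
payoff = {
    ("1-box", "1-box"): 1_000_000, ("1-box", "2-box"): 1_001_000,
    ("2-box", "1-box"): 0,         ("2-box", "2-box"): 1_000,
}

# PD-like reading: 2-boxing (the "defect" analogue) dominates cell by cell.
pd_like = all(payoff[(e, "2-box")] > payoff[(e, "1-box")] for e in ("1-box", "2-box"))

# Anti-coordination reading: your single best cell has the two of you mismatched.
best = max(payoff, key=payoff.get)
anti_coordination_like = best[0] != best[1]

print(pd_like, anti_coordination_like, best)  # True True ('1-box', '2-box')
```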

6 comments

an anti-coordination game, where you and your copy/estimate try to pick different options

It feels to me as if calling this an anti-coordination game makes good sense when Omega is actually running a simulated copy of you but not when Omega is predicting by radically different means.

What is the relevant difference between the two situations?

That in one case there actually are two agents, and in the other there aren't.

I'm not sure how much difference this really makes to whether it's helpful to call the Newcomb scenario an anti-coordination game. That's partly because I'm not sure whether calling it that does anything much to help or hinder decision-making in any case :-).

There are certainly other situations in which a parallel difference seems really important.

Suppose you want to figure out whether I will enjoy having my legs eaten off by piranhas. (Spoiler: No.) You can do this in various ways. One way is to build a perfectly faithful model of my brain, body and environment, simulate the process really accurately, and observe the screams and writhings and so forth. Another is to think "hmm, that would involve having the flesh ripped from his bones, and that sort of thing is usually excruciatingly painful, and most people mostly don't like excruciating pain". I would feel very differently about these two decision processes that you might employ.

Ah yes, if copies suffer during the decision process, that is a relevant distinction. I will avoid dunking your copies into piranhas from this point on! ^_^

My main point, though, is that the decisions of sensible decision theories will be similar on the two problems - we expect defectors to two-box.

This is interesting because, by Rice's theorem, it's impossible to have a general procedure for semantic inspection, even if "general" is restricted to a simple recursively enumerable set (on the other side, structural inspection, e.g. how many states a machine has, is trivial).
This implies that "pain" is a structural property of the human brain rather than a semantic property. I wonder: is there a property of my mind that is inaccessible to inspection by a super-agent if not by emulation? Or are all my thoughts accessible because each is reflected in the structural changes of my brain's chemical architecture?

This is obvious, because you and the copy will reach the same decision.

Only if you reject the idea of free will and there are no environmental microvariations.

That justification implicitly assumes that you and your copy are (somewhat) antagonistic

No, it doesn't need to be antagonistic, just independent. Also, it's part of the PD setup, not assumed by the twins justification.

Your main point is fair, though - this version of the copy-of-you PD and Newcomb's problem (if its predictor is equivalent to a copy of you) are similar.