# Anthropic Reasoning by CDT in Newcomb's Problem

14 March 2012 12:44AM

At orthonormal's suggestion, I'm taking this out of the comments.

Consider a CDT agent making a decision in Newcomb's problem, in which Omega is known to make its predictions by perfectly simulating the players. Assume further that the agent is capable of anthropic reasoning about simulations. Then, while making its decision, the agent will be uncertain whether it is in the real world or in Omega's simulation, since the world looks the same to it either way.

The resulting problem is structurally similar to the Absentminded Driver problem.1 As in that problem, directly assigning probabilities to the two possibilities is incorrect. The planning-optimal decision, however, is readily available to CDT, and it is, naturally, to one-box.
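A minimal sketch of the planning calculation (my own illustration, not from the original post; the 50-50 credence and the standard payoff numbers are assumptions, and the simulated agent is taken to value the real payout, per the answer to Objection 3 below):

```python
# Illustrative CDT expected-utility calculation under anthropic uncertainty.
# Assumptions (mine): the agent assigns credence 0.5 to being Omega's
# simulation, and the simulated agent values the real agent's payout.

P_SIM = 0.5                      # credence in "I am the simulation"
BIG, SMALL = 1_000_000, 1_000    # standard Newcomb payoffs

def ev(action, x):
    """Causal expected utility of `action`, where x is the (causally fixed)
    content of the opaque box in the branch where the agent is real."""
    if action == "one-box":
        # In the simulation branch, one-boxing causes Omega to fill the real box.
        return P_SIM * BIG + (1 - P_SIM) * x
    # In the simulation branch, two-boxing causes Omega to leave the box empty.
    return P_SIM * 0 + (1 - P_SIM) * (x + SMALL)

# Whatever the agent believes about x, one-boxing wins as long as
# P_SIM * BIG exceeds (1 - P_SIM) * SMALL:
for x in (0, BIG):
    assert ev("one-box", x) > ev("two-box", x)
```

The comparison is independent of the agent's beliefs about x, which is why CDT can reach the planning-optimal answer here without resolving the anthropic probabilities exactly.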

Objection 1. This argument requires that Omega is known to make predictions by simulation, which is not necessarily the case.

Answer: It appears to be sufficient that the agent merely knows that Omega is always correct. If that is the case, then a simulating Omega and a some-other-method Omega are indistinguishable to the agent, so it can freely assume simulation.

[This reasoning is rather shaky, and I'm not sure it is correct in general. However, I hypothesize that whatever method Omega uses, if the CDT agent knows the method, it will one-box. It is only a "magical Omega" that throws CDT off.]

Objection 2. The argument does not work for the problems where Omega is not always correct, but correct with, say, 90% probability.

Answer: Such problems are underspecified, because it is unclear how the probability is calculated. [For example, an Omega that always predicts "two-box" will be correct in 90% of cases if 90% of the agents in the population are two-boxers.] A natural way to complete the problem definition is to stipulate that there is no correlation between the correctness of Omega's predictions and any property of the players. But this is equivalent to Omega first making a perfectly correct prediction and then adding 10% random noise. In this case, the CDT agent is again free to consider Omega a perfect simulator (with added noise), which again leads to one-boxing.
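Under that noise model the expected-utility comparison is straightforward (my own illustrative numbers, not from the original discussion):

```python
# A 90%-accurate Omega modeled as a perfect simulator plus 10% independent
# random noise (the completion of the problem suggested above).

BIG, SMALL = 1_000_000, 1_000
p_correct = 0.9

# If the agent one-boxes, the box is full with probability p_correct.
ev_one_box = p_correct * BIG
# If the agent two-boxes, the box is full only when the noise flips
# the (correct) prediction, i.e. with probability 1 - p_correct.
ev_two_box = SMALL + (1 - p_correct) * BIG

assert ev_one_box > ev_two_box   # 900,000 vs roughly 101,000
```

One-boxing remains optimal for any accuracy above the break-even point, which with these payoffs is just over 50%.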

Objection 3. In order for the CDT agent to one-box, it needs a special "non-self-centered" utility function, which when inside the simulation would value things outside.

Answer: The agent in the simulation has exactly the same experiences as the agent outside, so it is the same self, so it values the Omega-offered utilons the same. This seems to be a general consequence of reasoning about simulations. Of course, it is possible to give the agent a special irrational simulation-fearing utility, but what would be the purpose?

Objection 4. CDT still won't cooperate in the Prisoner's Dilemma against a CDT agent with an orthogonal utility function.

1 Thanks to Will_Newsome for pointing me to this.

Comment author: 14 March 2012 02:17:47AM, 4 points

> Answer: It appears to be sufficient that the agent only knows that Omega is always correct. If this is the case, then a simulating-Omega and some-other-method-Omega are indistinguishable, so the agent can freely assume simulation.

This is false. If there is no conscious simulation running, the agent will know he is not a simulation, and will two-box.

> Objection 2. The argument does not work for the problems where Omega is not always correct, but correct with, say, 90% probability.

As long as the probability is sufficiently high, and the agent is sufficiently uncertain about whether or not he is the simulation, it works fine.

> Answer: The agent in the simulation has exactly the same experiences as the agent outside, so it is the same self, so it values the Omega-offered utilons the same.

If the agent is selfish, and his sense of identity is such that he doesn't consider the being he is a simulation of to be himself, then the simulated self will not care about the non-simulated self.

I admit it does seem a bit weird to have a utility function that depends on something you have no way of knowing. It's not impossible, though.

Comment author: 14 March 2012 06:17:49PM, 1 point

> Objection 3. In order for the CDT agent to one-box, it needs a special "non-self-centered" utility function, which when inside the simulation would value things outside.

To expand on this, it's not inconsistent to have an agent that only cares about how many delicious gummi bears xe personally gets to eat, and definitely not about how many gummi bears any other copy of xerself gets to eat.

Then, even given 50-50 anthropic uncertainty about whether xe is the real-world self or the simulated self, xe will two-box:

50% chance of being in a world where all choices are pointless (I'm presuming that Omega stops the simulation once xe makes xer choice), since the only consequence is whether someone else's opaque box is full of gummi bears or not.

50% chance of being in the real world, in which case the opaque box contains X gummies (where X = 0 or 1,000,000), so two-boxing earns 1000 + X gummies while one-boxing earns only X.

So two-boxing is still worth an extra 500 expected gummies overall, in the CDT formulation.
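The arithmetic above can be made explicit (my own sketch; X stands for whatever the selfish agent believes is in the opaque box):

```python
# Expected gummi bears for a purely selfish agent with 50-50 anthropic
# uncertainty; the simulation is assumed to end before anyone eats anything.

P_SIM = 0.5
SMALL = 1_000

def expected_gummies(action, x):
    """x = gummies in the opaque box (0 or 1_000_000) in the real world."""
    sim_branch = 0  # in the simulation, xer own choices are pointless
    real_branch = x + SMALL if action == "two-box" else x
    return P_SIM * sim_branch + (1 - P_SIM) * real_branch

# Two-boxing is worth an extra 500 expected gummies regardless of x:
for x in (0, 1_000_000):
    assert expected_gummies("two-box", x) - expected_gummies("one-box", x) == 500
```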

Of course, a variant on quantum suicide could still lead to one-boxing here, but there are consistent theories of anthropics that reject quantum suicide. Also, Omega could pick a different color for the opaque box in the real and simulated worlds (without the agent knowing which is which), so that the real future agent can't be the successor of the simulated one...

Comment author: 14 March 2012 08:33:04PM, 0 points

> Omega could pick a different color for the opaque box in the real and simulated worlds (without the agent knowing which is which), so that the real future agent can't be the successor of the simulated one...

Oh. I missed that. This would also break the similarity to the Absentminded Driver problem...

But no, this doesn't work, because Omega is known to always guess correctly, and there exist agents that one-box if the opaque box is red and two-box if it's blue. So, the simulation must be perfect.

Comment author: 15 March 2012 05:09:56AM, 0 points

It's still an almost-Newcomb problem that sane decision theories should pass.

Comment author: 14 March 2012 11:51:06AM, 1 point

> However, I hypothesise that whatever method Omega uses, if the CDT agent knows the method, it will one-box. It is only a "magical Omega" that throws CDT off.

I suspect that if you try to formalize the version of "CDT" that makes the above statement true, it will look suspiciously similar to our formulations of UDT.

Comment author: 14 March 2012 02:19:00PM, 0 points

This is an interesting question. I believe the "knowledgeable CDT" would be equivalent to TDT, but not UDT. Now that I understand better what UDT is, I think it is strictly stronger than CDT/TDT, since it is allowed to arrive at decisions by proving things indirectly, whereas CDT/TDT must always resort to an equivalent of either simulation (simulating the other agent) or reasoning by symmetry (which is equivalent to reasoning from being in a simulation, as in the OP).

For example, if I formalize the "knowledgeable version of CDT" as you suggested, then knowing Omega's method is equivalent to having its source code. CDT using direct causality to arrive at a decision would then be equivalent to simulating Omega. If Omega itself is a simulator, then CDT could never halt. However, in this case CDT can arrive at a decision using anthropic simulation reasoning = reasoning by symmetry, as in the OP. But if Omega is not a simulator but some kind of theorem prover, then CDT would be able to simulate it, in which case it will two-box and fail... Hmm, that is actually a problem for UDT as well...

Comment author: 14 March 2012 02:51:17PM, 0 points

If the agent has more computing power than Omega and is stupid enough to go ahead and simulate Omega, that's called the ASP problem, and you're right that it's a problem for UDT too.

Comment author: 14 March 2012 06:05:44PM, 0 points

After more thinking, it seems the problem is not in simulation as such, but in (1) having free will + (2) knowing the prediction beforehand. The only self-consistent solution is "two-box"...

Comment author: 14 March 2012 06:53:30PM, 2 points

> The only self-consistent solution is "two-box"...

Augh, gRR, at this rate you'll soon be making actual new progress, but only if you force yourself to be more thorough. As Eliezer's Quirrell said, "You must continue thinking". A good habit is to always try to push a little bit past the point where you think you have everything figured out.

Vladimir Nesov has just suggested that the agent might choose not to simulate the predictor, but instead make a decision quickly (using only a small fraction of the available resources) to give the predictor a chance at figuring out things about the agent. I don't know how to formalize this idea in general, but it looks like it might yield a nice solution to the ASP problem someday.

Comment author: 14 March 2012 07:38:19PM, 0 points

It's interesting not being my past self and being able to understand that problem.

Because strategies based on simulating the predictor are opaque to the predictor, while strategies based on high-level reasoning are transparent to it, the problem is no longer determined just by the agent's final decisions; it's not in the same class as Newcomb's problem anymore. It's a computation-dependent problem, but it's not quite in the same class as a two-box problem that rewards you for picking options alphabetically (the AlphaBeta problem :D).

I agree with Vladimir's idea that the UDT agent formalized in your original post might still be able to handle it without any extensions, if it finds a short proof that includes some gnarly self-reference (See note). The AlphaBeta problem, on the other hand, is unwinnable for any utility-maximizer without the ability to suspend its own utility-maximizing. This is interesting, because it seems like the ASP problem is also more "reasonable" than the AlphaBeta problem.

(note): As a sketch: the existence of a proof, shorter than N, that one-boxing yields the maximum utility is equivalent to both boxes being filled; if no such proof exists, only one box is filled. If the action proven to maximize utility is always taken, then the maximum available utility comes from taking one box while both boxes are full. What remains to be proven is "this proof is shorter than N". By the power vested in me by Loeb's theorem...

Comment author: 14 March 2012 10:50:26PM, 0 points

> the problem is no longer just determined by the agent's final decisions

Right.

> It's interesting not being my past self and being able to understand that problem.

Congratulations :-) Now I'll do the thing that Wei usually does, and ask you if something specific in the problem description was tripping you up? How would you rephrase it to make your past self understand it faster?

Comment author: 14 March 2012 11:40:56PM, 0 points

> How would you rephrase it to make your past self understand it faster?

Include a link to Wei Dai's analysis of the absentminded driver problem, with a short blurb explaining why your theorem-proving agent is like that and not like CDT, maybe. But that would have had only a faint hope of success :P

Comment author: 14 March 2012 07:21:27PM, 0 points

Thank you for the kind words! I do constantly make stupid mistakes from not thinking enough...

Vladimir Nesov's idea is to remove (2), so that the agent won't know the prediction beforehand. But I wonder if it might still be possible to remove (1) - to allow the agent sometimes to know its decision beforehand without blowing up. I feel like a perpetuum mobile inventor here... but, I didn't see an actual proof that it's impossible... My latest attempt is here.

Comment author: 14 March 2012 07:31:42PM, 0 points

Taboo "free will". An action is constructed depending on the agent's knowledge, and therefore on the circumstances that formed that knowledge. If this dependence can be characterized by the action being consequentialist relative to the agent's stated preference, then the action was chosen correctly. A consequentialist action is one that optimizes the value of its effect, and so the action must depend (via argmax) on the dependence of its effect on the action. If the action is chosen based on different considerations, it is no longer a consequentialist action.

So, for example, knowing what the action is going to be is not a problem; a problem would arise if the action were chosen in order to fulfill the prediction, because that would not be a consequentialist reason for the action.

Comment author: 14 March 2012 07:41:27PM, 0 points

I meant "free will" in the sense that the agent is not allowed to know (prove) its decision before it is made (as in "playing chicken with the universe" rule), as otherwise it becomes inconsistent. As far as I understand, "knowing what the action is going to be" is a problem. Isn't it?

Comment author: 14 March 2012 08:07:17PM, 0 points

Having knowledge of the decision lying around is not a problem; the problem is if it's used in the construction of the decision itself in such a way that the resulting decision is not consequentialist. The diagonal rule breaks the dependence of your knowledge on your decision, if such a dependence hypothetically existed, so that it becomes easier to prove that the decision procedure produces a consequentialist decision.

Also, the decision itself can't be "inconsistent", as it's part of the territory. The agent's knowledge may be inconsistent, which makes it useless, but even then there is a fact of the matter about what its gibbering self decides.

Comment author: 14 March 2012 08:18:59PM, 0 points

I meant agent (its proof system) becoming inconsistent, of course, not its decision. Bad wording on my part.

The problem, as I see it, is that the standard UDT agent (its proof system) is not allowed to prove that it will take a certain action (or that it will not take some action). If it could, it would prove spurious counterfactuals, which would make it change its decision, which would make its proof wrong, which would make its proof system inconsistent.

I think this is a serious limitation. Maybe it is impossible to define well-behaved consequentialist agents without this limitation, but I haven't seen an actual proof...
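A toy rendering of the "playing chicken" (diagonal) rule under discussion (my own sketch; `provable` is a stand-in for a real proof search):

```python
# Toy model of the "playing chicken with the universe" rule. `provable` is
# a set of statements the agent's proof system proves (an assumption of
# this sketch, standing in for actual proof search).

def decide(actions, provable, utility_bound):
    # Diagonal step: if the system proves the agent will NOT take some
    # action, take that action immediately. This ensures such a proof can
    # exist only if the proof system was inconsistent to begin with.
    for a in actions:
        if f"decide() != {a!r}" in provable:
            return a
    # Otherwise act on the provable utility bounds as usual.
    return max(actions, key=lambda a: utility_bound[a])

# With no self-defeating proofs around, the agent simply maximizes:
assert decide(["one-box", "two-box"], set(),
              {"one-box": 1_000_000, "two-box": 1_000}) == "one-box"
```

This diagonal step is exactly what forbids the agent from usefully proving its own decision in advance, which is the limitation being questioned above.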

Comment author: 14 March 2012 07:29:33PM, 0 points

I don't understand yet... Could you explain in more detail how your proposal would work in the ASP problem?

Comment author: 14 March 2012 07:56:36PM, 0 points

I wrote that comment before I saw ASP, but let's try:

```
def U():
    box1 = 1000
    box2 = (A() == 2) ? 0 : 1000000
    return box2 + ((A_2() == 2) ? box1 : 0)

def A():
    For all i, try to prove A() == A_i().
    Let UX(X) be the function that results from the source of U() if all
        calls "A_i()", for which the system was able to prove A() == A_i(),
        are changed to "eval(X)".
    For some action p, prove that for any possible action q,
        UX("return q") <= UX("return p").
    Return p.
```

Assume A_2() is a weak predictor, and A() is able to simulate it and know its prediction. Nevertheless, A() is still able to prove A() == A_2() without getting a contradiction (which a regular UDT agent would get from the "playing chicken" rule). So the agent will prove that it will one-box, and the weak predictor will prove the same.

Comment author: 14 March 2012 08:43:18PM, 0 points

Interesting.

1) I can't really parse it because the first step mentions A_i. What if the problem statement is slightly obfuscated and doesn't contain any easily recognizable A_i? Can you rephrase your proposal in terms of trying to prove things about U and A?

2) Does your implementation of A use some proof length limit on the first step? On the second step? Or does it somehow stop early when the first proof is found?

Comment author: 14 March 2012 09:53:10PM, 0 points

1) Yes, this is a problem. I don't have a good general solution, but as a working first step:

Step 0: enumerate all parameterless function calls within the code of U. Call them A_i, for i=1..N.

[ETA: I assume the code is in a pure functional language here. Haskell rules! :)]

2) Computation resource limits can be added. Let's say L1 for step 1 (proving A_i() == A()) and L2 for step 3 (finding the action). Then the system would use, say, L1/N for each of the A_i's (if a proof is found sooner, the other proofs can use more resources, or the system may just loop aimlessly if for some reason all resources must be spent).

For step 3, proofs are not actually needed: if L2 is sufficiently large, UX("return p") can simply be evaluated for all p.
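Concretely, for the U() given above, once step 1 has proved A() == A_2(), both calls are replaced by the candidate action, and UX can just be evaluated (my own toy version of this exhaustive step):

```python
# Toy evaluation of UX("return p") for the ASP-style payoff above,
# assuming step 1 proved A() == A_2(), so both calls become p.

BIG, SMALL = 1_000_000, 1_000

def UX(p):
    box1 = SMALL
    box2 = 0 if p == 2 else BIG            # the A() call, replaced by p
    return box2 + (box1 if p == 2 else 0)  # the A_2() call, replaced by p

# Exhaustive evaluation instead of proof search, as suggested:
best = max((1, 2), key=UX)
assert UX(1) == 1_000_000 and UX(2) == 1_000 and best == 1  # one-boxing wins
```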

Comment author: 15 March 2012 08:39:30PM, 0 points

Is free will inconsistent with knowing the answer afterward? For example, if you choose to two-box, and I watch you, I will know you chose to two-box. Does this violate your free will? If not, why should it matter where in time I'm standing while I know this?

Comment author: 15 March 2012 08:47:17PM, 0 points

Your knowledge about my decision, before or after, is not relevant to "free will" as I mean it here. Free will exists in my mind. It is my lack of knowledge, before the decision, of what my decision will be.

If I can prove that I will decide X before actually deciding X, then I don't have free will in this sense.

Comment author: 14 March 2012 06:19:20PM, 0 points

Also, thanks for writing this up! I understand your argument, but I think it might be helpful to others if you wrote out the expected-utility calculation explicitly.

Comment author: 14 March 2012 09:25:23AM, -1 points

If you have just outsmarted Omega and collected $1,001,000, then it was not really Omega; it was just an impostor or something.

You must always consider the possibility that this is not the real Omega, whether in the Newcomb case, in counterfactual mugging, or anywhere else.