At orthonormal's suggestion, I'm taking this out of the comments.
Consider a CDT agent making a decision in a Newcomb's problem, in which Omega is known to make predictions by perfectly simulating the players. Assume further that the agent is capable of anthropic reasoning about simulations. Then, while making its decision, the agent will be uncertain about whether it is in the real world or in Omega's simulation, since the world would look the same to it either way.
The resulting problem is structurally similar to the Absent-Minded Driver problem [1]. As in that problem, directly assigning probabilities to each of the two possibilities is incorrect. The planning-optimal decision, however, is readily available to CDT, and it is, naturally, to one-box.
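To make the planning-stage comparison concrete, here is a minimal sketch (an illustration of my own, assuming the usual $1000 / $1,000,000 payoffs) of what a single decision procedure earns when it runs both inside Omega's simulation and in the real world:

def payoff(action):
    # The same decision procedure runs twice: in the simulation it fixes the
    # contents of the opaque box, in the real world it fixes which boxes are taken.
    box2 = 1000000 if action == "one-box" else 0  # filled iff the simulated agent one-boxes
    box1 = 1000 if action == "two-box" else 0     # transparent box, taken only by two-boxers
    return box2 + box1

assert payoff("one-box") == 1000000
assert payoff("two-box") == 1000  # planning-optimal choice: one-box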
Objection 1. This argument requires that Omega is known to make predictions by simulation, which is not necessarily the case.
Answer: It appears to be sufficient that the agent merely knows that Omega is always correct. If this is the case, then a simulating Omega and a some-other-method Omega are indistinguishable to the agent, so it can freely assume simulation.
[This is rather shaky reasoning, and I'm not sure it is correct in general. However, I hypothesise that whatever method Omega uses, the CDT agent will one-box as long as it knows the method. It is only a "magical" Omega that throws CDT off.]
Objection 2. The argument does not work for problems where Omega is not always correct, but is correct with, say, 90% probability.
Answer: Such problems are underspecified, because it is unclear how the probability is calculated. [For example, an Omega that always predicts "two-box" will be correct in 90% of cases if 90% of the agents in the population are two-boxers.] A "natural" way to complete the problem definition is to stipulate that there is no correlation between the correctness of Omega's predictions and any property of the players. But this is equivalent to Omega first making a perfectly correct prediction and then adding 10% random noise. In this case, the CDT agent is again free to consider Omega a perfect simulator (with added noise), which again leads to one-boxing.
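As a quick check (again a sketch of my own, with the standard payoffs and the 90% accuracy implemented as 10% independent noise on a perfect prediction):

ACCURACY = 0.9
BOX1, BOX2 = 1000, 1000000

# Omega predicts perfectly, then the prediction is flipped with probability 0.1,
# independently of anything about the player.
ev_one_box = ACCURACY * BOX2                                   # 900000
ev_two_box = ACCURACY * BOX1 + (1 - ACCURACY) * (BOX1 + BOX2)  # 101000

assert ev_one_box > ev_two_box  # one-boxing still wins under noisy prediction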
Objection 3. In order for the CDT agent to one-box, it needs a special "non-self-centered" utility function, which, when inside the simulation, would value things outside it.
Answer: The agent in the simulation has exactly the same experiences as the agent outside, so it is the same self, and it values the Omega-offered utilons the same. This seems to be a general consequence of reasoning about simulations. Of course, it is possible to give the agent a special, irrational simulation-fearing utility function, but what would be the purpose?
Objection 4. CDT still won't cooperate in the Prisoner's Dilemma against a CDT agent with an orthogonal utility function.
Answer: damn.
[1] Thanks to Will_Newsome for pointing me to this.
I wrote that comment before I saw ASP, but let's try:
def U():
    box1 = 1000
    box2 = 0 if A() == 2 else 1000000
    return box2 + (box1 if A_2() == 2 else 0)

def A():
    # Pseudocode; the steps below describe a proof search, not executable code.
    # 1. For every i, try to prove A() == A_i().
    # 2. Let UX(X) be the function that results from the source of U() if every call
    #    "A_i()" for which the system was able to prove A() == A_i() is changed to "eval(X)".
    # 3. For some action p, prove that for any possible action q, UX("return q") <= UX("return p").
    # 4. Return p.
Assume A_2() is a weak predictor and that A() is able to simulate it and know its prediction. Nevertheless, A() is still able to prove A() == A_2() without getting a contradiction (which a regular UDT agent would get from the "playing chicken" rule). So the agent will prove that it will one-box, and the weak predictor will prove the same.
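To illustrate the substitution step (my own sketch; the proof search itself is not modeled, and I simply assume the agent has already proved A() == A_2(), so both calls in U() get replaced by the candidate action):

def UX(action):
    # U() with every call proved equal to A() replaced by the candidate action.
    box1 = 1000
    box2 = 0 if action == 2 else 1000000
    return box2 + (box1 if action == 2 else 0)

# UX(2) = 1000 <= UX(1) = 1000000, so the agent returns p = 1 (one-box),
# and the weak predictor, finding the same proof, predicts one-boxing.
assert UX(1) == 1000000 and UX(2) == 1000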
Interesting.
1) I can't really parse it because the first step mentions A_i. What if the problem statement is slightly obfuscated and doesn't contain any easily recognizable A_i? Can you rephrase your proposal in terms of trying to prove things about U and A?
2) Does your implementation of A use some proof length limit on the first step? On the second step? Or does it somehow stop early when the first proof is found?