If you have a formalization of CDT in the constant programs framework, could you spell it out in more detail?
For CDT, it is not useful to try simulating every single piece of code within U(), since there is no Correlator to use the results. So the simplest version of the CDT agent would be:
Let L = available computation resources / number of functions in U().
Evaluate U(), spending at most L steps in each function. If a function takes too long to evaluate, assume its result is a new variable x_i and continue to evaluate its parent.
In the end, the result should be an expression U() = f(x_1, ..., x_n), where one of the variables x_i corresponds to Self().
Return the argmax of that function over the Self() variable (assuming some distribution over the non-self variables, if necessary).
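The CDT procedure above can be sketched as follows. Everything here is illustrative, not part of the original framework: step costs are mocked as entries in a table, and the "distribution" over unknown variables collapses to a crude point estimate:

```python
# Each sub-function of U() is modeled as (steps_needed, value).
# If steps_needed > L, its result is left as a symbolic variable x_i.

def bounded_eval(functions, L):
    """Evaluate each function for at most L steps; unresolved ones
    become symbolic variables."""
    resolved, symbolic = {}, []
    for name, (steps, value) in functions.items():
        if steps <= L:
            resolved[name] = value
        else:
            symbolic.append(name)   # treated as an unknown x_i
    return resolved, symbolic

def cdt_decide(utility, functions, L, actions):
    """Return the argmax over Self()'s actions, with the other
    unresolved variables fixed to a default guess (a stand-in for
    averaging over a distribution)."""
    resolved, symbolic = bounded_eval(functions, L)
    def u(action):
        env = dict(resolved)
        for v in symbolic:
            env[v] = 0          # crude point estimate for unknown x_i
        env["Self"] = action    # the causal surgery: set Self()=action
        return utility(env)
    return max(actions, key=u)

# Toy world: U() = Other() + 10 * Self(); Other() is too expensive
# to simulate within the budget, so it becomes a variable.
funcs = {"Other": (10**9, 5), "Self": (10**9, None)}
utility = lambda env: env["Other"] + 10 * env["Self"]
print(cdt_decide(utility, funcs, L=1000, actions=[0, 1]))  # -> 1
```

In the toy world, Other() exceeds the step budget, so it is replaced by a variable and fixed to a guess; only Self() is varied for the argmax.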
After some thinking, I came upon an idea of how to define the difference between CDT and UDT within the constant programs framework. I would post it as a comment, but it is rather long...
The idea is to separate the cognitive part of an agent into three separate modules:
1. Simulator: given the code for a parameterless function X(), the Simulator tries to evaluate it, spending L computation steps. The result is either proving that X()=x for some value x, or leaving X() unknown.
2. Correlator: given the code for two functions X(...) and Y(...), the Correlator checks for proofs (of length up to P) of structural similarity between the source codes of the functions, trying to prove correlations X(...)=Y(...).
[Note: the Simulator and the Correlator can use each other's results, so that:
If the Simulator proves that A()=x, then the Correlator can prove that A()+B() = x+B()
If the Correlator proves that A()=B(), then the Simulator can skip simulation when proving that (A()==B() ? 1 : 2) = 1]
3. Executive: allocates tasks and resources to Simulator and Correlator in some systematic manner, trying to get them to prove the "moral arguments"
Self()=x => U()=u
or
( Self()=x => U()=ux AND Self()=y => U()=uy ) => ux > uy,
and returns the best found action.
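A hypothetical toy implementation of the three modules might look like this. The step counts and the Correlator's "proof" by bytecode comparison are crude stand-ins for real bounded simulation and bounded proof search; all names are illustrative:

```python
def simulate(f, L):
    """Simulator: evaluate f() within L steps. The step cost is mocked
    by an attribute on f; a real Simulator would count actual steps.
    Returns the value of f(), or None if f() is left unknown."""
    return f() if getattr(f, "steps", 0) <= L else None

def correlate(f, g):
    """Correlator: stand-in for proof search of length up to P -- here
    it 'proves' f()==g() only when the compiled bodies are identical."""
    cf, cg = f.__code__, g.__code__
    return cf.co_code == cg.co_code and cf.co_consts == cg.co_consts

def executive(world, actions, budgets):
    """Executive: for each action x, try to prove the moral argument
    Self()=x => U()=ux by simulating the world with Self() fixed to x,
    increasing the step budget until all arguments are found."""
    proofs = {}
    for L in budgets:
        for x in actions:
            if x not in proofs:
                ux = simulate(world(x), L)
                if ux is not None:
                    proofs[x] = ux      # moral argument: Self()=x => U()=ux
        if len(proofs) == len(actions):
            break
    return max(proofs, key=proofs.get) if proofs else None

# Toy use: a world where action 1 pays 10 and action 0 pays 3.
def world(x):
    def u():
        return 10 if x == 1 else 3
    u.steps = 50        # too expensive at L=10, cheap at L=100
    return u

print(executive(world, actions=[0, 1], budgets=(10, 100)))  # -> 1

# The Correlator resolving what the Simulator alone cannot:
def A(): return 1
def B(): return 1
print(correlate(A, B))  # -> True: identical bodies, no simulation needed
```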
Now, CDT can be defined as an agent with a Simulator but without a Correlator. Then, no matter what L it is given, the Simulator won't be able to prove that Self()=Self(), because of the infinite regress of simulating itself simulating itself. So the agent will be opaque to itself, and will two-box on Newcomb's problem and defect against itself in the Prisoner's Dilemma.
UDT and ADT agents, on the other hand, have functioning Correlators.
If it is possible to explicitly (rather than conceptually) separate an agent into these three parts, then it appears possible to demonstrate the good behavior of an agent in the ASP problem. The world can be written as a Newcomb-like function:
def U():
    box2 = 1000
    box1 = 1000000 if P() == 1 else 0
    return box1 if A() == 1 else box1 + box2
where P is a predictor that has far fewer computational resources than the agent A. We can assume that the predictor has essentially the same source code as the agent, except for a smaller bound on L and a stipulation that P predicts two-boxing if it cannot prove one-boxing with the available resources.
If the Executive uses a reasonable strategy ("start with low values of L and P, then increase them until all necessary moral arguments are found or all available resources are spent"), then the Correlator should be able to prove A()=P() quite early in the process.
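Under these assumptions, the ASP setup can be sketched like this. The cost of the Correlator's proof of A()=P() is mocked as a constant number of steps, since modeling the actual proof search is beyond a toy example; all names and numbers are illustrative:

```python
def U(a, p):
    """The Newcomb-like world from the text, with A() and P() passed
    in as resolved values so the payoff table can be inspected
    directly (1 = one-box, 2 = two-box)."""
    box2 = 1000
    box1 = 1000000 if p == 1 else 0
    return box1 if a == 1 else box1 + box2

def asp_agent(L_max, P_proof_cost=50):
    """Hypothetical ASP agent: at each resource level L it first asks
    the Correlator for A()==P() (mocked as costing P_proof_cost steps,
    since the source codes nearly coincide), then compares the two
    moral arguments under that correlation."""
    for L in (10, 100, L_max):          # iterative deepening over L
        if L >= P_proof_cost:           # Correlator proves A()==P() early
            one_box = U(1, 1)           # Self()=1 and P()=1 => U()=1000000
            two_box = U(2, 2)           # Self()=2 and P()=2 => U()=1000
            return 1 if one_box > two_box else 2
    return 2                            # opaque to itself: defaults to two-boxing

print(asp_agent(L_max=10**6))   # -> 1 (one-boxes)
```

The key point the sketch illustrates: once A()=P() is proved cheaply, the payoff comparison 1000000 > 1000 settles the decision long before the agent would need to simulate P directly.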