I don't see why this is outside of UDT's domain. It seems straightforward to model and solve the decision problem in UDT1. Here's the world program:
    def P(color):
        outcome = "die"
        if Omega_Predict(S, "you're wrong") == color:
            if S("") == color:
                outcome = "live"
        else:
            if S("you're wrong") == color:
                outcome = "live"
Assuming a preference to maximize the occurrence of outcome="live" averaged over P("green") and P("red"), UDT1 would conclude that the optimal S returns a constant, either "green" or "red", and do that.
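To make the argument concrete, here is a hedged sketch (my own naming and scaffolding, not part of the original comment) of how UDT1 would evaluate the world program: represent a strategy S as a map from input strings to colors, model Omega_Predict as an infallible predictor (its prediction simply equals S's actual output), and score each of the four deterministic strategies by the fraction of the two worlds in which it lives.

```python
from itertools import product

# Hypothetical sketch of UDT1's evaluation of the world program above.
# A strategy S maps each possible input ("" or "you're wrong") to a color.
COLORS = ("green", "red")

def omega_predict(S, inp):
    # Usual assumption about Omega: its prediction of S on inp equals S(inp).
    return S[inp]

def P(S, color):
    # Direct transcription of the world program, with S passed explicitly.
    outcome = "die"
    if omega_predict(S, "you're wrong") == color:
        if S[""] == color:
            outcome = "live"
    else:
        if S["you're wrong"] == color:
            outcome = "live"
    return outcome

def score(S):
    # Fraction of worlds P("green"), P("red") in which S lives.
    return sum(P(S, c) == "live" for c in COLORS) / len(COLORS)

# All four deterministic strategies.
strategies = [{"": a, "you're wrong": b} for a, b in product(COLORS, COLORS)]
best = max(score(S) for S in strategies)
```

Running this, the two constant strategies each score 0.5 (they live in exactly the world whose color matches their answer), while the two non-constant strategies score 0, which matches the conclusion that the optimal S returns a constant.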
BTW, do you find this "world program" style analysis useful? I don't want to over-do them and get people annoyed. (I refrained from doing this for the problem described in Gary's post, since it doesn't mention UDT at all, and therefore I'm assuming you want to find a TDT-only solution.)
The world program I would use to model this scenario is:
    def P(color):
        if Omega_Predict(S, "you're wrong") == color:
            outcome = "die"
        else:
            outcome = "live"
The else branch seems unreachable, given color = S("you're wrong") and the usual assumptions about Omega.
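A quick check bears this out. In this sketch (my own naming, not part of the original comment), color is fixed to S("you're wrong") as stated, and an infallible Omega_Predict returns exactly what S would return, so the else branch can never fire for any strategy:

```python
from itertools import product

# Sketch of the simpler world program with color pinned to S("you're wrong").
COLORS = ("green", "red")

def omega_predict(S, inp):
    # Infallible predictor: its prediction equals S's actual output.
    return S[inp]

def P(S):
    color = S["you're wrong"]   # the scenario fixes color to S's own answer
    if omega_predict(S, "you're wrong") == color:
        outcome = "die"
    else:
        outcome = "live"        # unreachable under the assumptions above
    return outcome

strategies = [{"": a, "you're wrong": b} for a, b in product(COLORS, COLORS)]
outcomes = {P(S) for S in strategies}
```

Every strategy yields "die", which is why the two nested ifs in the first model (distinguishing S("") from S("you're wrong")) are doing the real work in the earlier analysis.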
I don't understand what your nested if statements are modeling.
According to Ingredients of Timeless Decision Theory, when you set up a factored causal graph for TDT, "You treat your choice as determining the result of the logical computation, and hence all instantiations of that computation, and all instantiations of other computations dependent on that logical computation", where "the logical computation" refers to the TDT-prescribed argmax computation (call it C) that takes all your observations of the world (from which you can construct the factored causal graph) as input, and outputs an action in the present situation.
I asked Eliezer to clarify what it means for another logical computation D to be either the same as C, or "dependent on" C, for purposes of the TDT algorithm. Eliezer answered:
I replied as follows (which Eliezer suggested I post here).
If that's what TDT means by the logical dependency between Platonic computations, then TDT may have a serious flaw.
Consider the following version of the transparent-boxes scenario. The predictor has an infallible simulator D that predicts whether I one-box here [EDIT: if I see $1M]. The predictor also has a module E that computes whether the ith digit of pi is zero, for some ridiculously large value of i that the predictor randomly selects. I'll be told the value of i, but the best I can do is assign an a priori probability of .1 that the specified digit is zero.