First, to clear up a possible confusion, the S in my P is not supposed to be a variable. It's a constant, more specifically a piece of code that implements UDT1 itself. (If I sometimes talk about it as if it's a variable, that's because I'm trying to informally describe what is going on inside the computation that UDT1 does.)
For the more general question of how we know the structure of the world program: the idea is that for an actual AI, we would program it to care about all possible world programs (or more generally, mathematical structures; see example 3 in my UDT1 post, but also Nesov's recent post for a critique). The implementation of UDT1 in the AI would then figure out which world programs it's in by looking at its inputs (which would contain all of the AI's memories and sensory data) and checking which world programs call it with those inputs.
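To make that concrete, here is a minimal sketch of that check, with hypothetical names (`world_programs`, `calls_to_S`) that don't appear in the original posts:

```python
# Hedged sketch: identify which candidate world programs instantiate S by
# checking which of them would invoke S with exactly the inputs S received.
# `world_programs` and the `calls_to_S` attribute are hypothetical stand-ins;
# the enumeration over "all possible world programs" can't actually be
# carried out explicitly, so this only fixes the shape of the check.

def locate_instantiations(inputs, world_programs):
    """Return the candidate world programs that call S with `inputs`
    (the AI's memories plus sensory data)."""
    return [P for P in world_programs if inputs in P.calls_to_S]
```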
For these sample problems, the assumption is that somehow Omega has previously provided us with enough evidence for us to trust its word on what the structure of the current problem is. So in the actual P, 'S(i, "box contains $1M")' is really something like 'S(memories, omegas_explanations_about_this_problem, i, "box contains $1M")' and these additional inputs allow S to conclude that it's being invoked inside this P, and not some other world program.
(An additional subtlety here is that if we consider all possible world programs, there are bound to be some other world programs where S is being called with these exact same inputs, for example ones where S is being instantiated inside a Boltzmann brain, but presumably those worlds/regions have very low weights, meaning that the AI doesn't care much about them.)
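Combining the two points above, here is a hedged sketch of the resulting decision step (again with placeholder names `weight`, `run_with_output`, and `utility` that are mine, not from the UDT1 post): S sums each candidate output's consequences across all the world programs that instantiate it, weighted by how much the AI cares about each world, so the Boltzmann-brain worlds contribute almost nothing.

```python
# Hedged sketch: pick the output whose consequences, summed over every world
# program that calls S with these inputs and weighted by that world's weight,
# have the highest utility. Low-weight worlds barely affect the result.

def choose_output(inputs, world_programs, possible_outputs, weight, utility):
    candidates = locate_instantiations(inputs, world_programs)

    def expected_value(output):
        # Assume S returns `output` in every world that calls it with `inputs`.
        return sum(weight(P) * utility(P.run_with_output(output))
                   for P in candidates)

    return max(possible_outputs, key=expected_value)
```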
Let me know if that answers your questions/concerns. I didn't answer you point by point because I'm not sure which questions/concerns remain after you see my general answers. Feel free to repeat anything you still want me to answer.
> First, to clear up a possible confusion, the S in my P is not supposed to be a variable. It's a constant, more specifically a piece of code that implements UDT1 itself. (If I sometimes talk about it as if it's a variable, that's because I'm trying to informally describe what is going on inside the computation that UDT1 does.)
Then it should be S(P), because S can't make any decisions without getting to read the problem description.
According to Ingredients of Timeless Decision Theory, when you set up a factored causal graph for TDT, "You treat your choice as determining the result of the logical computation, and hence all instantiations of that computation, and all instantiations of other computations dependent on that logical computation". Here "the logical computation" refers to the TDT-prescribed argmax computation (call it C), which takes all your observations of the world (from which you can construct the factored causal graph) as input and outputs an action in the present situation.
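As a hedged sketch of what that argmax might look like (the interface names `world_model` and `utility` are my placeholders, not anything from Eliezer's post):

```python
# Hedged sketch of the TDT-style argmax C: evaluate each candidate action as
# if choosing it set the output of the logical computation C itself, and
# hence the output of every instantiation of C (and of every computation
# dependent on C) inside the factored causal graph.

def C(observations, actions, world_model, utility):
    def value(action):
        # `world_model` propagates `action` through every node of the causal
        # graph that instantiates, or depends on, this very computation.
        outcome = world_model(observations, logical_output=action)
        return utility(outcome)
    return max(actions, key=value)
```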
I asked Eliezer to clarify what it means for another logical computation D to be either the same as C, or "dependent on" C, for purposes of the TDT algorithm. Eliezer answered:
I replied as follows (which Eliezer suggested I post here).
If that's what TDT means by the logical dependency between Platonic computations, then TDT may have a serious flaw.
Consider the following version of the transparent-boxes scenario. The predictor has an infallible simulator D that predicts whether I one-box here [EDIT: if I see $1M]. The predictor also has a module E that computes whether the ith digit of pi is zero, for some ridiculously large value of i that the predictor randomly selects. I'll be told the value of i, but the best I can do is assign an a priori probability of 0.1 that the specified digit is zero.
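For concreteness, here is a hedged sketch of module E, the only part of the scenario that's fully pinned down (using mpmath; the function is feasible only for moderate i, which is exactly why, for a ridiculously large i, the best the agent can do is the a priori probability of 0.1):

```python
# Hedged sketch of module E: is the ith decimal digit of pi equal to zero?
# Computable only for moderate i; for a "ridiculously large" i the agent
# can't run this, so it falls back on the a priori probability of 0.1.

from mpmath import mp

def pi_digit_is_zero(i):
    """True iff the ith digit of pi after the decimal point is 0."""
    mp.dps = i + 10  # working precision comfortably beyond digit i
    s = mp.nstr(mp.pi, i + 5, strip_zeros=False)  # '3.14159...'
    return s[i + 1] == '0'  # s[2] is the 1st decimal digit, so digit i is s[i + 1]
```

(The simulator D, being an infallible prediction of the agent itself, is left abstract; the scenario as stated doesn't specify enough to sketch it.)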