According to Ingredients of Timeless Decision Theory, when you set up a factored causal graph for TDT, "You treat your choice as determining the result of the logical computation, and hence all instantiations of that computation, and all instantiations of other computations dependent on that logical computation", where "the logical computation" refers to the TDT-prescribed argmax computation (call it C) that takes all your observations of the world (from which you can construct the factored causal graph) as input, and outputs an action in the present situation.
I asked Eliezer to clarify what it means for another logical computation D to be either the same as C, or "dependent on" C, for purposes of the TDT algorithm. Eliezer answered:
For D to depend on C means that if C has various logical outputs, we can infer new logical facts about D's logical output in at least some cases, relative to our current state of non-omniscient logical knowledge. A nice form of this is when supposing that C has a given exact logical output (not yet known to be impossible) enables us to infer D's exact logical output, and this is true for every possible logical output of C. Non-nice forms would be harder to handle in the decision theory but we might perhaps fall back on probability distributions over D.
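To make that criterion concrete, here's a minimal sketch of the "nice form" check (the possible-worlds representation and the function name `nice_dependency` are my own illustrative devices): our non-omniscient logical knowledge is modeled as the set of (C-output, D-output) pairs not yet ruled out, and D depends on C in the nice form just when supposing each still-possible output of C pins down D's output exactly.

```python
from collections import defaultdict

def nice_dependency(possible_worlds):
    """Toy version of the 'nice form' of logical dependency.

    possible_worlds: (c_output, d_output) pairs that our current,
    non-omniscient logical knowledge has not yet ruled out.
    Returns True iff every still-possible output of C leaves exactly
    one possible output for D.
    """
    d_given_c = defaultdict(set)
    for c, d in possible_worlds:
        d_given_c[c].add(d)
    return all(len(ds) == 1 for ds in d_given_c.values())

# Nice form holds: supposing either output of C fixes D's output.
print(nice_dependency([(0, "a"), (1, "b")]))            # True
# Non-nice: supposing C = 1 still leaves D's output open, so we'd
# have to fall back on a probability distribution over D.
print(nice_dependency([(0, "a"), (1, "a"), (1, "b")]))  # False
```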
I replied as follows (which Eliezer suggested I post here).
If that's what TDT means by the logical dependency between Platonic computations, then TDT may have a serious flaw.
Consider the following version of the transparent-boxes scenario. The predictor has an infallible simulator D that predicts whether I one-box here [EDIT: if I see $1M]. The predictor also has a module E that computes whether the ith digit of pi is zero, for some ridiculously large value of i that the predictor randomly selects. I'll be told the value of i, but the best I can do is assign an a priori probability of .1 that the specified digit is zero. The predictor puts the $1M in the big box iff (D xor E) is true, and that rule is explained to me in advance.
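For concreteness, here's a quick sanity check of the ex-ante expected values of the two candidate policies. This sketch assumes the usual transparent-boxes payoffs (the small box always holds $1K, and I take both boxes whenever the big box is visibly empty); the function and variable names are mine:

```python
P_E = 0.1  # a priori probability that the specified digit of pi is zero

def expected_value(one_box_if_millionaire: bool) -> float:
    """Ex-ante expected payoff of a policy for what I do on seeing $1M.

    The infallible simulator's prediction D equals the policy itself,
    and the predictor puts the $1M in the big box iff (D xor E).
    """
    d = one_box_if_millionaire
    ev = 0.0
    for e, p in ((True, P_E), (False, 1 - P_E)):
        million_present = d != e  # D xor E
        if million_present:
            payoff = 1_000_000 if one_box_if_millionaire else 1_001_000
        else:
            payoff = 1_000  # big box visibly empty: take both, keep the $1K
        ev += p * payoff
    return ev

print(expected_value(True))   # one-boxing policy: 900100.0
print(expected_value(False))  # two-boxing policy: 101000.0
```

Ex ante, the one-boxing policy comes out far ahead ($900,100 versus $101,000 on average); the question is whether TDT's dependency criterion, applied to E, delivers that verdict.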
Eliezer's write-up itself acknowledges the underlying difficulty:
...reasoning under logical uncertainty using limited computing power... is another huge unsolved open problem of AI. Human mathematicians had this whole elaborate way of believing that the Taniyama Conjecture implied Fermat's Last Theorem at a time when they didn't know whether the Taniyama Conjecture was true or false; and we seem to treat this sort of implication in a rather different way than '2=1 implies FLT', even though the material implication is equally valid.
And this was Eliezer's reply:
This is an unfinished part of the theory that I've also thought about, though your example puts it very crisply (you might consider posting it to LW?)
My current thoughts on resolution tend to see two main avenues:
1) Construct a full-blown DAG of math and Platonic facts, an account of which mathematical facts make other mathematical facts true, so that we can compute mathematical counterfactuals.
2) Treat differently mathematical knowledge that we learn by genuinely mathematical reasoning and by physical observation. In this case we know (D xor E) not by mathematical reasoning, but by physically observing a box whose state we believe to be correlated with D xor E. This may justify constructing a causal DAG with a node descending from D and E, so a counterfactual setting of D won't affect the setting of E.
Currently I'd say that (2) looks like the better avenue. Can you come up with an improper mathematical dependency where E is inferred from D, and shouldn't be seen as counterfactually affected, based on mathematical reasoning only without postulating the observation of a physical variable that descends from both E and D?
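To illustrate avenue (2), here is a toy construction of my own (not a claim about Eliezer's intended formalism): the physically observed box state is a node descending from both D and E, so conditioning on the observation lets us infer E from D, while a counterfactual do()-style setting of D leaves E's prior untouched.

```python
P_E = 0.1  # prior probability that the specified digit of pi is zero

def p_e_given_observation(d: bool, box_full: bool) -> float:
    """Inference: condition on the observed node box_full == (D xor E).
    Given D's output, the observation pins E down exactly."""
    e_required = d != box_full  # the unique E consistent with the observation
    return 1.0 if e_required else 0.0

def p_e_under_intervention(d: bool) -> float:
    """Counterfactual: do(D = d) severs no arrow into E, so E keeps its prior."""
    return P_E

for d in (True, False):
    print(f"D={d}: P(E | box full) = {p_e_given_observation(d, True):.1f}, "
          f"P(E | do(D)) = {p_e_under_intervention(d):.1f}")
```

The asymmetry is the point: inference flows backward through the observed child node, but counterfactual surgery on D never reaches E.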
Incidentally, note that an unsolvable problem that should stay unsolvable is as follows: I'm asked to pick red or green, and told "A simulation of you given this information as well picked the wrong color and got shot." Whichever choice I make, I deduce that the other choice was better. The exact details here will depend on how I believe the simulator chose to tell me this, but ceteris paribus it's an unsolvable problem.
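The self-defeating structure is easy to spell out (a toy rendering on my framing, with the policy choice arbitrary): because the simulation received exactly the same message, its choice equals mine, and the message says that choice was wrong, so every policy condemns itself.

```python
def choose(info: str) -> str:
    """Any deterministic policy will do; the paradox is policy-independent."""
    return "red"

info = "a simulation of you, given this same message, picked the wrong color"
my_pick = choose(info)
simulated_pick = choose(info)  # same algorithm, same input, same output
# The message asserts the simulation's pick was wrong, so the other
# color was the right one:
better_pick = "green" if simulated_pick == "red" else "red"
assert my_pick == simulated_pick  # whatever I pick, I deduce it was wrong
print(f"I picked {my_pick}, but the premise entails {better_pick} was correct.")
```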
Perhaps I'm misunderstanding you here, but D and E are Platonic computations. What does it mean to construct a causal DAG among Platonic computations?