IlyaShpitser comments on A problem with Timeless Decision Theory (TDT) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
And this was my reply:
This is an unfinished part of the theory that I've also thought about, though your example puts it very crisply. (You might consider posting it to LW?)
My current thoughts on resolution tend to see two main avenues:
1) Construct a full-blown DAG of math and Platonic facts, an account of which mathematical facts make other mathematical facts true, so that we can compute mathematical counterfactuals.
2) Treat mathematical knowledge learned by genuinely mathematical reasoning differently from mathematical knowledge learned by physical observation. In this case we know (D xor E) not by mathematical reasoning, but by physically observing a box whose state we believe to be correlated with D xor E. This may justify constructing a causal DAG with an observation node descending from both D and E, so that a counterfactual setting of D won't affect the setting of E.
Currently I'd say that (2) looks like the better avenue. Can you come up with an improper mathematical dependency, where E is inferred from D but shouldn't be seen as counterfactually affected, using mathematical reasoning alone, without postulating the observation of a physical variable that descends from both E and D?
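Avenue (2) can be made concrete with a toy structural model. The sketch below is illustrative only: it treats D and E as independent exogenous facts (an assumption, purely for demonstration) and models the physical box as an observation node Obs = D xor E that descends from both. Under a do-style intervention on D, only D's descendants are recomputed, so E is left untouched, which is exactly the behavior the comment asks for.

```python
import random

def sample_world():
    # D and E stand in for "mathematical" facts; modeling them as
    # independent coin flips is an illustrative assumption only.
    D = random.choice([0, 1])
    E = random.choice([0, 1])
    Obs = D ^ E          # the physical box, correlated with (D xor E)
    return {"D": D, "E": E, "Obs": Obs}

def intervene_on_D(world, d):
    """do(D=d): override D and recompute only D's descendants (here Obs),
    leaving the non-descendant E at its original value."""
    new = dict(world)
    new["D"] = d
    new["Obs"] = new["D"] ^ new["E"]   # Obs descends from D and E
    return new

w = sample_world()
cf = intervene_on_D(w, 1 - w["D"])
assert cf["E"] == w["E"]               # E is unaffected by the intervention
assert cf["Obs"] == cf["D"] ^ cf["E"]  # Obs is recomputed consistently
```

The key structural point is that Obs is a collider: the inference from D to E runs through the observed box, not through a direct mathematical edge, so severing D's incoming influences under the intervention never touches E.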
Incidentally, note that an unsolvable problem that should stay unsolvable is as follows: I'm asked to pick red or green, and told "A simulation of you given this information as well picked the wrong color and got shot." Whichever choice I make, I deduce that the other choice was better. The exact details here will depend on how I believe the simulator chose to tell me this, but ceteris paribus it's an unsolvable problem.
I don't see how logical entailment acts as functional causal dependence in Pearl's account of causation. Can you explain?
Pearl's account doesn't include logical uncertainty at all, so far as I know, but I made my case here:
http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/
that Pearl's account has to be modified to include logical uncertainty on purely epistemic grounds, never mind decision theory.
If this isn't what you're asking about, could you clarify the question further?
The issue of duplicate functions receiving the same inputs also arises in the treatment of counterfactuals (since one duplicates the causal graph across the worlds of interest). The treatment I am familiar with systematically merges portions of the counterfactual graph that can be proved to be the same. I don't really understand why this issue is about logic (rather than about duplication).
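The merge described above can be sketched as a twin-network construction: duplicate the graph for the counterfactual world, then share (merge) every node whose value provably coincides across the two worlds. The sketch below uses the simplest such proof, that a node is neither intervened on nor downstream of the intervention; the graph encoding and function names are my own illustrative choices, not anyone's actual implementation.

```python
def descendants(graph, roots):
    """Nodes reachable from `roots` via child edges.
    `graph` maps each node to its list of children."""
    seen = set()
    stack = list(roots)
    while stack:
        n = stack.pop()
        for c in graph.get(n, []):
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def twin_network(graph, intervened):
    """Duplicate the graph for the counterfactual world, then merge
    every node provably equal across worlds -- here, anything that is
    neither intervened on nor downstream of an intervention."""
    duplicated = set(intervened) | descendants(graph, intervened)
    shared = set(graph) - duplicated   # merged across both worlds
    return shared, duplicated          # duplicated nodes get primed copies

# Toy graph: A -> B -> C, A -> D; intervene on B.
g = {"A": ["B", "D"], "B": ["C"], "C": [], "D": []}
shared, duplicated = twin_network(g, {"B"})
# A and D are merged; B and C get counterfactual copies.
```

On this view the merge is a graph-theoretic bookkeeping step, which is one way to read the claim that the phenomenon is about duplication rather than about logic.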
What was confusing me, however, was the remark that it is possible to create causal graphs of mathematical facts (presumably with entailment functioning as the causal relationship between facts). I really don't see how this can be done: the resulting graph is highly cyclic, infinite for most interesting theories, and it is not clear how to define interventions on such graphs in a satisfactory way.