thomblake comments on A problem with Timeless Decision Theory (TDT) - Less Wrong
Logical uncertainty has always been more difficult to deal with than physical uncertainty; the problem with logical uncertainty is that if you analyze it enough, it goes away. I've never seen any really good treatment of logical uncertainty.
But if we depart from TDT for a moment, then it does seem clear that we need to have causelike nodes corresponding to logical uncertainty in a DAG which describes our probability distribution. There is no other way you can completely observe the state of a calculator sent to Mars and a calculator sent to Venus, and yet remain uncertain of their outcomes while believing the outcomes are correlated. And if you talk about error-prone calculators, two of which say 17 and one of which says 18, and you deduce that the "Platonic answer" was probably in fact 17, you can see that logical uncertainty behaves in an even more causelike way than this.
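To make the picture concrete, here is a toy sketch in Python (the prior over the calculation's answer is an assumed value, purely for illustration): one latent node for the logical fact, with both calculators as deterministic copies of it. Marginally each output is uncertain, but jointly they can never disagree.

    # Latent "Platonic answer" node; both calculators deterministically
    # copy it. The prior is an assumed toy value.
    prior = {17: 0.9, 18: 0.1}

    def joint(mars, venus):
        # P(mars, venus) = sum over the latent logical node
        return sum(p for answer, p in prior.items()
                   if mars == answer and venus == answer)

    print(joint(17, 17))  # 0.9 -- not 0.81: the outcomes are correlated
    print(joint(17, 18))  # 0   -- the calculators never disagree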
So, going back to TDT, my hope is that there's a neat set of rules for factoring our logical uncertainty into our causal beliefs, and that these same rules also resolve the sort of situation that you describe.
If you consider the notion of the correlated error-prone calculators, two returning 17 and one returning 18, then the most intuitive way to handle this would be to see a "Platonic answer" as its own causal node, and the calculators as error-prone descendants. I'm pretty sure this is how my brain is drawing the graph, but I'm not sure it's the correct answer; it seems to me that a more principled answer would involve uncertainty about which mathematical fact affects each calculator - physically uncertain gates which determine which calculation affects each calculator.
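As a toy version of that inference (the error rate and prior are assumed values, and the error model is deliberately crude):

    # Posterior over the "Platonic answer" given three noisy readings,
    # two saying 17 and one saying 18. eps and the prior are assumptions.
    eps = 0.05                  # per-calculator error probability
    prior = {17: 0.5, 18: 0.5}  # prior over the true answer
    readings = [17, 17, 18]

    def likelihood(truth):
        p = 1.0
        for r in readings:
            p *= (1 - eps) if r == truth else eps
        return p

    unnorm = {t: prior[t] * likelihood(t) for t in prior}
    z = sum(unnorm.values())
    print({t: round(p / z, 3) for t, p in unnorm.items()})
    # {17: 0.95, 18: 0.05} -- the majority answer wins, as above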
For the (D xor E) problem, we know the behavior we want the TDT calculation to exhibit; we want (D xor E) to be a descendant node of D and E. If we view the physical observation of $1m as telling us the raw mathematical fact (D xor E), and then perform mathematical inference on D, we'll find that we can affect E, which is not what we want. Conversely if we view D as having a physical effect, and E as having a physical effect, and the node D xor E as a physical descendant of D and E, we'll get the behavior we want. So the question is whether there's any principled way of setting this up which will yield the second behavior rather than the first, and also, presumably, yield epistemically correct behavior when reasoning about calculators and so on.
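The two readings can be contrasted in a few lines of Python (node values are toy bits, and E's value is an assumption for the example):

    # We observed the $1m, i.e. we learned (D xor E) == 1.
    obs = 1

    # Reading 1 (unwanted): the observation pins down the raw math
    # fact, so counterfactually varying D drags E along with it.
    def e_under_reading_1(d):
        return obs ^ d

    # Reading 2 (wanted): E is a fixed fact of its own; what varies
    # with D is the physical descendant node, not E.
    E = 0
    def obs_under_reading_2(d):
        return d ^ E

    print(e_under_reading_1(0), e_under_reading_1(1))      # 1 0 -- E moves
    print(obs_under_reading_2(0), obs_under_reading_2(1))  # 0 1 -- obs moves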
That's if we go down avenue (2). If we go down avenue (1), then we give primacy to our intuition that if-counterfactually you make a different decision, this logically controls the mathematical fact (D xor E) with E held constant, but does not logically control E with (D xor E) held constant. While this does sound intuitive in a sense, it isn't quite nailed down - after all, D is ultimately just as constant as E and (D xor E), and to change any of them makes the model equally inconsistent.
These sorts of issues are something I'm still thinking through, as I think I've mentioned, so let me think out loud for a bit.
In order to observe anything that you think has already been controlled by your decision - any physical thing in which a copy of D has already played a role - there have to be other physical facts which combine with D to yield our observation (leaving aside the question of Omega's strategy of simulating alternate versions of you to select a self-consistent problem, and whether this introduces conditional-strategy-dependence rather than just decision-dependence into the problem).
Some of these physical facts may themselves be affected by mathematical facts, like an implemented computation of E; but the point is that in order to have observed anything controlled by D, we already had to draw a physical, causal diagram in which other nodes descended from D.
So suppose we introduce the rule that in every case like this, we will have some physical node that is affected by D, and if we can observe info that depends on D in any way, we'll view the other mathematical facts as combining with D's physical node. This is a rule that tells us not to draw the diagram with a physical node being determined by the mathematical fact D xor E, but rather to have a physical node determined by D, and then a physical descendant D xor E. (Which in this particular problem should descend from a physical node E that descends from the mathematical fact E, because the mathematical fact E is correlated with our uncertainty about other things, and a factored causal graph should have no remaining correlated sources of background uncertainty; but if E didn't correlate to anything else in particular, we could just have D descending to (D xor E) via the (xor with E) rule.)
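Encoded as a child-to-parents map (all node names here are mine, purely illustrative), the graph the rule draws for this problem looks like the following, together with a mechanical check that no mathematical node feeds anything but its own physical boundary node:

    # Graph drawn by the proposed rule: math facts only touch the world
    # through dedicated physical nodes.
    rule_graph = {
        "phys_D": ["math_D"],
        "phys_E": ["math_E"],          # E gets its own node since it
                                       # correlates with other beliefs
        "phys_D_xor_E": ["phys_D", "phys_E"],
        "observe_$1m": ["phys_D_xor_E"],
    }

    def respects_rule(graph):
        # every edge leaving a math_* node must enter a phys_* node
        kids = {}
        for child, parents in graph.items():
            for p in parents:
                kids.setdefault(p, []).append(child)
        return all(c.startswith("phys_")
                   for parent, cs in kids.items()
                   if parent.startswith("math_")
                   for c in cs)

    print(respects_rule(rule_graph))  # True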
When I evaluate this proposed solution for ad-hoc-ness, it does admittedly look a bit ad-hoc, but it solves at least one problem other than the one I started with, which I didn't think of until now. Suppose Omega tells me that I make the same decision in the Prisoner's Dilemma as Agent X. This does not necessarily imply that I should cooperate with Agent X. X and I could have made the same decision for different (uncorrelated) reasons, and Omega could have simply found out by simulating the two of us that X and I gave the same response. In that case I should presumably defect: if I cooperated, X wouldn't do anything differently. X might be nothing more than a piece of paper with "Defect" written on it.
If I draw a causal diagram of how I came to learn this correlation from Omega, and I follow the rule of always drawing a causal boundary around the mathematical fact D as soon as it physically affects something, then, given the way Omega simulated both of us to observe the correlation, I see that D and X separately physically affected the correlation-checker node.
On the other hand, if I can analyze the two pieces of code D and X and see that they return the same output, without yet knowing the output, then this knowledge was obtained in a way that doesn't involve D producing an output, so I don't have to draw a hard causal boundary around that output.
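Schematically (the function names here are mine, and this compresses a lot):

    # Two ways Omega can learn that D and X agree.

    def omega_by_simulation(D, X):
        # Runs both programs to completion. D's output now physically
        # exists inside Omega, so the rule forces a causal boundary
        # around it: the correlation gives D no logical control over X.
        return D() == X()

    def omega_by_analysis(D_source, X_source):
        # Establishes equality without producing either output, e.g.
        # by noticing the programs are character-for-character
        # identical. Nothing computed D, so no boundary is forced.
        return D_source == X_source

    src = "return 'Defect'"
    print(omega_by_analysis(src, src))                              # True
    print(omega_by_simulation(lambda: "Defect", lambda: "Defect"))  # True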
If this works, the underlying principle that makes it work is something along the lines of "for D to control X, the correlation between our uncertainty about D and X has to emerge in a way that doesn't involve anyone already computing D". Otherwise D has no free will (said firmly tongue-in-cheek). I am not sure that this principle has any more elegant expression than the rule, "whenever, in your physical model of the universe, D finishes computing, draw a physical/causal boundary around that finished computation and have other things physically/causally descend from it".
If this principle is violated then D ends up "correlated" to all sorts of other things we observe, like the price of fish and whether it's raining outside, via the magic of xor.
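A short demonstration of the xor trick (the bits here are random stand-ins for those observations):

    import random
    random.seed(0)

    D = 1  # my decision, as a bit
    for name in ["price_of_fish_up", "raining_outside"]:
        y = random.randint(0, 1)  # an unrelated observed bit
        f = D ^ y                 # a perfectly definite mathematical fact
        # hold f "fixed" while varying D, and y tracks D exactly:
        print(name, "recovered from D and f:", (D ^ f) == y)  # True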
When you use terms like "draw a hard causal boundary" I'm forced to imagine you're actually drawing these things on the back of a cocktail napkin somewhere, using some sort of standard symbols. Are there such standards, and do you have such diagrams scanned in online somewhere?
ETA: A note for future readers: Eliezer below is referring to Judea Pearl (simply "Pearl" doesn't convey much via google-searching, though I suppose "pearl causality" does at the moment)
Read Pearl. I think his online intros should give you a good idea of what the cocktail napkin looks like.
Hmm... Pearl uses a lot of diagrams but they all seem pretty ad-hoc. Just the sorts of arrows and dots and things that you'd use to represent any graph (in the mathematics sense). Should I infer from this description that the answer is, "No, there isn't a standard"?
I was picturing something like a legend that would tell someone, "Use a dashed line for a causal boundary, and a red dotted line to represent a logical inference, and a pink squirrel to represent postmodernism"
Um... I'm not sure there's much I can say to that beyond "Read Probabilistic Reasoning in Intelligent Systems, or Causality".
Pearl's system is not ad-hoc. It is very not ad-hoc. It has a metric fuckload of math backing up the simple rules. But Pearl's system does not include logical uncertainty. I'm trying to put logical uncertainty into it, while obeying the rules. This is a work in progress.
Thomblake's observation may be that while Pearl's system is extremely rigorous, the diagrams used do not follow an authoritative standard style for diagram drawing.
That's correct - I was looking for a standard style for diagram drawing.
I'd just like to register a general approval of specifying that one's imaginary units are metric.
FWIW