Sorry, I was sort of asking a general question and putting it in terms of this particular problem at the same time. I should have been clearer.
What I meant was: "I like TDT, but I think it's insufficient. It doesn't seem to deal easily with the fact that the physical implementation of the abstract computation can potentially end up having other things happen, resulting in something OTHER than what the ideal Platonic computation says should happen."
I think, though, that my initial suggestion might not have been the right solution. Instead, maybe invert it: say "actual initial state of hardware/software/etc." feeds into a "selector that selects a platonic algorithm", which then feeds into "output"... then, depending on how you want to look at it, have other external stuff -- radiation, damage to the hardware occurring mid-computation, and so on -- have causal inputs into those last two nodes. My initial thought would be the second-to-last node.
The idea here being that such errors change which platonic computation actually occurred.
Then you can frame decisions as choosing "what does the abstract computation that I am at this moment output?", with the caveat of "but I'm not absolutely certain that I am computing the specific algorithm that I think I am"... so that is where one could place the uncertainty that arises from hardware bugaboos and the like. (Also, perhaps, logical uncertainty about whether your code actually implements the algorithms you think it does, if that's relevant.)
I'm still having trouble seeing what troubles you. Yes, the physical hardware might mess up the attempt to implement the Platonic algorithm. So there's a probability of Omega guessing wrong, but if Omega picks your most likely action, it will still do better approximating you with the Platonic algorithm alone than with the Platonic algorithm plus noise.
Also, as Eliezer_Yudkowsky keeps pointing out, you don't want an agent that computes "what does the abstract computation that I am at this moment output?" because whatever it picks, it's correct.
...
(This is the third post in a planned sequence.)
My last post left us with the questions:
Today, I’ll take an initial swing at these questions. I’ll review Judea Pearl’s causal Bayes nets; show how Bayes nets offer a general methodology for computing counterfactual “would”s; and note three plausible alternatives for how to use Pearl’s Bayes nets to set up a CSA. One of these alternatives will be the “timeless” counterfactuals of Eliezer’s Timeless Decision Theory.
The problem of counterfactuals is the problem of what we do and should mean when we discuss what “would” have happened, “if” something impossible had happened. In its general form, this problem has proved to be quite gnarly. It has been bothering philosophers of science for at least 57 years, since the publication of Nelson Goodman’s book “Fact, Fiction, and Forecast” in 1952:
Recall that we seem to need counterfactuals in order to build agents that do useful decision theory -- we need to build agents that can think about the consequences of each of their “possible actions”, and can choose the action with the best expected consequences. So we need to know how to compute those counterfactuals. As Goodman puts it, “[t]he analysis of counterfactual conditionals is no fussy little grammatical exercise.”
Judea Pearl’s Bayes nets offer a method for computing counterfactuals. As noted, it is hard to reduce human counterfactuals in general: it is hard to build an algorithm that explains what (humans will say) really “would” have happened, “if” an impossible event had occurred. But it is easier to construct specific formalisms within which counterfactuals have well-specified meanings. Judea Pearl’s causal Bayes nets offer perhaps the best such formalism.
Pearl’s idea is to model the world as based on some set of causal variables, which may be observed or unobserved. In Pearl’s model, each variable is determined by a conditional probability distribution on the state of its parents (or by a simple probability distribution, if it has no parents). For example, in the following Bayes net, the beach’s probability of being “Sunny” depends only on the “Season”, and the probability of each particular “Number of beach-goers” depends only on the “Day of the week” and on the “Sunniness”. Since the “Season” and the “Day of the week” have no parents, they simply have fixed probability distributions.
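(To make that structure concrete, here is a minimal runnable sketch of the beach example. The particular probabilities, variable names, and the noise on the number of beach-goers are my own illustrative assumptions, not anything from Pearl or from the diagram; the point is just that each node is drawn from a distribution conditioned on its parents.)

```python
import random

# A toy causal Bayes net for the beach example. Probabilities and the noise
# model are illustrative assumptions, not taken from the (omitted) diagram.

# Root nodes: fixed probability distributions (no parents).
P_season = {"summer": 0.25, "autumn": 0.25, "winter": 0.25, "spring": 0.25}
P_day = {d: 1 / 7 for d in ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]}

# "Sunny?" is determined by a conditional distribution on its parent, "Season".
P_sunny_given_season = {"summer": 0.8, "autumn": 0.4, "winter": 0.2, "spring": 0.5}

def expected_beachgoers(day, sunny):
    # "Number of beach-goers" depends only on "Day of the week" and "Sunny?".
    base = 500 if day in ("Sat", "Sun") else 100
    return base * (3 if sunny else 1)

def sample_world(rng=random):
    """Ancestral sampling: draw each node given the sampled values of its parents."""
    season = rng.choices(list(P_season), weights=list(P_season.values()))[0]
    day = rng.choices(list(P_day), weights=list(P_day.values()))[0]
    sunny = rng.random() < P_sunny_given_season[season]
    beachgoers = max(0, round(rng.gauss(expected_beachgoers(day, sunny), 20)))
    return {"season": season, "day": day, "sunny": sunny, "beachgoers": beachgoers}

print(sample_world())
```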
Once we have a Bayes net set up to model a given domain, computing counterfactuals is easy*. We just:
For example, suppose I want to evaluate the truth of: “If last Wednesday had been sunny, there would have been more beach-goers”. I leave the “Day of the week” node at “Wednesday”, set the “Sunny?” node to “Sunny”, ignore the “Season” node (since it is the parent of a surgically set node), and compute the probability distribution on beach-goers.
*Okay, not quite easy: I’m sweeping under the carpet the conversion from the English counterfactual to the list of variables to surgically alter, in step 2. Still, Pearl’s Bayes nets do much of the work.
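(Here is that surgery as a runnable sketch, using the same made-up numbers as the sketch above. What matters is the surgery itself: “Sunny?” is cut off from its parent “Season” and pinned to a value, while its influence on the downstream “Number of beach-goers” node is left intact.)

```python
# The counterfactual surgery from the example above, as an illustrative sketch.

def expected_beachgoers(day, sunny):
    # "Number of beach-goers" depends only on "Day of the week" and "Sunny?".
    base = 500 if day in ("Sat", "Sun") else 100
    return base * (3 if sunny else 1)

# Leave "Day of the week" at its actual value (Wednesday), force "Sunny?" to
# "Sunny", and ignore "Season" (the parent of the surgically set node).
actual = expected_beachgoers("Wed", sunny=False)          # what actually happened
counterfactual = expected_beachgoers("Wed", sunny=True)   # after the surgery

print(actual, counterfactual)  # 100 300 -> more beach-goers, so the claim comes out true
```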
But, even if we decide to use Pearl’s method, we are left with the choice of how to represent the agent's "possible choices" using a Bayes net. More specifically, we are left with the choice of what surgeries to execute, when we represent the alternative actions the agent “could” take.
There are at least three plausible alternatives:
Alternative One: “Actions CSAs”:
Here, we model the outside world however we like, but have the agent’s own “action” -- its choice of a_1, a_2, or ... , a_n -- be the critical “choice” node in the causal graph. For example, we might show Newcomb’s problem as follows:
The assumption built into this set-up is that the agent’s action is uncorrelated with the other nodes in the network. For example, if we want to program an understanding of Newcomb’s problem into an Actions CSA, we are forced to choose a probability distribution over Omega’s prediction that is independent of the agent’s actual choice.
How Actions CSAs reckon their coulds and woulds:
So, if causal decision theory is what I think it is, an “actions CSA” is simply a causal decision theorist. Also, Actions CSAs will two-box on Newcomb’s problem, since, in their network, the contents of box B are independent of their choice to take box A.
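(A minimal sketch of that reckoning, with made-up dollar payoffs and an assumed prior over Omega’s prediction: because the surgery is on the action node alone, the same fixed prediction distribution is used when evaluating each candidate action, so two-boxing comes out ahead by exactly $1,000 no matter what that distribution is.)

```python
# An "Actions CSA" on Newcomb's problem. Payoffs and the prediction prior are
# illustrative; the surgery is on the action node alone, so the distribution
# over Omega's prediction is held fixed across the candidate actions.

P_OMEGA_PREDICTS_ONE_BOX = 0.5   # any fixed value gives the same verdict

def expected_payoff(action, p_pred_one_box):
    # Box B holds $1M iff Omega predicted one-boxing; box A always holds $1K.
    payoff = p_pred_one_box * 1_000_000
    if action == "two-box":
        payoff += 1_000          # two-boxers also take box A
    return payoff

best = max(["one-box", "two-box"],
           key=lambda a: expected_payoff(a, P_OMEGA_PREDICTS_ONE_BOX))
print(best)   # -> "two-box": ahead by $1,000 for any fixed prediction distribution
```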
Alternative Two: “Innards CSAs”:
Here, we again model the outside world however we like, but this time we have the agent’s own “innards” -- the physical circuitry that interposes between the agent’s sense-inputs and its action-outputs -- be the critical “choice” node in the causal graph. For example, we might show Newcomb’s problem as follows:
Here, the agent’s innards are allowed to cause both the agent’s actions and outside events -- so, for example, we can represent Omega’s prediction as correlated with the agent’s action.
How Innards CSAs reckon their coulds and woulds:
Innards CSAs will one-box on Newcomb’s problem, because they reason that if their innards were such as to make them one-box, those same innards would cause Omega, after scanning their brain, to put the $1M in box B. And so they “choose” innards of a sort that one-boxes on Newcomb’s problem, and they one-box accordingly.
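(The same reckoning as a sketch, again with illustrative numbers: the surgery is on the innards node, which feeds both the agent’s action and, via Omega’s brain scan, Omega’s prediction, so the prediction tracks the candidate innards and one-boxing innards win.)

```python
# An "Innards CSA" on Newcomb's problem. The accuracy figure is an illustrative
# assumption; the surgery is on the innards node, and Omega's prediction is
# modeled as downstream of whichever innards are being considered.

OMEGA_ACCURACY = 0.99   # assumed reliability of Omega's scan-based prediction

def expected_payoff(innards):
    action = "one-box" if innards == "one-boxing innards" else "two-box"
    p_pred_one_box = OMEGA_ACCURACY if innards == "one-boxing innards" else 1 - OMEGA_ACCURACY
    payoff = p_pred_one_box * 1_000_000        # box B
    if action == "two-box":
        payoff += 1_000                        # box A
    return payoff

best = max(["one-boxing innards", "two-boxing innards"], key=expected_payoff)
print(best)   # -> "one-boxing innards": $990,000 expected vs. $11,000
```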
Alternative Three: “Timeless” or “Algorithm-Output” CSAs:
In this alternative, as Eliezer suggested in Ingredients of Timeless Decision Theory, we have a “Platonic mathematical computation” as one of the nodes in our causal graph, which gives rise at once to our agent’s decision, to the beliefs of accurate predictors about our agent’s decision, and to the decisions of similar agents in similar circumstances. It is the output of this mathematical function that our CSA uses as the critical “choice” node in its causal graph. For example:
How Timeless CSAs reckon their coulds and woulds:
Like innards CSAs, algorithm-output CSAs will one-box on Newcomb’s problem, because they reason that if the output of their algorithm was such as to make them one-box, that same algorithm-output would also cause Omega, simulating them, to believe they will one-box and so to put $1M in box B. They therefore “choose” to have their algorithm output “one-box on Newcomb’s problem!”, and they one-box accordingly.
Unlike innards CSAs, algorithm-output CSAs will also Cooperate in single-shot prisoner’s dilemmas against Clippy -- in cases where they think it sufficiently likely that Clippy’s actions are output by an instantiation of “their same algorithm” -- even in cases where Clippy cannot at all scan their brain, and where their innards play no physically causal role in Clippy’s decision. (An Innards CSA, by contrast, will Cooperate if having Cooperating-type innards will physically cause Clippy to cooperate, and not otherwise.)
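(A sketch of that prisoner’s dilemma case, with made-up payoffs and an assumed credence that Clippy instantiates the same algorithm: the surgery is on the algorithm’s output node, and Clippy’s move is modeled as a downstream copy of that same output, so Cooperate wins whenever the “same algorithm” credence is high enough.)

```python
# An "algorithm-output CSA" in a one-shot prisoner's dilemma against Clippy.
# Payoffs and the "same algorithm" credence are illustrative assumptions. The
# surgery is on the output of the agent's Platonic algorithm; if Clippy runs an
# instantiation of that algorithm, Clippy's move is a downstream copy of the
# same output node, even though the agent's innards never causally touch Clippy.

P_SAME_ALGORITHM = 0.9   # credence that Clippy's action is output by "my algorithm"

PAYOFF = {               # (my move, Clippy's move) -> my utility
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def expected_payoff(my_algorithm_output):
    # With probability P_SAME_ALGORITHM, Clippy's move equals the surgically set
    # output; otherwise assume, for illustration, that Clippy defects.
    same = PAYOFF[(my_algorithm_output, my_algorithm_output)]
    different = PAYOFF[(my_algorithm_output, "D")]
    return P_SAME_ALGORITHM * same + (1 - P_SAME_ALGORITHM) * different

best = max(["C", "D"], key=expected_payoff)
print(best)   # -> "C" once the credence exceeds ~0.71 for these payoffs
```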
Coming up: considerations as to the circumstances under which each of the above types of agents will be useful, under different senses of “useful”.
Thanks again to Z M Davis for the diagrams.