Cross-posted on By Way of Contradiction
You have probably heard the argument in favor of functional programming languages that functions act like functions in mathematics, and therefore have no side effects. When you call a function, you get an output, and, with the possible exception of running time, nothing matters except the output you get. This is in contrast with other programming languages, where a function might change the value of some global variable and have a lasting effect.
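As a rough illustration (a hypothetical Python sketch of my own, not anything from the original argument), here is the usual contrast people have in mind:

```python
# A "mathematical" function: its only observable product is its return value.
def square(x):
    return x * x

# A function with a lasting effect: it also mutates a global variable.
call_count = 0

def square_and_count(x):
    global call_count
    call_count += 1   # this change persists after the call returns
    return x * x
```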
Unfortunately, the truth is not that simple. All functions can have side effects. Let me illustrate this with Newcomb’s problem. In front of you are two boxes. The first box contains 1,000 dollars, while the second box contains either 1,000,000 dollars or nothing. You may choose to take either both boxes or just the second box. An Artificial Intelligence, Omega, can predict your actions with high accuracy, and has put the 1,000,000 dollars in the second box if and only if he predicts that you will take only the second box.
You, being a good reflexive decision agent, take only the second box, and it contains the 1,000,000 dollars.
Omega can be viewed as a single function in a functional programming language, which takes in all sorts of information about you and the universe, and outputs a single number: 1,000,000 or 0. This function has a side effect. The side effect is that you take only the second box. If Omega did not simulate you and just output 1,000,000, and you knew this, then you would take both boxes.
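Here is a minimal sketch of that picture, with Omega reduced to a Python function. Representing “you” as a zero-argument callable is my own illustrative choice, not part of the thought experiment:

```python
# Omega as a seemingly pure function: it takes a model of you, runs it, and
# returns the contents of the second box. The "side effect" in question is
# that evaluating omega involves running a copy of you.
def omega(you):
    predicted_choice = you()           # the simulation happens here
    return 1_000_000 if predicted_choice == "one box" else 0

def you():
    return "one box"                   # a one-boxing agent

print(omega(you))                      # 1000000
```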
Perhaps you are thinking “No, I took one box because I BELIEVED I was being simulated. This was not a side effect of the function, but instead a side effect of my beliefs about the function. That doesn’t count.”
Or, perhaps you are thinking “No, I took one box because of the function from my actions to states of the box. The side effect is in no way dependent on the interior workings of Omega, but only on the output of Omega’s function in counterfactual universes. Omega’s code does not matter. All that matters is the mathematical function from the input to the output.”
These are reasonable rebuttals, but they do not carry over to other situations.
Imagine two programs, Omega 1 and Omega 2. They both simulate you for an hour, then output 0. The only difference is that Omega 1 tortures the simulation of you for an hour, while Omega 2 tries its best to satisfy the values of the simulation of you. Which of these functions would you rather have run?
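To make the contrast concrete, here is a hypothetical sketch; the torture and value-satisfaction steps are, of course, just placeholder comments:

```python
# As input-output maps, omega_1 and omega_2 are the same function: both ignore
# what the simulation decides and return 0. They differ only in what happens
# inside the computation.
def omega_1(you):
    sim_result = you()   # run a simulation of you...
    # ...and torture the simulation for an hour.
    return 0

def omega_2(you):
    sim_result = you()   # run a simulation of you...
    # ...and do its best to satisfy the simulation's values for an hour.
    return 0
```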
The fact that you have a preference between these (assuming you do have a preference) shows that the function has a side effect that is not just a consequence of applying the function in counterfactual universes.
Further, notice that even if you never know which function is run, you still have a preference. It is possible to have preferences over things that you do not know about. Therefore, this side effect is not just a function of your beliefs about Omega.
Sometimes the input-output model of computation is an oversimplification.
Let’s look at an application of thinking about side effects to Wei Dai’s Updateless Decision Theory (UDT). I will not try to explain UDT here, so if you don’t already know about it, this post should not be read on its own.
UDT 1.0 is an attempt at a reflexive decision theory. It views a decision agent as a machine with code S, given input X, and having to choose an output Y. It advises the agent to consider different possible outputs Y, and to consider all the consequences of the fact that the code S, when run on X, outputs Y. It then outputs the Y which maximizes its perceived utility over all the perceived consequences.
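In code form, the procedure looks roughly like the following sketch; `utility_of_fact` is a hypothetical placeholder for however the agent evaluates the logical consequences of “S on input X outputs Y”, which is exactly the part UDT leaves abstract:

```python
# A rough sketch of the UDT 1.0 decision procedure (my paraphrase, not Wei
# Dai's code). The agent scores each candidate output Y by the utility of the
# logical fact "code S, run on input X, outputs Y" and returns the best one.
def udt_1_0(S, X, possible_outputs, utility_of_fact):
    return max(possible_outputs, key=lambda Y: utility_of_fact(S, X, Y))
```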
Wei Dai noticed an error with UDT 1.0 with the following thought experiment:
“Suppose Omega appears and tells you that you have just been copied, and each copy has been assigned a different number, either 1 or 2. Your number happens to be 1. You can choose between option A or option B. If the two copies choose different options without talking to each other, then each gets $10, otherwise they get $0.”
The problem is that all the reasons that S(1)=A are the exact same reasons why S(2)=A, so the two copies will probably output the same result. Wei Dai proposes a fix, UDT 1.1, which is that instead of choosing an output S(1), you instead choose a function S from {1,2} to {A,B}, out of the 4 available functions, which maximizes utility. I think this was not the correct correction, which I will probably talk about in the future. I prefer UDT 1.0 to UDT 1.1.
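For concreteness, here is a hypothetical sketch of the UDT 1.1 choice in this particular problem; the payoff function below just encodes Wei Dai’s “paid only if the copies differ” rule:

```python
from itertools import product

# UDT 1.1, sketched for the copy problem: choose a whole map from {1, 2} to
# {"A", "B"} (there are 4 of them), then apply the chosen map to your own input.
def udt_1_1(my_input, utility_of_map):
    candidate_maps = [dict(zip((1, 2), outputs))
                      for outputs in product(("A", "B"), repeat=2)]
    best_map = max(candidate_maps, key=utility_of_map)
    return best_map[my_input]

# Copies are paid only if they choose differently.
payoff = lambda m: 10 if m[1] != m[2] else 0

print(udt_1_1(1, payoff), udt_1_1(2, payoff))   # the two copies differ
```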
Instead, I would like to offer an alternative way of looking at this thought experiment. The error is in the fact that S only looked at the outputs, and ignored possible side effects. I am aware that when S looked at the outputs, it was also considering its output in simulations of itself, but those are not side effects of the function. Those are direct results of the output of the function.
We should look at this problem and think, “I want to output A or B, but in such a way that has the side effect that the other copy of me outputs B or A, respectively.” S could search through functions, considering their output on input 1 and the side effects of that function. S might decide to run the UDT 1.1 algorithm, which would have the desired result.
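A sketch of what that search might look like; everything here is a placeholder for machinery the post does not actually specify, in particular how side effects would be predicted and scored:

```python
# The agent considers candidate programs it could run, and scores each one by
# both its output on the actual input and the side effects of running it
# (for example, what the other copy ends up outputting).
def choose_program(candidates, my_input, predicted_side_effects, utility):
    return max(candidates,
               key=lambda prog: utility(prog(my_input),
                                        predicted_side_effects(prog)))
```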
The difference between this and UDT 1.1 is that in UDT 1.1, S(1) acts as though it has complete control over the output of S(2). In this thought experiment that seems like a fair assumption, but I do not think it is a fair assumption in general, so I am trying to construct a decision theory which does not have to make this assumption. This is because, if the problem were different, S(1) and S(2) might have different utility functions.
This seems very similar to what I named "UDT2" on the decision theory mailing list. Here's how I described it:
So aside from the unfortunate terminology, I think you're probably going in the right direction.
I never really got why UDT2 wasn't just a special case of UDT1, in which the set of outputs was restricted to outputs of the form "Turn into program T at time t". (This was Vladimir Nesov's immediate response on the mailing list.) I suppose that there should also be a corresponding "UDT2.1", in which the agent instead chooses among all input-output maps mapping inputs to outputs of the form "Turn into program T at time t".