Cross-posted on By Way of Contradiction

You have probably heard the argument in favor of functional programming languages that their functions act like functions in mathematics, and therefore have no side effects. When you call a function, you get an output, and, with the possible exception of running time, nothing matters except the output you get. This is in contrast with other programming languages, where a function might change the value of some global variable and have a lasting effect.
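For readers who have not seen the functional programming sense of the term, here is a minimal sketch of the distinction; the function names and the global counter are invented purely for illustration.

```python
counter = 0  # global state

def impure_add(x, y):
    """Returns a value AND mutates global state: a side effect in the
    functional programming sense."""
    global counter
    counter += 1
    return x + y

def pure_add(x, y):
    """Only the return value matters; calling it leaves no trace behind."""
    return x + y
```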

Unfortunately, the truth is not that simple. All functions can have side effects. Let me illustrate this with Newcomb’s problem. In front of you are two boxes. The first box contains 1,000 dollars, while the second box contains either 1,000,000 dollars or nothing. You may choose to take either both boxes or just the second box. An Artificial Intelligence, Omega, can predict your actions with high accuracy, and has put 1,000,000 dollars in the second box if and only if he predicts that you will take only the second box.

You, being a good reflexive decision agent, take only the second box, and it contains 1,000,000.

Omega can be viewed as a single function in a functional programming language, which takes in all sorts of information about you and the universe, and outputs a single number: 1,000,000 or 0. This function has a side effect. The side effect is that you take only the second box. If Omega did not simulate you and just output 1,000,000, and you knew this, then you would take both boxes.
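To make this framing concrete, here is a toy sketch of Omega as a function, under the loud assumption that "simulating you" can be represented as simply calling a Python function; every name here is invented for illustration.

```python
def omega(agent):
    """Return the contents of the second box: 1,000,000 or 0.

    Evaluating this function runs a simulation of the agent; that simulation,
    and your behavior in response to it, is the side effect in question.
    """
    simulated_choice = agent()  # the simulation happens during evaluation
    return 1_000_000 if simulated_choice == "one-box" else 0

def you():
    # A stand-in agent that one-boxes.
    return "one-box"

print(omega(you))  # 1000000
```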

Perhaps you are thinking “No, I took one box because I BELIEVED I was being simulated. This was not a side effect of the function, but instead a side effect of my beliefs about the function. That doesn’t count.”

Or, perhaps you are thinking “No, I took one box because of the function from my actions to states of the box. The side effect is in no way dependent on the interior workings of Omega, but only on the output of Omega’s function in counterfactual universes. Omega’s code does not matter. All that matters is the mathematical function from the input to the output.”

These are reasonable rebuttals, but they do not carry over to other situations.

Imagine two programs, Omega 1 and Omega 2. They both simulate you for an hour, then output 0. The only difference is that Omega 1 tortures the simulation of you for an hour, while Omega 2 tries its best to satisfy the values of the simulation of you. Which of these functions would you rather have run?

The fact that you have a preference between these (assuming you do have a preference) shows that the function has a side effect that is not just a consequence of the function’s outputs in counterfactual universes.

Further, notice that even if you never learn which function was run, you still have a preference. It is possible to have preferences over things that you do not know about. Therefore, this side effect is not just a function of your beliefs about Omega.

Sometimes the input-output model of computation is an oversimplification.

Let’s look at an application of thinking about side effects to Wei Dai’s Updateless Decision Theory (UDT). I will not try to explain UDT here, so this part should not be read on its own if you don’t already know about it.

UDT 1.0 is an attempt at a reflexive decision theory. It views a decision agent as a machine with code S which is given input X and must choose an output Y. It advises the agent to consider each possible output Y, along with all the consequences of the fact that the code S, when run on X, outputs Y. It then outputs the Y which maximizes the agent’s perceived utility of all the perceived consequences.
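Here is a rough sketch of that rule, not a faithful implementation; consequences_of and utility are hypothetical stand-ins for the agent’s logical world-model and preferences.

```python
def udt_1_0(possible_outputs, consequences_of, utility):
    """Pick the output Y maximizing the utility of the consequences of
    'the code S, when run on this input X, outputs Y'."""
    best_y, best_value = None, float("-inf")
    for y in possible_outputs:
        value = utility(consequences_of(y))
        if value > best_value:
            best_y, best_value = y, value
    return best_y

# Toy usage with made-up consequences and utilities (Newcomb-flavored).
print(udt_1_0(["one-box", "two-box"],
              consequences_of=lambda y: {"one-box": 1_000_000, "two-box": 1_000}[y],
              utility=lambda dollars: dollars))  # -> "one-box"
```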

Wei Dai noticed an error with UDT 1.0 with the following thought experiment:

“Suppose Omega appears and tells you that you have just been copied, and each copy has been assigned a different number, either 1 or 2. Your number happens to be 1. You can choose between option A or option B. If the two copies choose different options without talking to each other, then each gets $10, otherwise they get $0.”

The problem is that all the reasons that S(1)=A are the exact same reasons why S(2)=A, so the two copies will probably give the same result. Wei Dai proposes a fix, UDT 1.1, in which instead of choosing an output S(1), you choose the function S from {1,2} to {A,B}, out of the 4 available functions, which maximizes utility. I think this was not the correct correction, which I will probably talk about in the future. I prefer UDT 1.0 to UDT 1.1.
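For concreteness, here is a sketch of that move on the copying example, under the assumption that a copy’s utility is $10 when the two outputs differ and $0 otherwise; everything here is a toy illustration, not Wei Dai’s actual formalism.

```python
from itertools import product

def utility_of_map(m):
    # Each copy gets $10 iff the two copies output different options.
    return 10 if m[1] != m[2] else 0

# The four functions from {1, 2} to {A, B}.
candidate_maps = [dict(zip((1, 2), outputs)) for outputs in product("AB", repeat=2)]

best_map = max(candidate_maps, key=utility_of_map)

my_input = 1               # this copy was assigned the number 1
print(best_map[my_input])  # the copy with input 2 runs the same code and picks the other letter
```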

Instead, I would like to offer an alternative way of looking at this thought experiment. The error is in the fact that S only looked at the outputs and ignored possible side effects. I am aware that when S looked at the outputs, it was also considering its output in simulations of itself, but those are not side effects of the function. Those are direct results of the output of the function.

We should look at this problem and think, “I want to output A or B, but in such a way that has the side effect that the other copy of me outputs B or A, respectively.” S could search through functions, considering both their output on input 1 and their side effects. S might decide to run the UDT 1.1 algorithm, which would have the desired result.

The difference between this and UDT 1.1 is that in UDT 1.1, S(1) acts as though it had complete control over the output of S(2). In this thought experiment that seems like a fair assumption, but I do not think it is a fair assumption in general, so I am trying to construct a decision theory which does not have to make it. This is because if the problem were different, then S(1) and S(2) might have had different utility functions.

Comments (18)

This seems to be overloading the term "side effects". The functional programming concept of a side effect (which functional programming says its functions shouldn't have) is changing the global state of the program that invokes them, other than by returning a value. It makes no claims about these other concepts: a program being affected by analysis of the function's source code independent of invoking it, or the function running on morally relevant causal structure.

Yes, perhaps I should not have called it that, but the two concepts seem very similar to me. While the things I talk about do not fit the definition of side effect from functional languages, I think it is similar enough that the analogy should be made. Perhaps I should have made the analogy but used a different term.

Shmi:

I don't think your post has anything to do with functional programming or side effects of functions. You are operating at a different level of abstraction when discussing decision theories.

Yeah, I regret the language now.

We should look at this problem and think, “I want to output A or B, but in such a way that has the side effect that the other copy of me outputs B or A, respectively.” S could search through functions, considering both their output on input 1 and their side effects. S might decide to run the UDT 1.1 algorithm, which would have the desired result.

This seems very similar to what I named "UDT2" on the decision theory mailing list. Here's how I described it:

How to formulate UDT2 more precisely is not entirely clear yet. Assuming the existence of a math intuition module which runs continuously to refine its logical uncertainties, one idea is to periodically interrupt it, and during the interrupt, ask it about the logical consequences of statements of the form "S, upon input X, becomes T at time t" for all programs T and t being the time at the end of the current interrupt. At the end of the interrupt, return T(X) for the T that has the highest expected utility according to the math intuition module's "beliefs". (One of these Ts should be equivalent to "let the math intuition module run for another period and ask again later".)
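A very loose sketch of the loop this describes, with the math intuition module's beliefs replaced by a plain expected-utility function supplied by the caller; the names and numbers are invented for illustration.

```python
def udt2_interrupt(candidate_programs, expected_utility, x):
    """During one interrupt, pick the successor program T for which
    'S, upon input X, becomes T at time t' has the highest expected
    utility, then run T on X. One candidate T is meant to be
    'keep thinking and ask again at the next interrupt'."""
    best_t = max(candidate_programs, key=expected_utility)
    return best_t(x)

# Toy usage with two hypothetical successors and made-up utility estimates.
def one_boxer(x):
    return "one-box"

def two_boxer(x):
    return "two-box"

estimates = {one_boxer: 1_000_000, two_boxer: 1_000}
print(udt2_interrupt([one_boxer, two_boxer], estimates.get, x=None))  # -> "one-box"
```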

So aside from the unfortunate terminology, I think you're probably going in the right direction.

This seems very similar to what I named "UDT2" on the decision theory mailing list.

I never really got why UDT2 wasn't just a special case of UDT1, in which the set of outputs was restricted to outputs of the form "Turn into program T at time t". (This was Vladimir Nesov's immediate response on the mailing list.) I suppose that there should also be a corresponding "UDT2.1", in which the agent instead chooses among all input-output maps mapping inputs to outputs of the form "Turn into program T at time t".

Thanks.

Do you agree that after receiving input 1, S(1) cannot assume complete control over S(2)?

I don't see that UDT1.1 really does this. In UDT1.1, the agent tries to infer what will happen, conditioning on the agent's implementing a given input-output function. But "what will happen" doesn't just include whatever the agent has "complete control" over. Rather, "what will happen" includes whatever will happen throughout the entire world, including the things you are calling "side effects".

But the agent cannot choose the whole function. The agent has already seen the 1, and can therefore only choose the function's value at 1.

The agent has already seen the 1

The agent chooses the function prior to seeing the 1. The agent may have "received" the 1, but the agent hasn't processed the 1. The input has been "set aside" until after the function is chosen. The function is chosen without any regard for what the input was. Only after the function is chosen does the agent process the input, using that function.

That is why the two copies will pick the same function. Whether this means that one copy's choice "controls" the other copy's choice, or whether the other copy's choice is just a "side effect", seems to be just a matter of language.

Once the 1 has been processed, it might be too late. A single bit of irrelevant information seems easy to ignore, but what if the preferences of the agent after viewing the 1 are different from the preferences of the agent before viewing the 1? This might not be the case in this problem, but it is conceivable to me. Then the agent cannot and should not just forget the 1, unless he is forced to by some pre-commitment mechanism from the agent before viewing the 1.

I think that in this problem it makes sense for the agent to pretend it did not see the 1, but this might not be true in all cases.

For example, if the situation were that copy 1 would be terminated if the letters matched and copy 2 would be terminated if the letters did not match, I would choose randomly so that I could not be taken advantage of by my other copy.

A single bit of irrelevant information seems easy to ignore, but what if the preferences of the agent after viewing the 1 are different from the preferences of the agent before viewing the 1? [...] Then the agent cannot and should not just forget the 1 ...

To be clear, the input was given to the agent, but the agent didn’t "look at it" prior to choosing an input-output function. Imagine that the agent was handed the input, but, before looking at it, the agent stored the input in external memory. Before looking at what the input was, the agent chooses an optimal input-output function f. Once such a function is chosen, and only then, does the agent look at what the input was. The agent then outputs whatever f maps the input to. (A few years back, I wrote a brief write-up describing this in more detail.)
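A small sketch of the order of operations being described, with choose_best_map standing in, hypothetically, for the step of picking an input-output function without reference to the stored input.

```python
def act(unread_input, choose_best_map):
    stored = unread_input    # the input is set aside, sight unseen
    f = choose_best_map()    # chosen before the input is ever looked at
    return f(stored)         # only now is the input actually processed

# Toy usage: the chosen map sends 1 to "A" and 2 to "B".
print(act(1, lambda: {1: "A", 2: "B"}.get))  # -> "A"
```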

Now, if, as you suggest, looking at the input will change the agent’s preferences, then this is all the more reason why the agent will want to choose its input-output map before looking at the input. For, suppose that the agent hasn’t looked at the input yet. (The input was stored, sight-unseen, in external memory.) If the agent’s future preferences will be different, then those future preferences will be worse than the present preferences, so far as the present agent is concerned. Any agent with different preferences might work at cross purposes to your own preferences, even if this agent is your own future self. Therefore, if you anticipate that your preferences are about to be changed, then that should encourage you to make all important decisions now, while you still have the “right” preferences.

(I assume that we’re talking about preferences over states of the world, and not “preferences” resulting from mere ignorance. My preference is to get a box with a diamond in it, not a box that I wrongly think has a diamond in it.)

... unless he is forced to by some pre-commitment mechanism from the agent before viewing the 1.

I don’t think that “pre-commitment” is the right way to think about this. The agent begins the scenario running a certain program. If that program has the agent setting aside the input sight unseen and choosing an input-output function prior to looking at the input, and then following that input-output function, then that is just what the agent will do — not because of force or pre-commitment, but just because that is its program.

(I wrote a post a while back speculating on how this might “feel” to the agent “on the inside”. It shouldn’t feel like being forced to follow a previous commitment.)

I don't think we actually disagree on anything substantial.

I was partly going off the fact that in Wei Dai's example, the agent was told the number 1 before he was even told what the experiment was. I think the nature of our disagreement is only in our interpretations of Wei Dai's thought experiment.

Do you agree with the following statement?

"UDT1.1 is good, but we have to be clear about what the input is. The input is all information that you have not yet received (or not yet processed). All the other information that you have should be viewed as part of the source code of you decision procedure, and may change your probabilities and/or your utilities."

"UDT1.1 is good, but we have to be clear about what the input is. The input is all information that you have not yet received (or not yet processed). All the other information that you have should be viewed as part of the source code of you decision procedure, and may change your probabilities and/or your utilities."

Yes, I agree.

I could quibble with the wording of the part after the last comma. It seems more in line with the spirit of UDT to say that, if an agent's probabilities or utilities "change", then really what happened is that the agent was replaced by a different agent. After all, the "U" in "UDT" stands for "updateless". Agents aren't supposed to update their probabilities or utilities. But this is not a significant point.

lmm:

Simulating someone isn't a side effect, it's an output (at least, depending on your anthropic assumptions). There's something weird going on here: what if Omega does the simulation inside a rotating black hole, or otherwise causally separated from you? It seems paradoxical that this simulation should have any effect on your behavior. But I think the issue is in your decision theory, not in the simulation function.

In Newcomb's problem, the effect on your behavior doesn't come from Omega's simulation function. Your behavior is modified by the information that Omega is simulating you. This information is either independent of or dependent on the simulation function. If it is independent, this is not a side effect of the simulation function. If it is dependent, we can model this as an explicit effect of the simulation function.

While we can view the change in your behavior as a side effect, we don't need to. This article does not convince me that there is a benefit to viewing it as a side effect.

Your action is probably dependent only on the output of the Omega function (including its output in counterfactual worlds).

Your action is not dependent only on the output of the function in the actual world.

We can model it as an effect of the function, but not as an effect of the output. I noticed this, which is why I put in the statement:

No, I took one box because of the function from my actions to states of the box. The side effect is in no way dependent on the interior workings of Omega, but only on the output of Omega’s function in counterfactual universes. Omega’s code does not matter. All that matters is the mathematical function from the input to the output.

If we define side effect as relative to the one output of the function, then both examples are side effects. If we define side effect as relative to the entire function, then only the second example is a side effect.

Actually, I take that back; I think that both examples are side effects. Your output is a side effect of the Omega function, because it is not just dependent on what Omega does on different inputs; it is also dependent on what Omega does in counterfactual universes.

I am confused by this issue, and I am not trying to present a coherent solution as much as I am trying to stimulate discussion on thinking outside of the input-output model of decision theory.