Main claims:

  1. A lot of discussion of decision theories is really analysing them as decision-making heuristics for boundedly rational agents.
  2. Understanding decision-making heuristics is really useful.
  3. The quality of dialogue would be improved if it were explicitly recognised when decision theories are being discussed as heuristics.

Epistemic status: I’ve had a “something smells” reaction to a lot of discussion of decision theory. This is my attempt to crystallise out what I was unhappy with. It seems correct to me at present, but I haven’t spent much time trying to find problems with it, and it seems quite possible that I’ve missed something important. It’s also possible that this just recapitulates material in a post somewhere that I’ve not read.

Existing discussion is often about heuristics

Newcomb’s problem traditionally contrasts the decisions made by Causal Decision Theory (CDT) and Evidential Decision Theory (EDT). The story goes that CDT reasons that there is no causal link between a decision made now and the contents of the boxes, and therefore two-boxes. Meanwhile EDT looks at the evidence of past participants and chooses to one-box in order to get a high probability of being rich.

I claim that both of these stories apply the rules as simple heuristics to the most salient features of the case. As such, they are robust to variation in the fine specification of the case, so we can have a conversation about them. If we apply the theories with more sophistication, the answers become sensitive to the exact specification of the scenario, and it’s not obvious that either has to give the same answer as its simple version.

First consider CDT. It has high confidence that there is no causal link between choosing to one- or two-box and Omega’s earlier decision. But in practice, how high is this confidence? If the agent doesn’t understand exactly how Omega works, it might reserve some probability for the possibility of a causal link, and this could be enough to tip the decision towards one-boxing.
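To make “could be enough” concrete, here is a minimal sketch of the arithmetic. The payoffs ($1,000 in the transparent box, $1,000,000 if Omega predicted one-boxing) are the standard ones rather than anything specified above, so treat them as an assumption:

```python
# Minimal sketch, assuming the standard Newcomb payoffs (not given in
# the post): $1,000 always visible, $1,000,000 if Omega predicted one-boxing.
SMALL, BIG = 1_000, 1_000_000

def ev_difference(p_causal, q_big=0.5):
    """EV(one-box) - EV(two-box) for a CDT agent that reserves probability
    p_causal for 'my choice causally fixes the opaque box', and otherwise
    thinks the box is full with fixed probability q_big whatever it does."""
    ev_one = p_causal * BIG + (1 - p_causal) * q_big * BIG
    ev_two = p_causal * SMALL + (1 - p_causal) * (q_big * BIG + SMALL)
    return ev_one - ev_two  # simplifies to p_causal * BIG - SMALL

print(ev_difference(0.0005))  # ~ -500: two-box
print(ev_difference(0.002))   # ~ 1000: one-box
```

The q_big terms cancel, so on these payoffs one-boxing wins as soon as the reserved probability exceeds 0.001 -- a very small reservation indeed.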

On the other hand, EDT should properly be able to consider many sources of evidence besides the record of Omega’s past predictions. In particular, it could weigh all of the evidence that normally leads us to believe there is no backwards causation in our universe. Depending on how strong that evidence is, and how strong the evidence that Omega’s decision really is locked in, it could conceivably two-box.
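The same style of sketch works for the EDT side, with the same assumed payoffs. What matters is how far the agent’s all-things-considered probability of a full box moves when it conditions on its own action:

```python
# Minimal sketch (same assumed payoffs as above). p_full_given_* are EDT's
# all-things-considered probabilities that the opaque box is full,
# conditional on each action.
SMALL, BIG = 1_000, 1_000_000

def edt_prefers_one_box(p_full_given_one, p_full_given_two):
    ev_one = p_full_given_one * BIG
    ev_two = p_full_given_two * BIG + SMALL
    return ev_one > ev_two

# Driven by Omega's track record alone:
print(edt_prefers_one_box(0.99, 0.01))   # True: one-box
# After weighing strong evidence that the contents are already fixed, so
# that conditioning on the action barely moves the probability:
print(edt_prefers_one_box(0.5003, 0.5))  # False: two-box
```

Two-boxing wins whenever the gap between the two conditional probabilities falls below SMALL/BIG = 0.001, however good Omega’s track record looks in isolation.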

Note that I’m not asking here for a more careful specification of the set-up. Rather, I’m claiming that a more careful specification could matter -- and so to the extent that people are happy to discuss the problem without providing many more details, they’re discussing the virtues of CDT and EDT as heuristics for decision-making rather than as an ultimate normative matter (even if they’re not thinking of their discussion that way).

Similarly, So8res had a recent post discussing Newcomblike problems faced by people, and these are very clear examples when the decision theories are viewed as heuristics. If you allow the decision-maker to think carefully through all the unconscious signals sent by her decisions, it’s less clear that there’s anything Newcomblike.

Understanding decision-making heuristics is valuable

In claiming that a lot of the discussion is about heuristics, I’m not making an attack. We are all boundedly rational agents, and this will very likely be true of any artificial intelligence as well. So our decisions must perforce be made by heuristics. While it can be useful to study what an idealised method would look like (in order to work out how to approximate it), it’s certainly useful to study heuristics and determine what their relative strengths and weaknesses are.

In some cases we understand everything in the scenario well enough that our heuristics can essentially reproduce the idealised method. When the scenario contains other agents as complicated as ourselves, or more so, it seems this has to fail.

We should acknowledge when we’re talking about heuristics

By separating discussion of decision-theories-as-heuristics from discussion of decision-theories-as-idealised-decision-processes, we should improve the quality of dialogue in both parts. The discussion of the ideal would be less confused by examples that merely apply the heuristics. The discussion of the heuristics could become more relevant by allowing people to talk about features which only matter for heuristics.

For example, it is relevant if one decision theory tends to need a more detailed description of the scenario to produce good answers. It’s relevant if one is less computationally tractable. And we can start to formulate and discuss hypotheses such as “CDT is the best decision-procedure when the scenario involves no other agents, or only agents simple enough that we can model them well; Updateless Decision Theory is the best decision-procedure when the scenario involves other agents too complex to model well”.

In addition, I suspect that this would help reduce disagreements about the subject. Many disagreements in many domains are caused by people talking past each other, and discussing heuristics without labelling them as such seems likely to generate plenty of misunderstandings.

Comments

Meta comment: upvoted because, right or wrong, I'd very much like to see more posts like this.

I upvoted this comment because I agree with Mark about upvoting discussion posts that I would like to see more of.

I downvoted this comment because it is a clever ploy for karma that rests on exploiting LessWrongers' sometimes unnecessary enthusiasm for increasingly abstract and self-referential forms of reasoning, but otherwise adds nothing to the conversation.

Twist: By "this comment" I actually mean my comment, thereby making this a paraprosdokian.

(Mostly pasted from a conversation I had with esrogs)

While there's some sense in which we're eventually going to need to use decision-making heuristics, and in which using CDT on a graphical model of a world is just a heuristic, there's also a sense in which we don't yet know what we're approximating or how well our existing DTs approximate it.

My interest is in figuring out what the idealized process we want to approximate is first, and then figuring out the heuristics. The whole "Newcomblike problems are the norm" thing is building towards the motivation of "this is why we need to better understand what we're approximating" (although it could also be used to motivate "this is why we need better heuristics", but that was not my point).

Your objection seems similar to Vaniver's, in the main thread, that CDT could find a causal connection between its choice and the contents of the boxes in the Newcomb problem. This appeals to the intuition that there is some connection between the choice and the boxes (which there is), but fails to notice that the connection is acausal.

Or, in other words, it's a good intuition that "something like the CDT algorithm" can solve Newcomb's problem if you just give it a "good enough" world-model that allows it to identify these connections. But this involves using a non-causal world model. And, indeed, it is these non-causal world models that we must use to capture the intuition that you can win at Newcomb's problem using a non-causal decision theory.

Whenever there are non-causal connections (as in Newcomb problems) you need to have a world model containing non-causal connections between nodes.

(Side note: EDT is underspecified, and various attempts to fully specify it can make it equivalent to CDT or TDT or UDT, but we only found the latter two specifications after discovering TDT/UDT. It doesn't seem very useful to me to say that EDT works well/poorly unless you better specify EDT.)

I feel like there's this problem where when I say "look at this clear-cut case of there being non-causal connections", some people respond "but Newcomb problems are unrealistic", and then when I say "look at these realistic cases where there are realistically acausal connections", others say "ah, but this is not clear cut" -- and that's what you're doing when you say

If you allow the decision-maker to think carefully through all the unconscious signals sent by her decisions, it’s less clear that there’s anything Newcomblike

I'm sympathetic to this claim, but hopefully you can see the thing that I'm trying to point to here, which is this: there really are scenarios where there are acausal logical connections (that we care about) in the world.

Surely you agree that information can propagate acausally: e.g. if I roll a die, write down the result in two envelopes, send one to Alpha Centauri, and read the other after the first arrives, then I learn what is in the envelope at Alpha Centauri "faster than light" -- the physical causal separation does not affect the propagation of information. Causal connection and information propagation are often, but not always, related.

Similarly, the connections in the world that I care about are related to the information that I have, not to the causal connections between them. These things often correspond, but not always.

It is in this sense that CDT is doing the wrong thing: it's not the "evaluate counterfactuals and pick the best option" part that's the problem, it's the "how do you construct the counterfactuals (and on what world-model)" that is the problem.

We will inevitably need to use decision-making heuristics eventually, but at this point we don't even know what we're approximating, and we're decidedly not looking specifically for "good decision-making heuristics" right now. We're trying to figure out decision theory in an idealized/deterministic setting first, so that by the time we do resort to heuristics we'll have some idea of what it is we're trying to approximate.

I'm sympathetic to this claim, but hopefully you can see the thing that I'm trying to point to here, which is this: there really are scenarios where there are acausal logical connections (that we care about) in the world.

I agree with this -- I think the Absent-Minded Driver problem is a particularly clear-cut case.

I was partly trying to offer an explanation of what was going on in e.g. discussions of Newcomb's problem where people contrast CDT with EDT. Given that you say EDT isn't even fully specified, it seems pretty clear that they're interpreting it as a heuristic, but I'm not sure they're always aware of that.

Surely you agree that information can propagate acausally

Yes -- nice example.

We will inevitably need to use decision-making heuristics eventually, but at this point we don't even know what we're approximating, and we're decidedly not looking specifically for "good decision-making heuristics" right now.

I'm not entirely convinced by this. We can evaluate heuristics by asking "how well does an agent implementing this perform?" (which just requires us to have models of the world and of value). I certainly think we can make meaningful judgements that some heuristics are better than others without knowing what the idealised form is.

That said, I'm sympathetic to the idea that studying the idealised form might be more valuable (although I'm not certain about that). The thrust of the part of my post arguing that understanding heuristics is valuable was to make clear that I was pointing out that some people end up discussing heuristics without realising it, rather than attacking them.

I'm more interested in idealized decision theories than in heuristics, because until we figure out the idealized part, we don't know what we're trying to approximate or how well we're approximating it. All my decision theory posts on LW, as well as many of other people's posts, have followed this approach.

Also I think that discussing things at the level of "heuristics" might lead people to misconceptions. For example, UDT is not just for interacting with other agents. It's necessary in scenarios where you are the only agent, like the Absent-Minded Driver problem.

I'm open to the idea that the idealised form is more worth studying. I still think that a substantial fraction of discussion relates to heuristics (for instance the linked post saying that Newcomblike problems are common), and that having a way to notice this and separate it off would improve dialogue.

The Absent-Minded Driver problem is a good example of a problem which doesn't seem to fall prey to the idealised-versus-heuristics confusion.

I should clarify that I didn't mean to claim that UDT was just for interacting with other agents. I wanted to show the space of statements we could start discussing. At the heuristic level, there at least seem to be some cases where UDT is unnecessary, and more complex than is needed. It would be nice if there were a simple characterisation of how to recognise when you are in a scenario where CDT might fail.

I'm not sure that "Newcomblike problems are common" was intended as an argument about heuristics. To me it's more of an argument about which idealization we want to be studying in the long run.

A simpler version of the argument could go like this. When we build an AI, it will be able to make indistinguishable copies of itself, and the existence of these copies might depend on the AI's decisions and on facts about the world. VNM utility maximization doesn't cover such situations, so we need a more general theory that's as mathematically nice and reduces to VNM utility maximization when you have only one copy.

To be more precise, by "copies" I mean multiple decision nodes belonging to the same information set, as described in this post. These can also arise from mechanisms other than copying, such as forgetting things (AMD problem), having one agent simulate another (Newcomb's problem), or just putting multiple agents in the same situation (anthropic problems).
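To illustrate the information-set point with the cleanest case, here's a sketch of the Absent-Minded Driver problem, using the standard Piccione-Rubinstein payoffs (my numbers come from the usual statement of the problem, not from this thread):

```python
# Absent-Minded Driver sketch, standard payoffs: exiting at the first
# intersection pays 0, exiting at the second pays 4, continuing past
# both pays 1. The driver can't tell the intersections apart, so both
# decision nodes sit in one information set and must share a single
# probability p of continuing.

def expected_payoff(p):
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1  # = 4p - 3p^2

# Maximising 4p - 3p^2 over a grid recovers p = 2/3, expected payoff 4/3:
best = max(range(1001), key=lambda i: expected_payoff(i / 1000)) / 1000
print(best, expected_payoff(best))  # 0.667 1.333...
```

There is no way to pose this as a single choice at a single node; the optimum is a randomised policy chosen over the whole information set, which is (as I read the comment above) the sense in which "copies" force a more general theory.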

Does that answer your question about when UDT should be used?

Nice post.

Fundamentally, the Newcomb problem is about weighing our confidence in CDT against the evidence of Omega's predictive skill. If we have confidence in both, but they point in different directions, then it comes down to which we have more confidence in. This kind of trade-off happens all the time, just not to this degree.

We could write it out in Jaynes' notation, and that might make it clear to those to whom it isn't already.
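For instance, here is a minimal sketch of one way to write it, with symbols of my own choosing: let C be the proposition that the contents of the opaque box are independent of my choice (the independence CDT takes for granted), F that the opaque box is full, A1 the act of one-boxing, and X our background evidence, including Omega's track record. Then

```latex
P(F \mid A_1 X) = P(F \mid C A_1 X)\,P(C \mid A_1 X)
                + P(F \mid \bar{C} A_1 X)\,P(\bar{C} \mid A_1 X)
```

Given C, the act is no evidence about the box, so the first term reduces to P(F | C X); given not-C, P(F | ~C A1 X) is roughly Omega's accuracy. If the act itself carries no evidence about which structure holds, P(C | A1 X) = P(C | X), and the decision turns on exactly the trade-off described above: our confidence in independence against the strength of the evidence for Omega's skill.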