Decision Theories: A Semi-Formal Analysis, Part II
Or: Causal Decision Theory and Substitution
Previously:
0. Decision Theories: A Less Wrong Primer
1. The Problem with Naive Decision Theory
Summary of Post: We explore the role of substitution in avoiding spurious counterfactuals, introduce an implementation of Causal Decision Theory and a CliqueBot, and set off in the direction of Timeless Decision Theory.
In the last post, we showed the problem with what we termed Naive Decision Theory, which attempts to prove counterfactuals directly and pick the best action: there's a possibility of spurious counterfactuals which lead to terrible decisions. We'll want to implement a decision theory that does better; one that is, by any practical definition of the words, foolproof and incapable of error...

I know you're eager to get to Timeless Decision Theory and the others. I'm sorry, but I'm afraid I can't do that just yet. This background is too important for me to allow you to skip it...
Over the next few posts, we'll create a sequence of decision theories, each of which will outperform the previous ones in a wide range of plausible games: the new ones will do better in some games without doing worse in others.
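To make the CliqueBot mentioned in the summary concrete, here is a minimal sketch of the idea, under assumed names: CliqueBot cooperates exactly when its opponent's source code is an identical copy of its own, and defects against everyone else. A real tournament entrant would quine itself to obtain its own source; the string constant below stands in for that.

```python
# Minimal CliqueBot sketch (all names are illustrative, not from any real
# tournament API). A real CliqueBot would quine itself to get its own
# source; CLIQUEBOT_SOURCE stands in for that here.

CLIQUEBOT_SOURCE = "cliquebot-v1"  # stand-in for the program's own source

def cliquebot(opponent_source: str) -> str:
    """Cooperate iff the opponent is an exact copy of us; else defect."""
    return "C" if opponent_source == CLIQUEBOT_SOURCE else "D"

print(cliquebot(CLIQUEBOT_SOURCE))  # C  (mutual recognition between copies)
print(cliquebot("defectbot-v1"))    # D  (anyone else gets defection)
```

This clique behavior is what makes CliqueBot safe but brittle: it achieves mutual cooperation only with exact copies, and treats even trivially reformatted variants of itself as outsiders.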
Decision Theories: A Semi-Formal Analysis, Part I
Or: The Problem with Naive Decision Theory
Previously: Decision Theories: A Less Wrong Primer
Summary of Sequence: In the context of a tournament for computer programs, I give almost-explicit versions of causal, timeless, ambient, updateless, and several other decision theories. I explain the mathematical considerations that make decision theories tricky in general, and end with a bunch of links to the relevant recent research. This sequence is heavier on the math than the primer was, but is meant to be accessible to a fairly general audience. Understanding the basics of game theory (and Nash equilibria) will be essential. Knowing about things like Gödel numbering, quining and Löb's Theorem will help, but won't be required.
Summary of Post: I introduce a context in which we can avoid most of the usual tricky philosophical problems and formalize the decision theories of interest. Then I show the chief issue with what might be called "naive decision theory": the problem of spurious counterfactual reasoning. In future posts, we'll see how other decision theories get around that problem.
In my Decision Theory Primer, I gave an intuitive explanation of decision theories; now I'd like to give a technical explanation. The main difficulty is that in the real world, there are all sorts of complications that are extraneous to the core of decision theory. (I'll mention more of these in the last post, but an obvious one is that we can't be sure that our perception and memory match reality.)
In order to avoid such difficulties, I'll need to demonstrate decision theory in a completely artificial setting: a tournament among computer programs.
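A minimal sketch of the kind of tournament meant here, under assumed names: each entrant is a program that is shown its opponent's source code and must output a move, and the arena scores a one-shot Prisoner's Dilemma. The payoff numbers and function signatures below are illustrative assumptions, not a real tournament API.

```python
# Sketch of a source-sharing Prisoner's Dilemma tournament round.
# (my move, their move) -> my payoff; standard illustrative values.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def defectbot(my_src, opp_src):
    """Defects unconditionally, ignoring both source codes."""
    return "D"

def cooperatebot(my_src, opp_src):
    """Cooperates unconditionally, ignoring both source codes."""
    return "C"

def play(bot1, src1, bot2, src2):
    """Run one round: each bot sees both sources, then moves simultaneously."""
    m1 = bot1(src1, src2)
    m2 = bot2(src2, src1)
    return PAYOFFS[(m1, m2)], PAYOFFS[(m2, m1)]

print(play(defectbot, "defectbot", cooperatebot, "cooperatebot"))  # (5, 0)
```

The point of the setup is that a program's "decision theory" is just its function from (own source, opponent source) to a move, which lets us compare decision theories by how their programs score against a fixed field of opponents.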

No Universal Probability Space
This afternoon I heard a news story about a Middle Eastern country in which someone said of the defenses for a stockpile of nuclear weapons, "even if there is only a 1% probability of the defenses failing, we should do more to strengthen them given the consequences of their failure". I have nothing against this person's reasoning, but I do have an issue with where that 1% figure came from.
The statement above and others like it share a common problem: they are phrased such that it's unclear over what probability space the measure was taken. In fact, many journalists and other people don't seem especially concerned by this. Even some commenters on Less Wrong give little indication of the probability space over which they give a probability measure of an event, and nobody calls them on it. So what is this probability space they are giving probability measurements over?
If I'm in a generous mood, I might give the person presenting such a statement the benefit of the doubt and suppose they were unintentionally ambiguous. On the defenses of the nuclear weapon stockpile, the person might have meant to say "there is only a 1% probability of the defenses failing over all attacks", as in "in 1 attack out of every 100 we should expect the defenses to fail". But given both my experiences with how people treat probability and my knowledge of naive reasoning about probability, I am dubious of my own generosity. Rather, I suspect that many people act as though there were a universal probability space over which they may measure the probability of any event.
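The "1 attack out of every 100" reading has a concrete consequence worth spelling out: if the 1% figure is per attack and attacks are independent, the chance of at least one failure compounds quickly. A quick computation (the independence assumption is mine, for illustration):

```python
# If "1% probability of failure" is per attack, then under independence
# the chance of at least one failure across n attacks is 1 - 0.99**n.
p_fail_per_attack = 0.01

def p_at_least_one_failure(n_attacks: int) -> float:
    """Probability of at least one defense failure over n independent attacks."""
    return 1 - (1 - p_fail_per_attack) ** n_attacks

print(round(p_at_least_one_failure(1), 4))    # 0.01
print(round(p_at_least_one_failure(100), 4))  # 0.634
```

So a per-attack reading and a per-century reading of the same "1%" describe very different levels of risk, which is exactly why the underlying probability space matters.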