Hi all,

As part of my PhD I've written a paper developing a new approach to decision theory that I call Meta Decision Theory. The idea is that decision theory should take into account decision-theoretic uncertainty as well as empirical uncertainty, and that, once we acknowledge this, we can explain some puzzles to do with Newcomb problems and can come up with new arguments to adjudicate the causal vs evidential debate. Nozick raised the idea of taking decision-theoretic uncertainty into account, but he did not defend it at length or discuss its implications.

I'm not yet happy to post this paper publicly, so I'll just write a short abstract of the paper below. However, I would appreciate written comments on the paper. If you'd like to read it and/or comment on it, please e-mail me at will dot crouch at 80000hours.org. And, of course, comments in the thread on the idea sketched below are also welcome.

 

Abstract

First, I show that our judgments concerning Newcomb problems are stakes-sensitive. By altering the relative amounts of value in the transparent box and the opaque box, one can construct situations in which one should clearly one-box, and situations in which one should clearly two-box. A plausible explanation of this phenomenon is that our intuitive judgments are sensitive to decision-theoretic uncertainty as well as empirical uncertainty: if the stakes are very high for evidential decision theory (EDT) but not for causal decision theory (CDT), then we go with EDT's recommendation, and vice versa when the stakes are high for CDT but not for EDT.
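
To make this concrete, here is a minimal numeric sketch of the credence-weighted idea; the payoffs, predictor accuracy, and CDT's credence that the opaque box is full are all illustrative assumptions of mine, not figures from the paper:

```python
# Sketch: combine EDT's and CDT's expected values for a Newcomb problem
# by weighting each theory by one's credence in it. All numbers are
# illustrative assumptions.

def edt_values(opaque, transparent, accuracy=0.99):
    # EDT: one's choice is evidence about the prediction.
    one_box = accuracy * opaque
    two_box = accuracy * transparent + (1 - accuracy) * (opaque + transparent)
    return one_box, two_box

def cdt_values(opaque, transparent, p_full=0.5):
    # CDT: the opaque box's contents are causally fixed, so two-boxing
    # dominates by exactly the transparent amount.
    one_box = p_full * opaque
    two_box = p_full * opaque + transparent
    return one_box, two_box

def meta_choice(opaque, transparent, credence_edt=0.5):
    e1, e2 = edt_values(opaque, transparent)
    c1, c2 = cdt_values(opaque, transparent)
    mev_one = credence_edt * e1 + (1 - credence_edt) * c1
    mev_two = credence_edt * e2 + (1 - credence_edt) * c2
    return "one-box" if mev_one > mev_two else "two-box"

# High stakes for EDT (classic $1,000,000 vs $1,000): meta verdict is one-box.
print(meta_choice(opaque=1_000_000, transparent=1_000))
# High stakes for CDT (opaque box barely worth more, so EDT alone would
# still one-box, but little is lost by two-boxing): meta verdict is two-box.
print(meta_choice(opaque=1_100, transparent=1_000))
```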

Second, I show that, if we 'go meta' and take decision-theoretic uncertainty into account, we can get the right answer in both the Smoking Lesion case and the Psychopath Button case.

Third, I distinguish Causal MDT (CMDT) and Evidential MDT (EMDT). I look at what I consider to be the two strongest arguments in favour of EDT, and show that these arguments do not work at the meta level. First, I consider the argument that EDT gets the right answer in certain cases. In response, I show that one need only have a small credence in EDT in order to get the right answer in such cases. Second, I consider the "Why Ain'cha Rich?" argument. In response, I give a case where EMDT recommends two-boxing, even though two-boxing has a lower average return than one-boxing.

Fourth, I respond to objections. First, I consider and reject alternative explanations of the stakes-sensitivity of our judgments about particular cases, including Nozick's explanation. Second, I consider the worry that 'going meta' leads one into a vicious regress. I accept that there is a regress, but argue that the regress is non-vicious.

In an appendix, I give an axiomatisation of CMDT.

12 comments

Sorry if this is an uncomfortable question, but does your theory do anything UDT doesn't do? Or is the idea that this is a general process which both humans and AIs would use if a flaw in UDT is found, and if so, how does it differ from Nick Bostrom's parliamentary decision process? Wouldn't almost any AI or human which endorsed this theory be vulnerable to Pascal's Mugging, because at least one subtheory would be vulnerable to it?

Don't worry, that's not an uncomfortable question. UDT and MDT are quite different. UDT is a first-order decision theory. MDT is a way of extending decision theories, so that you take into account uncertainty about which decision theory to use. (So one can have meta causal decision theory, meta evidential decision theory, and (probably, though I haven't worked through it) meta updateless decision theory.)

UDT, as I understand it (and note that I'm not at all fluent in UDT or TDT), always one-boxes; whereas if you take decision-theoretic uncertainty into account, you should sometimes one-box and sometimes two-box, depending on the relative value of the contents of the two boxes. Also, UDT gets what most decision theorists consider the wrong answer in the smoking lesion case, whereas the account I defend, meta causal decision theory, doesn't (or at least needn't, depending on one's credences in first-order decision theories).

To illustrate, consider the case:

High-Stakes Predictor II (HSP-II): Box C is opaque; Box D, transparent. If the Predictor predicts that you choose Box C only, then he puts one wish into Box C, and also a stick of gum. With that wish, you save the lives of 1 million terminally ill children. If he predicts that you choose both Box C and Box D, then he puts nothing into Box C. Box D, which is transparent to you, contains an identical wish, also with the power to save the lives of 1 million children, so if one had both wishes one would save 2 million children in total. However, Box D contains no gum. One has two options only: choose Box C only, or choose both Box C and Box D.

In this case, intuitively, should you one-box or two-box? My view is clear: if someone one-boxes in the above case, they have made the wrong decision. And it seems to me that this is best explained by appeal to decision-theoretic uncertainty.
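
For instance, on one way of filling in the numbers (all illustrative assumptions: 1 unit per child saved, gum worth a negligible 1e-6 units, a perfect predictor, and even a 0.9 credence in EDT), the meta-level expectation favours two-boxing:

```python
# Rough sketch of HSP-II. Illustrative assumptions: 1 unit per child
# saved, gum worth a negligible 1e-6 units, a perfect predictor, and a
# flat credence q that Box C is full for CDT's calculation.

WISH = 1_000_000.0   # a wish saves 1 million children
GUM = 1e-6

# EDT (perfect predictor): choosing C only means C contains wish + gum;
# choosing both means C is empty and only D's wish pays out.
edt = {"one-box": WISH + GUM, "two-box": WISH}

# CDT: C's contents are fixed; taking D adds its wish either way.
q = 0.5
cdt = {"one-box": q * (WISH + GUM), "two-box": q * (WISH + GUM) + WISH}

credence_edt = 0.9   # even with high credence in EDT...
mev = {act: credence_edt * edt[act] + (1 - credence_edt) * cdt[act]
       for act in edt}
print(max(mev, key=mev.get))   # ...the meta verdict is "two-box"
```

The point is that EDT's stakes here are a stick of gum, while CDT's stakes are a million children's lives, so even a small credence in CDT swamps the calculation.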

Other questions: Bostrom's parliamentary model is different. Between EDT and CDT, the intertheoretic comparisons of value are easy, so there's no need to use the parliamentary analogy - one can just straightforwardly take an expectation over decision theories.
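
(Concretely, in my notation rather than the paper's: MEV(A) = sum over theories T of C(T) * EV_T(A), where C(T) is one's credence in decision theory T and EV_T(A) is the expected value that T assigns to act A.)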

Pascal's Mugging (aka the "fanaticism" worry). This is a general issue for attempts to take normative uncertainty into account in one's decision-making, and not something I discuss in my paper. But if you're concerned about Pascal's Mugging and, say, think that a bounded decision theory is the best way to respond to the problem, then at the meta level you should also have a bounded decision theory (and at the meta-meta level, and so on).

UDT is totally supposed to smoke on the smoking lesion problem. That's kinda the whole point of TDT, UDT, and all the other theories in the family.

It seems to me that your high-stakes predictor case is adequately explained by residual uncertainty about the scenario setup and whether Omega actually predicts you perfectly, which will yield two-boxing by TDT in this case as well. Literal, absolute epistemic certainty will lead to one-boxing, but this is a degree of certainty so great that we find it difficult to stipulate even in our imaginations.
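
To put a number on how stringent that certainty requirement is, a quick sketch using the illustrative gum-vs-children payoffs from the comment above:

```python
# With predictor accuracy a, EDT one-boxes in HSP-II only when
# a*(WISH+GUM) > WISH + (1-a)*(WISH+GUM), i.e. a > 1 - GUM/(2*(WISH+GUM)).
# Using the illustrative payoffs above (WISH = 1e6 units, GUM = 1e-6):
WISH, GUM = 1_000_000.0, 1e-6
threshold = 1 - GUM / (2 * (WISH + GUM))
print(f"{threshold:.15f}")   # ~0.9999999999995: certainty in all but name
```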

I ought to steal that "stick of chewing gum vs. a million children" to use on anyone who claims that the word of the Bible is certain, but I don't think I've ever met anyone in person who said that.

Can't we just assume that whatever we do was predicted correctly? The problem does assume an 'almost certain' predictor. Shouldn't that make two-boxing the worst move?

Basically yes. The choice is a simple one, with two-boxing being the obviously stupid choice.

In what ways does this differ from Nozick's recommendation in The Nature of Rationality, where he combines the results from EDT and CDT but gives them different weights depending on the priors (about the applicability to the situation and truth value of each decision theory)?

From memory, Nozick explicitly disclaims the idea that his view might be a response to normative uncertainty. Rather, he claims that EDT and CDT both have normative force and so should both be taken into account. While this may appear to be mere window dressing, it has fairly substantial impacts. In particular, no regress threatens Nozick's view, whereas the regress issue needs to be responded to in the normative uncertainty case.

Thanks for mentioning this - I discuss Nozick's view in my paper, so I'll edit my comment above to say so. A few differences:

First, as crazy88 says, Nozick doesn't think that the issue is a normative uncertainty issue - his proposal is another first-order decision theory, like CDT and EDT. I argue against that account in my paper. Second, and more importantly, Nozick just says "hey, our intuitions in Newcomb cases are stakes-sensitive" and moves on. He doesn't argue, as I do, that we can explain the problematic cases in the literature by appeal to decision-theoretic uncertainty. Nor does he use decision-theoretic uncertainty to respond to arguments in favour of EDT. Nor does he respond to regress worries, and so on.

Show us the money. Which is to say, show us what this stuff actually is, with axioms if necessary.

Otherwise I'm reduced to asking dumb questions like "why do we need this when UDT is probably better?"

What is decision-theoretic uncertainty? Uncertainty about which decision theory to use?

I'm guessing yes. I'm also guessing this paper is a spin-off from the question of moral uncertainty. Arguments about moral uncertainty face a similar kind of fragility (vicious regress) that needs addressing.

I just published a paper on exactly this topic in the journal Noûs. It is called "What to Do When You Don't Know What to Do When You Don't Know What to Do..."

http://onlinelibrary.wiley.com/doi/10.1111/nous.12010/abstract