Hi all,
As part of my PhD I've written a paper developing a new approach to decision theory that I call Meta Decision Theory. The idea is that decision theory should take into account decision-theoretic uncertainty as well as empirical uncertainty, and that, once we acknowledge this, we can explain some puzzles to do with Newcomb problems and come up with new arguments to adjudicate the causal vs evidential debate. Nozick raised the idea of taking decision-theoretic uncertainty into account, but he did not defend it at length or discuss its implications.
I'm not yet happy to post this paper publicly, so I'll just write a short abstract of the paper below. However, I would appreciate written comments on the paper. If you'd like to read it and/or comment on it, please e-mail me at will dot crouch at 80000hours.org. And, of course, comments in the thread on the idea sketched below are also welcome.
Abstract
First, I show that our judgments concerning Newcomb problems are stakes-sensitive. By altering the relative amounts of value in the transparent box and the opaque box, one can construct situations in which one should clearly one-box, and situations in which one should clearly two-box. A plausible explanation of this phenomenon is that our intuitive judgments are sensitive to decision-theoretic uncertainty as well as empirical uncertainty: if the stakes are very high for evidential decision theory (EDT) but not for causal decision theory (CDT), then we go with EDT's recommendation, and vice versa when the stakes are high for CDT but not for EDT.
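To make the stakes-sensitivity concrete, here is a minimal sketch of a credence-weighted meta-level comparison in a Newcomb problem. This is my own illustration, not the paper's formalism: the predictor accuracy, the 50/50 credence split between the two theories, and all payoff figures are assumptions, and I assume the theories' stakes can be compared via their expected-value margins.

```python
# Illustrative sketch (not the paper's formalism): a credence-weighted
# meta-level verdict in a Newcomb problem.
#   T = transparent-box payoff, M = opaque-box payoff,
#   p = assumed predictor accuracy, c = credence in EDT (rest in CDT).

def meta_recommendation(M, T, p=0.99, c=0.5):
    # EDT: one-boxing is evidence of a one-box prediction, so
    # EV(one-box) = p*M and EV(two-box) = T + (1-p)*M; the margin
    # in favour of one-boxing is therefore (2p - 1)*M - T.
    edt_gain_onebox = (2 * p - 1) * M - T
    # CDT: box contents are causally fixed, so two-boxing gains T for sure.
    cdt_gain_twobox = T
    # Weight each theory's stakes by one's credence in that theory.
    if c * edt_gain_onebox > (1 - c) * cdt_gain_twobox:
        return "one-box"
    return "two-box"

# High stakes for EDT (huge opaque box): the meta verdict is to one-box.
print(meta_recommendation(M=1_000_000, T=1_000))  # one-box
# High stakes for CDT (boxes nearly equal): the meta verdict is to two-box.
print(meta_recommendation(M=1_000, T=900))        # two-box
```

The flip between the two calls is the stakes-sensitivity in question: nothing changes except the relative sizes of the payoffs.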
Second, I show that, if we 'go meta' and take decision-theoretic uncertainty into account, we can get the right answer in both the Smoking Lesion case and the Psychopath Button case.
Third, I distinguish Causal MDT (CMDT) and Evidential MDT (EMDT). I look at what I consider to be the two strongest arguments in favour of EDT, and show that these arguments do not work at the meta level. The first is the argument that EDT gets the right answer in certain cases. In response, I show that one only needs a small credence in EDT in order to get the right answer in such cases. The second is the "Why Ain'cha Rich?" argument. In response, I give a case where EMDT recommends two-boxing, even though two-boxing has a lower average return than one-boxing.
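As a hedged illustration of the small-credence point (my own arithmetic, not the paper's: the predictor accuracy and payoffs are assumptions, and I assume theories' stakes compare via expected-value margins), one can solve for the minimal credence in EDT at which a credence-weighted meta verdict flips to one-boxing:

```python
# Minimal credence in EDT for the credence-weighted meta verdict to
# favour one-boxing, under assumed payoffs T (transparent box),
# M (opaque box) and predictor accuracy p. Derived from:
#   c * ((2p - 1)*M - T) > (1 - c) * T   =>   c > T / ((2p - 1)*M)

def edt_credence_threshold(T, M, p=0.99):
    return T / ((2 * p - 1) * M)

# With an assumed $1,000 transparent box and $1,000,000 opaque box,
# roughly 0.1% credence in EDT already tips the verdict to one-boxing.
print(edt_credence_threshold(T=1_000, M=1_000_000))
```

On these assumed numbers, even a sliver of credence in EDT is enough, which is the shape of the point being made in the abstract.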
Fourth, I respond to objections. First, I consider and reject alternative explanations of the stakes-sensitivity of our judgments about particular cases, including Nozick's explanation. Second, I consider the worry that 'going meta' leads one into a vicious regress. I accept that there is a regress, but argue that the regress is non-vicious.
In an appendix, I give an axiomatisation of CMDT.
Argh! Original post didn't go through (probably my fault), so this will be shorter than it should be:
First point:
CEA = Giving What We Can, 80,000 Hours, and a bit of other stuff
Reason: donations to CEA predictably increase the size and strength of the EA community, a good proportion of whom take long-run considerations very seriously and will donate to / work for FHI/MIRI, or otherwise pursue careers with the aim of extinction risk mitigation. It's plausible that $1 to CEA generates significantly more than $1's worth of x-risk value [note: I'm a trustee and founder of CEA].
Second point:
Don't forget CSER. My view is that they are even higher-impact than MIRI or FHI (though I'd defer to Seanoh if he disagreed). Reason: marginal donations will be used to fund program management + grant-writing, which would turn ~$70k into a significant chance of ~$1-$10mn, and launch what I think might become one of the most important research institutions in the world. They have all the background (high-profile people on the board; an already-written previous grant proposal that very narrowly missed out on being successful). High leverage!