Nisan comments on Welcome to Less Wrong! (July 2012)
Hi All,
I'm Will Crouch. Apart from one other comment, this is my first on LW. However, I know and respect many people within the LW community.
I'm a DPhil student in moral philosophy at Oxford, though I'm currently visiting Princeton. I work on moral uncertainty: on whether one can apply expected utility theory in cases where one is uncertain about what is of value, or what one ought to do. It's difficult to do so, but I argue that you can.
I got to know people in the LW community because I co-founded two organisations, Giving What We Can and 80,000 Hours, dedicated to the idea of effective altruism: that is, using one's marginal resources in whatever way the evidence supports as doing the most good. A lot of LW members support the aims of these organisations.
I wouldn't call myself a 'rationalist' without knowing a lot more about what that means. I do think that Bayesian epistemology is the best we've got, and that rational preferences should conform to the von Neumann-Morgenstern axioms (though I'm uncertain; there are quite a lot of difficulties for that view). I think that total hedonistic utilitarianism is the most plausible moral theory, but I'm extremely uncertain in that conclusion, partly because most moral philosophers and other people in the world disagree with me. I think that the more important question is what credence distribution one ought to have across moral theories, and how one ought to act given that credence distribution, rather than which moral theory one 'adheres' to (whatever that means).
I'm glad you're here! Do you have any comments on Nick Bostrom and Toby Ord's idea for a "parliamentary model" of moral uncertainty?
Thanks! Yes, I'm good friends with Nick and Toby. My view on their model is as follows. Sometimes intertheoretic value comparisons are possible: that is, we can make sense of the idea that the difference in value (or wrongness) between two options A and B on one moral theory is greater, lesser, or equal to the difference in value (or wrongness) between two options C and D on another moral theory. So, for example, you might think that killing one person in order to save a slightly less happy person is much more wrong according to a rights-based moral view than it is according to utilitarianism (even though it's wrong according to both theories). If we can make such comparisons, then we don't need the parliamentary model: we can just use expected utility theory.
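To spell out what "just use expected utility theory" amounts to here (a standard formulation, not a quote from the comment): writing $c_i$ for one's credence in moral theory $i$ and $V_i(A)$ for the value that theory assigns to option $A$ on the shared scale the intertheoretic comparison supplies, one simply picks the option that maximises

$$\mathrm{EV}(A) \;=\; \sum_i c_i \, V_i(A).$$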
Sometimes, though, it seems that such comparisons aren't possible. E.g. I add one person whose life isn't worth living to the population. Is that more wrong according to total utilitarianism or average utilitarianism? I have no idea. When such comparisons aren't possible, then I think that something like the parliamentary model is the right way to go. But, as it stands, the parliamentary model is more of a suggestion than a concrete proposal. In terms of the best specific formulation, I think that you should normalise incomparable theories at the variance of their respective utility functions, and then just maximise expected value. Owen Cotton-Barratt convinced me of that!
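A toy sketch of that variance-normalisation procedure, as I read it (the theory names, options, numbers, and credences below are made up for illustration; this is not Owen Cotton-Barratt's actual formulation): each incomparable theory's values over the available options are rescaled to mean zero and variance one, then combined by credence, and the option with the highest combined score is chosen.

```python
import numpy as np

# Illustrative placeholders: options and the value each (incomparable)
# theory assigns to them. The numbers are invented for the example.
options = ["A", "B", "C"]

theory_values = {
    "total_utilitarianism":   np.array([10.0, -5.0,  0.0]),
    "average_utilitarianism": np.array([ 2.0,  1.0, -3.0]),
}

# Credence in each theory (assumed, for illustration).
credences = {
    "total_utilitarianism":   0.6,
    "average_utilitarianism": 0.4,
}

def normalised(values):
    """Rescale a theory's values over the options to mean 0 and variance 1."""
    centred = values - values.mean()
    sd = centred.std()
    return centred / sd if sd > 0 else centred

# Credence-weighted sum of the normalised values, then maximise.
expected = sum(credences[t] * normalised(v) for t, v in theory_values.items())
best = options[int(np.argmax(expected))]

print(dict(zip(options, expected.round(3))), "->", best)
```

The point of normalising at the variance, as I understand it, is that every theory then gets the same overall "spread" of influence over the options, so no theory can dominate the verdict merely by stating its values in larger units.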
Sorry if that was a bit of a complex response to a simple question!