Moral uncertainty
In this sequence, I first give an overview of key points from existing work on moral uncertainty: what is it? Why does it matter? How should we make decisions when morally uncertain?
I then move on to two extensions of this work: how should we make decisions when we are both morally and empirically uncertain? And how can we combine the ideas covered thus far with established work on the value of information, to work out which moral (or empirical) learning to prioritise, and how much time and money to “spend” on it?
I plan to later add a post on a different way of conceptualising moral uncertainty, which may be of relevance for AI alignment work.
(I’m also considering later adding posts on:
- Various definitions, types, and sources of moral uncertainty.
- The idea of ignoring even very high credence in nihilism, on the grounds that nihilism is never decision-relevant.
- Whether it could make sense to give moral realism disproportionate influence over our decisions (compared to antirealism), based on the idea that, if realism is true, there may be “more at stake” than if antirealism is true.
I’d be interested in hearing whether people think those threads are likely to be worth pursuing.)
(Note: I first expanded and then abandoned my plans for this sequence as I got busy with other things, so there are likely some outdated or out-of-order overviews, references to what I'll cover in "my next post", etc.)