Related to: Moral uncertainty (wiki), Moral uncertainty - towards a solution?, Ontological Crisis in Humans.

Moral uncertainty (or normative uncertainty) is uncertainty about how to act given the diversity of moral doctrines. For example, suppose that we knew for certain that a new technology would enable more humans to live on another planet with slightly less well-being than on Earth[1]. An average utilitarian would consider these consequences bad, while a total utilitarian would endorse such technology. If we are uncertain about which of these two theories is right, what should we do? (LW wiki)
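(To make the disagreement concrete, here is a minimal worked example with made-up population and well-being numbers; the specific figures are assumptions chosen only to show why the two theories pull in opposite directions.)

```python
# Worked toy numbers for the average-vs-total utilitarian disagreement above.
# Populations and well-being levels are invented purely for illustration.
earth_pop, earth_wellbeing = 1_000_000_000, 10.0
colony_pop, colony_wellbeing = 1_000_000_000, 8.0   # more people, slightly worse off

total_before = earth_pop * earth_wellbeing
total_after = total_before + colony_pop * colony_wellbeing
average_before = total_before / earth_pop
average_after = total_after / (earth_pop + colony_pop)

print(total_after > total_before)      # True: the total utilitarian approves
print(average_after < average_before)  # True: the average utilitarian disapproves
```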

I have long been slightly frustrated by the discussions about moral uncertainty that I've seen. I suspect that the reason is that they've been unclear about what exactly they mean when they say that we are "uncertain about which theory is right" - what is uncertainty about moral theories? Furthermore, especially when discussing things in an FAI context, it feels like several different senses of moral uncertainty get mixed together. Here is my suggested breakdown, with some elaboration:

Descriptive moral uncertainty. What is the most accurate way of describing my values? The classical FAI-relevant question, this is in a sense the most straightforward one. We have some set of values, and although we can describe parts of them verbally, we do not have conscious access to the deep-level cognitive machinery that generates them. We might feel relatively sure that our moral intuitions are produced by a system that's mostly consequentialist, but suspect that parts of us might be better described as deontologist. A solution to descriptive moral uncertainty would involve a system capable of somehow extracting the mental machinery that produced our values, or creating a moral reasoning system which managed to produce the same values by some other process.

Epistemic moral uncertainty. Would I reconsider any of my values if I knew more? Perhaps we hate the practice of eating five-sided fruit and think that everyone who eats five-sided fruit should be thrown in jail, but if we found out that five-sided fruit made people happier and had no adverse effects, we would change our minds. This roughly corresponds to the "our wish if we knew more, thought faster" part of Eliezer's original CEV description. A solution to epistemic moral uncertainty would involve finding out more about the world.

Intrinsic moral uncertainty. Which axioms should I endorse? We might be intrinsically conflicted between different value systems. Perhaps we are trying to choose whether to be loyal to a friend or whether to act for the common good (a conflict between two forms of deontology, or between deontology and consequentialism), or we could be conflicted between positive and negative utilitarianism. In its purest form, this sense of moral uncertainty closely resembles what would otherwise be called a wrong question, one where

you cannot even imagine any concrete, specific state of how-the-world-is that would answer the question.  When it doesn't even seem possible to answer the question.

But unlike wrong questions, questions of intrinsic moral uncertainty are real ones that you need to actually answer in order to make a choice. They arise when different modules within your brain generate different moral intuitions, and are essentially power struggles between various parts of your mind. A solution to intrinsic moral uncertainty would involve somehow tipping the balance of power in favor of one of the "mind factions". This could involve developing an argument sufficiently persuasive to convince most parts of yourself, or self-modifying in such a way that one of the factions loses its sway over your decision-making. (Of course, if you already knew for certain which faction you wanted to expunge, you wouldn't need to do this in the first place.) I would roughly interpret the "our wish ... if we had grown up farther together" part of CEV as an attempt to model some of the social influences on our moral intuitions and thereby help resolve cases of intrinsic moral uncertainty.


This is a very preliminary categorization, and I'm sure that it could be improved upon. There also seem to exist cases of moral uncertainty which are hybrids of several categories - for example, ontological crises seem to be mostly about intrinsic moral uncertainty, but to also incorporate some elements of epistemic moral uncertainty. I also have a general suspicion that these categories still don't cut reality that well at the joints, so any suggestions for improvement would be much appreciated.

Comments:

Nice post. Do I understand you correctly that what you call "Intrinsic Moral Uncertainty" is the feeling of unresolved conflict between subsystems of our moral-intuition-generators? If so, I'd suggest calling it "Mere internal conflict" or "Not finished computing" or something more descriptive than "Intrinsic".

Thanks!

Kind of, though "intrinsic uncertainty" also suggests the possibility that the subsystems might be generating moral intuitions which simply cannot be reconciled and that the conflict might be unresolvable unless one is willing to completely cut away or rewrite parts of their own mind. (Though this does not presuppose that the conflict really is unresolvable, merely that it might be.) That makes "not finished computing" somewhat ill-fitting of a name, since that seems to imply that the conflict could be eventually resolved. Not sure if "mere internal conflict" really is it, either. "Intrinsic" was meant to refer to this kind of conflict emerging from an agent holding mutually incompatible intrinsic values, and it being impossible to resolve the conflict via appeal to instrumental considerations.

Is your purpose to describe the underlying moral issue or what different issues feel like?

For instance:

  • I feel morally certain but in reality would change my view if strong evidence were presented.
  • I feel morally certain and won't change my mind.
  • I feel descriptive uncertainty, and if I just read the right moral formulation I would agree that it described me perfectly
  • I feel descriptive uncertainty, but actually have a deep internal conflict
  • I feel deep internal conflict, and am right about it
  • I feel deep internal conflict, but more epistemic information would resolve the issue

I think the issue may be that you're trying to categorize "how it feels now" with "my underlying morality, to which I have limited access" in the same system. Maybe two sets of categories are needed? For instance, the top level system can experience descriptive uncertainty, but the underlying reality cannot.

ETA: Here's my attempt at extending the categories to cover both conscious feelings and the underlying reality.

Conscious moral states:

  • Moral certainty -- I feel like I know the answer with no serious reservations
  • Basic moral uncertainty -- I feel like I don't know how to tackle the problem at all
  • Descriptive moral uncertainty -- I feel like I know this, but can't come up with a good description
  • Epistemic moral uncertainty -- I feel like I need more information to figure it out
  • Conflicted moral uncertainty -- I feel like there are two values competing

Subconscious (real? territory?) moral states:

  • Moral certainty -- I have a clear algorithm for this problem
  • Basic moral uncertainty -- I just don't have an answer at all
  • Conflicted moral uncertainty -- I have two or more systems which give different answers
  • Epistemic moral uncertainty -- I need more information to give a confident answer

Interesting. Your extended categorization seems like it could very possibly be useful - I'll have to think about it some more.

I think that if you tried to examine my different moral subsystems absent a scenario, they might be irreconcilable, but in practice they normally wind up deciding on something just because reality almost never throws me a moral problem that's a perfect toss-up. If I constrain my moral opinions to the domains of actions that I can take, I'll feel pulled in different directions, but there's almost always a winning faction eventually.

Kind of, though "intrinsic uncertainty" also suggests the possibility that the subsystems might be generating moral intuitions which simply cannot be reconciled and that the conflict might be unresolvable unless one is willing to completely cut away or rewrite parts of their own mind.

Don't you think that things being perfectly balanced in a way such that there is no resolution is sort of a measure zero set of outcomes? In drift-diffusion models of how neural groups in human and animal brains arrive at decisions/actions (explained pretty well here), even if the drift term (the tendency to eventually favor one outcome) is zero, the diffusion term (the tendency to randomly select some outcome) would eventually result in a decision being made, with probability 1, with more subtle conflicts tending to take more time to resolve.
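As a rough illustration, here is a minimal simulation sketch of a drift-diffusion race between two "mind factions"; the drift, noise, and threshold parameters are all made up for illustration. Even with the drift set to zero, the noise term alone eventually pushes the accumulated evidence across one of the bounds, so a decision does get made - it just takes longer on average.

```python
# Minimal sketch of a drift-diffusion race between two "mind factions".
# All parameters are made up for illustration; this is not a fitted model.
import math
import random

def decide(drift=0.0, sigma=1.0, threshold=2.0, dt=0.01, max_steps=10**6):
    """Accumulate noisy evidence until one faction's bound is hit.

    Returns (choice, time): choice is +1 or -1, time is the decision time.
    Even with drift == 0, the random-walk (diffusion) term alone
    eventually crosses a bound with probability 1.
    """
    x = 0.0
    for step in range(1, max_steps + 1):
        x += drift * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        if abs(x) >= threshold:
            return (1 if x > 0 else -1), step * dt
    return 0, max_steps * dt  # practically never reached for these parameters

# A perfectly balanced conflict (drift = 0) still ends in a decision,
# it just takes longer on average than when one side has an edge.
times_balanced = [decide(drift=0.0)[1] for _ in range(1000)]
times_biased = [decide(drift=0.5)[1] for _ in range(1000)]
print(sum(times_balanced) / len(times_balanced))
print(sum(times_biased) / len(times_biased))
```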

This is why I prefer to think of those situations as "not finished computing" rather than "intrinsically unresolvable".

Do you maybe have a different notion of resolving a conflict, that makes unresolvedness a sustainable situation?

Don't you think that things being perfectly balanced in a way such that there is no resolution is sort of a measure zero set of outcomes?

I don't really have any good data on this: my preliminary notion that some such conflicts might be unresolvable is mostly just based on introspection, but we all know how reliable that is. And even if it was reliable, I'm still young and it could turn out that my conflicts will eventually be resolved as well. So if there are theoretical reasons to presume that there will eventually be a resolution, I will update in that direction.

That said, based on a brief skim of the page you linked, the drift-diffusion model seems to mostly just predict that a person will eventually take some action - I'm not sure whether it excludes the possibility of a person taking an action while nonetheless remaining conflicted about whether it was the right one. This seems to often be the case with moral uncertainty.

For example, my personal conflict gets rather complicated, but basically it's over the fact that I work in the x-risk field, which part of my brain considers the Right Thing To Do for all the usual reasons that you'd expect. But I also have strong negative utilitarian intuitions which "argue" that life going extinct would in the long run be the right thing, as it would eliminate suffering. I don't assign a very high probability to humanity actually surviving the Singularity regardless of what we do, so I don't exactly feel that my work is actively unethical, but I do feel that it might be a waste of time and that my efforts might be better spent on something that actually did reduce suffering while life on Earth still existed. This conflict keeps eating into my motivation and making me accomplish less, and I don't see it getting resolved anytime soon. Even if I did switch to another line of work, I expect that I would just end up conflicted and guilty over not working on AI risk.

(I also have other personal conflicts, but that's the biggest one.)

Shmi:

Good post, though a few concrete examples would be nice.

A utilitarian who maximizes a non-equally weighted sum of everyone's utility faces uncertainty over what weight to give everyone, including people not yet born and non-human lifeforms.

That's a tendentious way of expressing the uncertainty faced by this utilitarian agent. Why does he face uncertainty over what weight to give to non-human lifeforms, but not over what weight to give to other human beings?

This definitely seems to be a post-metaethics post: that is, it assumes something like the dominant EY-style metaethics around here (esp the bit about "intrinsic moral uncertainty"). That's fine, but it does mean that the discussion of moral uncertainty may not dovetail with the way other people talk about it.

For example, I think many people would gloss the problem of moral uncertainty as being unsure of which moral theory is true, perhaps suggesting that you can have a credence over moral theories much like you can over any other statement you are unsure about. The complication, then, is calculating expected outcomes when the value of an outcome may itself depend on which moral theory is true.

I'm not sure whether you'd class that kind of uncertainty as "epistemic" or "intrinsic".
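For what it's worth, here is a toy sketch of that "credence over moral theories" picture; the theories, credences, and choice-worthiness numbers are all invented, and the genuinely hard part (putting different theories' evaluations on a common scale) is simply assumed away.

```python
# Toy sketch of "maximize expected choice-worthiness" under credences over
# moral theories. The theories, credences, and numbers are invented purely
# for illustration; intertheoretic comparison is assumed rather than solved.

# Credence assigned to each moral theory.
credences = {"total_utilitarianism": 0.6, "average_utilitarianism": 0.4}

# Choice-worthiness each theory assigns to each option, assumed to be
# already on a common scale (a big assumption).
choice_worthiness = {
    "total_utilitarianism": {"colonize_planet": 10.0, "stay_on_earth": 0.0},
    "average_utilitarianism": {"colonize_planet": -5.0, "stay_on_earth": 0.0},
}

def expected_choice_worthiness(option):
    """Credence-weighted sum of each theory's evaluation of the option."""
    return sum(credences[t] * choice_worthiness[t][option] for t in credences)

options = ["colonize_planet", "stay_on_earth"]
best = max(options, key=expected_choice_worthiness)
print({o: expected_choice_worthiness(o) for o in options}, "->", best)
```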

You could also have metaethical uncertainty, which makes the whole thing even more complex.

For example, I think many people would gloss the problem of moral uncertainty as being unsure of which moral theory is true, perhaps suggesting that you can have a credence over moral theories much like you can over any other statement you are unsure about.

This would still require some unpacking of what they mean by a moral theory being true, though.

On descriptive moral uncertainty: it seems that the more accurately I try to describe my mode of moral thought, the more I have to add edge cases, if-chains, and spaghetti reasoning.

In some sense this is just plain introspection failure. But it also seems to me that Nature is a crappy programmer.

Additionally, I do wonder sometimes if "what is moral?" isn't inherently a wrong question.

The discussions of moral uncertainty normally use the commonly accepted ethical theories (deontology, consequentialism) and then propose some kind of solution, like intertheoretic comparison. The decompositions proposed in the post create more problems to solve, like the correct description of values. I assume the author would settle on some kind of consequentialism as the correct theory, which would settle the third kind of uncertainty. Feel free to respond if that's not the case.

I skimmed that thesis before making this post. Although it's true that it discusses only a subset of the problems reviewed in this post, my problem with it was that the author never seemed to examine the question of what moral uncertainty really is before proposing methods for resolving it. Well, he did, but I think he avoided the most important questions. For instance:

Suppose that Jo is driving home (soberly, this time), but finds that the road is blocked by a large deer that refuses to move. Suppose that her only options are to take a long detour, which will cause some upset to the friends that she promised to meet, or to kill the deer, roll it to the side of the road, and drive on. She’s pretty sure that non-human animals are of no moral value, and so she’s pretty sure that she ought to kill the deer and drive on. But she’s not certain: she thinks there is a small but sizable probability that killing a deer is morally significant. In fact, she thinks it may be something like as bad as killing a human. Other than that, her ethical beliefs are pretty common-sensical.

As before, it seems like there is an important sense of ‘ought’ that is relative to Jo’s epistemic situation. It’s just that, in this case, the ‘ought’ is relative to Jo’s moral uncertainty, rather than relative to her empirical uncertainty. Moreover, it seems that we possess intuitive judgments about what decision-makers ought, in this sense of ought, to do: intuitively, to me at least, it seems that Jo ought not to kill the deer.

It’s this sense of ‘ought’ that I’m studying in this thesis. I don’t want to get into questions about the precise nature of this ‘ought’: for example, whether it is a type of moral ‘ought’ (though a different moral ‘ought’ from the objective and subjective senses considered above), or whether it is a type of rational ‘ought’. But this example, and others that we will consider throughout the thesis, show that, whatever the nature of this ‘ought’, it is an important concept to be studied.

(Emphasis mine)

And:

To recap: I assume that we have a decision-maker who has a credence distribution; she has some credence in a certain number of theories, each of which provides a ranking of options in terms of choice-worthiness. Our problem is how to combine these choice-worthiness rankings, indexed to degrees of credence, into one appropriateness-ranking: that is, our problem is to work out what choice-procedures are most plausible. [...]

Before moving on, we should distinguish subjective credences, that is, degrees of belief, from epistemic credences, that is, the degree of belief that one is epistemically justified in having, given one’s evidence. When I use the term ‘credence’ I refer to epistemic credences (though much of my discussion could be applied to a parallel discussion involving subjective credences); when I want to refer to subjective credences I use the term ‘degrees of belief’.

The reason for this is that appropriateness seems to have some sort of normative force: if it is most appropriate for someone to do something, it seems that, other things being equal, they ought, in the relevant sense of ‘ought’, to do it. But people can have crazy beliefs: a psychopath might think that a killing spree is the most moral thing to do. But there’s no sense in which the psychopath ought to go on a killing spree: rather, he ought to revise his beliefs. We can only capture that idea if we talk about epistemic credences, rather than degrees of belief.

But there seems to be no discussion of what it means to have a "degree of belief" in an ethical theory (or if there is, I missed it). I would expect that reducing the concept to the level of cognitive algorithms would be a necessary first step before we could say anything useful about it.