What do we mean by “moral uncertainty”?
I was looking for a sentence like "We define moral uncertainty as ..." and nothing came up. Did I miss something?
I believe such a sentence is indeed lacking. One reason is that, as far as I can tell, there isn't really a crisp definition of moral uncertainty in terms of a small set of necessary and sufficient criteria. Instead, it's basically "Moral uncertainty is uncertainty about moral matters", which then has to be accompanied with a range of examples and counterexamples of the sort of thing we mean by that.
That's part of why I'm writing a series of posts on the various aspects of what we mean by moral uncertainty, rather than just putting a quick definition at the start of one post and then moving on to how to make decisions when morally uncertain. (Which is what I originally did for the earlier version of this other post, before receiving a comment there making a similar point to yours here! I think with such fuzzy terms it's somewhat hard to avoid such issues, though I do appreciate the feedback pushing me to keep trying harder :) )
Another reason such a sentence is lacking is that this post is intended to follow on from my prior one, where I open with a quote listing examples of moral uncertainties, and then write:
I consider the above quote a great starting point for understanding what moral uncertainty is; it gives clear examples of moral uncertainties, and contrasts these with related empirical uncertainties. From what I’ve seen, a lot of academic work on moral uncertainty essentially opens with something like the above, then notes that the rational approach to decision-making under empirical uncertainty is typically considered to be expected utility theory, then discusses various approaches for decision-making under moral uncertainty.
That’s fair enough, as no one article can cover everything, but it also leaves open some major questions about what moral uncertainty actually is.
So this post is meant to follow one in which many examples of moral uncertainty are given, and moral uncertainty is contrasted against various related concepts. Those together provide a better starting point than a "Moral uncertainty is defined as..." sentence can, given how fuzzy the concept of "moral uncertainty" is and how its definition would rely on other terms that do a lot of work (like what "moral matters" are).
But it's true that many people may read this post without having read that one, and without having a background familiarity with the term. So it may well be good to add near the start even just a sentence like "Moral uncertainty is uncertainty about moral matters", and perhaps an explicit note that I partly intend the meaning to become increasingly clear through the provision of various examples. I plan to touch up these posts once I'm done with the sequence of them, and I've made a note to maybe add something like that then.
It's also possible changing the title could help with that, but I didn't manage to think of anything that wasn't overly long or obscure and that better captured the content. (I did explicitly decide to avoid "What is moral uncertainty?", as that felt like even more of an oversell - one reasonably sized post can only tackle part of that question, not all of it.)
And if anyone has any particularly good ideas for snappy definitions or fitting titles, I'd be happy to hear them :)
Update: I'm now considering changing the title to "What kind of 'should' is involved in moral uncertainty?" It seems to me that's a bit of a weird title and it's less immediately apparent what it'd mean, but it might more accurately capture what's in this post. Open to people's thoughts on that.
I've just changed the title along those lines.
Just to give context for people reading the comments later, the original title was "What do we mean by 'moral uncertainty'?", which I now realise poorly captured the contents of the post.
Instead, it's basically "Moral uncertainty is uncertainty about moral matters", which then has to be accompanied with a range of examples and counterexamples of the sort of thing we mean by that.
What need is there for a definition of "moral uncertainty"? Empirical uncertainty is uncertainty about empirical matters. Logical uncertainty is uncertainty about logical matters. Moral uncertainty is uncertainty about moral matters. These phrases mean these things in the same way that "red car" means a car that is red; they do not need definitions.
If one does not believe there are objective moral truths, then "Moral uncertainty is uncertainty about moral matters" might feel problematic. The problem lies not in "uncertainty" but in "moral matters". But that is an issue you have postponed.
In my experience, stating things outright and giving examples helps with communication. You might not need a definition, but the relevant question is whether it would improve the text for other readers.
I agree to an extent. I do think, in practice, "It's like empirical uncertainty, but for moral stuff" really is sufficient for many purposes, for most non-philosophers. But, as commenters on a prior post of mine said, there are some issues not explained by that, which are potentially worth unpacking and which some people would like unpacked. For example...
You note the ambiguity with the term "moral matters", but there's also the ambiguity in the term "uncertainty" (e.g., the risk-uncertainty distinction people sometimes make, or different types of probabilities that might feed into uncertainties), which will be the subject of my next post. And when we talk about moral uncertainty, we very likely want to know what we "should" do given uncertainty, so what we mean by "should" there is also important and relevant, and, as covered in this post, is debated in multiple ways. And then, as you say, there's also the question of what moral uncertainty can mean for antirealists.
And as I covered in an earlier post, there are many other concepts which are somewhat similar to moral uncertainty, so it seems worth pulling those concepts apart (or showing where the lines really are just unclear/arbitrary). E.g., some philosophers seem fairly adamant that moral uncertainty must be treated totally differently to empirical uncertainty (e.g., arguing we basically just have to "Do what's actually right", even if we have no idea what that is, and can't meaningfully take into account our current best guesses as to moral matters). I'd argue (as would people like MacAskill and Tarsney) that realising how hard it is to separate moral and empirical uncertainty helps highlight why that view is flawed.
Do we even need the concept "moral uncertainty"? Would the more complete phrase "uncertainty of moral importance" be better, to distinguish it from "uncertainty about the effects of an action", which is just plain old rational uncertainty?
Not sure I understand what you mean there. The term "moral uncertainty" is (I believe) meant to be analogous to the term "empirical uncertainty", which was already established, and I think it covers what you mean by "uncertainty of moral importance", so I'm not sure why we'd come up with another, different-sounding, longer term.
Also, "uncertainty of moral importance" might make it sound like we want to just separately consider how morally important each given act may be. But it could be far more efficient to think that we're "morally uncertain" about things like the moral status of animals or whether to believe utilitarianism or virtue ethics, and then have our judgement of the "moral importance" of many different actions informed by that more general moral uncertainty. So I think "moral uncertainty" is also clearer/less misleading.
This is again analogous to empirical uncertainty, I believe. We don't want to just track our uncertainty about the effects of each given action. It's more natural and efficient to also track our uncertainty about certain states of the world (e.g., how many people are working on AGI and how many are working on AI safety), and have that feed into our uncertainty about the effects of specific actions (e.g. funding a certain AI safety project).
I also don't believe I've come across the term "rational uncertainty" before. It seems to me that we'd have empirical and moral uncertainty (as well as perhaps some other types of uncertainty, like meta-ethical uncertainty), and then put that together with a decision theory (which we may also have some uncertainty about), and get out what we rationally should do. See my two prior posts. I guess being uncertain about rationality might be like being uncertain about what decision theory to use to translate preferences and probability distributions into actions, but then we should call that decision-theoretic uncertainty. Or perhaps you mean "cases in which it is rational to be uncertain", in which case it seems that would be a subset of all other types of uncertainty.
Let me know if I'm misunderstanding you, though.
30 seconds of googling gave me this link, which might not be anything exceptional but at least it offers a couple of relevant definitions:
what should I do, given that I don’t know what I should do?
and
what should I do when I don’t know what I should do?
and later a more focused question
what am I (or we) permitted to do, given that I (or we) don’t know what I (or we) are permitted to do
At least they define what they are working on...
Those questions all help point to the concept at hand, but they're actually all about decision-making under moral uncertainty, rather than moral uncertainty itself. In the same way, empirical uncertainty is uncertainty about things like whether a stock will increase in price tomorrow, which can then be blended with other things (like decision theory and your preferences) to answer questions like "What should I do, given that I don't know whether this stock will increase in price tomorrow?"
I did start with a post on decision-making under moral uncertainty, but then got the feedback (which I've now realised was very much on point) that it would be worth stepping back quite a bit to discuss what moral uncertainty itself actually is.
Additionally, I'd say that none of those quoted questions at all disentangle moral from empirical uncertainty. For example, I could be 100% certain in some moral theory where infringing people's rights is bad but everything else is fine, but still not know what I should do, because I don't know which of a set of actions is least likely to end up infringing rights (an empirical uncertainty). So it'd be necessary to modify those questions to something like "What should I do, given that I don’t know what's morally right, despite knowing the relevant empirical facts?" ...which now involves two other terms worth defining/distinguishing, and so here we're getting into the complexities I mentioned :) (And back into the sort of stuff that my post prior to this one unpacked.)
But all that said, I think it probably is a good idea to open this post with something to point at the concept at hand, for those readers who didn't read the prior post and are relatively unfamiliar with the term "moral uncertainty". So I've added two short sentences at the start to accomplish that objective.
(For anyone who's for some reason interested, the original version of this post is here.)
This post follows on from my prior post; consider reading that post first.
We are often forced to make decisions under conditions of uncertainty. This may be empirical uncertainty (e.g., what is the likelihood that nuclear war would cause human extinction?), or it may be moral uncertainty (e.g., does the wellbeing of future generations matter morally?).
In my prior post, I discussed overlaps with and distinctions between moral uncertainty and related concepts. In this post, I continue my attempt to clarify what moral uncertainty actually is (rather than how to make decisions when morally uncertain, which is covered later in the sequence). Specifically, here I’ll discuss:
An important aim will be simply clarifying the questions and terms themselves. That said, to foreshadow, the tentative “answers” I’ll arrive at are:
This post doesn’t explicitly address what types of moral uncertainty would be meaningful for moral antirealists and/or subjectivists; I discuss that topic in a separate post.[1]
Epistemic status: The concepts covered here are broad, fuzzy, and overlap in various ways, making definitions and distinctions between them almost inevitably debatable. Additionally, I’m not an expert in these topics (though I have now spent a couple weeks mostly reading about them). I’ve tried to mostly collect, summarise, and synthesise existing ideas (from academic philosophy and the LessWrong and EA communities). I’d appreciate feedback or comments in relation to any mistakes, unclear phrasings, etc. (and just in general!).
Objective or subjective?
(Note: What I discuss here is not the same as the objectivism vs subjectivism debate in metaethics.)
As I noted in a prior post:
Hilary Greaves & Owen Cotton-Barratt give an example of this distinction in the context of empirical uncertainty:
Greaves & Cotton-Barratt then make the analogous distinction for moral uncertainty:
(This objective vs subjective distinction seems to me somewhat similar - though not identical - to the distinction between ex post and ex ante thinking. We might say that Alice made the right decision ex ante - i.e., based on what she knew when she made her decision - even if it turned out - ex post - that the other decision would’ve worked out better.)
MacAskill notes that, in both the empirical and moral contexts, “The principal argument for thinking that there must be a subjective sense of ‘ought’ is because the objective sense of ‘ought’ is not sufficiently action-guiding.” He illustrates this in the case of moral uncertainty with the following example:
Clearly, the best outcome would occur if Susan does C. But she doesn’t know that that would cause the best outcome, because she doesn’t know what the “true moral theory” is. She thus has no way to act on the advice “Just do what is objectively morally right.” Meanwhile, as MacAskill notes, “it seems it would be morally reckless for Susan not to choose option B: given what she knows, she would be risking severe wrongdoing by choosing either option A or option C” (emphasis added).
To capture the intuition that Susan should choose option B, and to provide actually followable guidance for action, we need to accept that there is a subjective sense of “should” (or of “ought”) - a sense of “should” that depends in part on what one believes. (This could also be called a “belief-relative” or “credence-relative” sense of “should”.)[2]
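To make the belief-relative sense of “should” concrete, here’s a minimal sketch of one well-known way of operationalising it: maximising expected choiceworthiness across the moral theories one has credence in (as discussed by MacAskill and others). The specific credences and choiceworthiness values below are entirely hypothetical, and the sketch assumes the theories’ choiceworthiness scales are intertheoretically comparable - a substantial assumption that the literature debates.

```python
# Hypothetical credences in two rival moral theories (values are illustrative only).
credences = {"T1": 0.3, "T2": 0.7}

# Hypothetical choiceworthiness of each option by each theory's lights,
# structured like the Susan example: A and C are each severely wrong
# according to one theory, while B is modestly acceptable to both.
choiceworthiness = {
    "A": {"T1": 10, "T2": -100},
    "B": {"T1": 4, "T2": 4},
    "C": {"T1": -100, "T2": 10},
}

def expected_choiceworthiness(option):
    """Credence-weighted average of an option's choiceworthiness across theories."""
    return sum(credences[t] * choiceworthiness[option][t] for t in credences)

# The belief-relative "should" picks the option with highest expected choiceworthiness.
best = max(choiceworthiness, key=expected_choiceworthiness)
print(best)  # -> "B": the "morally cautious" middle option wins
```

Note how this mirrors the structure of Susan’s case: neither theory’s favourite option wins, because each risks severe wrongdoing by the other theory’s lights, so the credence-relative “should” favours the hedged option B.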
An additional argument in favour of accepting that there’s a subjective “should” in relation to moral uncertainty is consistency with how we treat empirical uncertainty, where most people accept that there’s a subjective “should”.[3] This argument is made regularly, including by MacAskill and by Greaves & Cotton-Barratt, and it seems particularly compelling when one considers that it’s often difficult to draw clear lines between empirical and moral uncertainty (see my prior post). That is, if it’s often hard to say whether an uncertainty is empirical or moral, it seems strange to say we should accept a subjective “should” under empirical uncertainty but not under moral uncertainty.
Ultimately, most of what I’ve read on moral uncertainty is premised on there being a subjective sense of “should”, and much of this sequence will rest on that premise also.[4] As far as I can tell, this seems necessary if we are to come up with any meaningful, action-guiding approaches for decision-making under moral uncertainty (“metanormative theories”).
But I should note that some writers do appear to argue that there’s only an objective sense of “should” (one example, I think, is Weatherson, though he uses different language and I’ve only skimmed his paper). Furthermore, while I can’t see how this could lead to action-guiding principles for making decisions under uncertainty, it does seem to me that it’d still allow for resolving one’s uncertainty. In other words, if we do recognise only objective “oughts”:
Rational or moral?
For example, in the above example of Susan the doctor, are we wondering what she rationally ought to do, given her moral uncertainty about the moral status of chimpanzees, or what she morally ought to do?
It may not matter either way
Unfortunately, even after having read up on this, it’s not actually clear to me what the distinction is meant to be. In particular, I haven’t come across a clear explanation of what it would mean for the “should” or “ought” to be moral. I suspect that what that would mean would be partly a matter of interpretation, and that some definitions of a “moral” should could be effectively the same as those for a “rational” should. (But I should note that I didn’t look exhaustively for such explanations and definitions.)
Additionally, both Greaves & Cotton-Barratt and MacAskill explicitly avoid the question of whether what one “ought to do” under moral uncertainty is a matter of rationality or morality.[5] This does not seem to at all hold them back from making valuable contributions to the literature on moral uncertainty (and, more specifically, on how to make decisions when morally uncertain).
Together, the above points make me inclined to believe (though with low confidence) that this may be a “merely verbal” debate with no real, practical implications (at least while the words involved remain as fuzzy as they are).
However, I still did come to two less-dismissive conclusions:
I provide my reasoning behind these conclusions below, though, given my sense that this debate may lack practical significance, some readers may wish to just skip to the next section.
A rational “should” likely works
Bykvist writes:
It seems to me that that reasoning makes perfect sense, and that we can have valid, meaningful, action-guiding principles about what one rationally (and subjectively) should do given one’s moral uncertainty. This seems further supported by the approach Christian Tarsney takes, which seems to be useful and to also treat the relevant “should” as a rational one.
Furthermore, MacAskill seems to suggest that there’s a correlation between (a) writers fully engaging with the project of working out action-guiding principles for decision-making under moral uncertainty and (b) writers considering the relevant “should” to be rational (rather than moral):
A moral “should” may or may not work
I haven’t seen any writer (a) explicitly state that they understand the relevant “should” to be a moral one, and then (b) go on to fully engage with the project of working out meaningful, action-guiding principles for decision-making under moral uncertainty. Thus, I have an absence of evidence that one can engage in that project while seeing the “should” as moral, and I take this as (very weak) evidence that one can’t engage in that project while seeing the “should” that way.
Additionally, as noted above, MacAskill writes that Weatherson and Harman (who seem fairly dismissive of that project) see the relevant “should” as a moral one. Arguably, this is evidence that that project of finding such action-guiding principles won’t make sense if we see the “should” as moral (rather than rational). However, I consider this to also be very weak evidence, because:
Closing remarks
In this post, I’ve aimed to:
I hope this has helped give readers more clarity on the seemingly neglected matter of what we actually mean by moral uncertainty. (And as always, I’d welcome any feedback or comments!)
My next posts will continue in a similar vein, but this time building to the question of whether, when we’re talking about moral uncertainty, we’re actually talking about moral risk rather than about moral (Knightian) uncertainty - and whether such a distinction is truly meaningful. (To do so, I'll first discuss the risk-uncertainty distinction in general, and the related matter of unknown unknowns, before applying these ideas in the context of moral risk/uncertainty in particular.)
But the current post is still relevant for many types of moral antirealist. As noted in my last post, this sequence will sometimes use language that may appear to endorse or presume moral realism, but this is essentially just for convenience. ↩︎
We could further divide subjective normativity up into, roughly, “what one should do based on what one actually believes” and “what one should do based on what it would be reasonable for one to believe”. The following quote, while not directly addressing that exact distinction, seems relevant:
(I found that quote in this comment, where it’s attributed to MacAskill’s BPhil thesis. Unfortunately, I can’t seem to access that thesis, including via Wayback Machine.) ↩︎
Though note that Greaves and Cotton-Barratt write:
↩︎
In the following quote, Bykvist provides what seems to me (if I’m interpreting it correctly) to be a different way of explaining something similar to the objective vs subjective distinction.
Yet another (and I think similar) way of framing this sort of distinction could make use of the following two terms: “A criterion of rightness tells us what it takes for an action to be right (if it’s actions we’re looking at). A decision procedure is something that we use when we’re thinking about what to do” (Askell).
Specifically, we might say that the true first-order moral theory provides objective “criteria of rightness”, but that we don’t have direct access to what these are. As such, we can use a second-order “decision procedure” that attempts to lead us to take actions that are as close as possible to the best actions (according to the unknown criteria of rightness). To do so, this decision procedure must make use of our credences (beliefs) in various moral theories, and is thus subjective. ↩︎
Greaves & Cotton-Barratt write: “For the purpose of this article, we will [...] not take a stand on what kind of “should” [is involved in cases of moral uncertainty]. Our question is how the “should” in question behaves in purely extensional terms. Say that an answer to that question is a metanormative theory.”
MacAskill writes: “I introduce the technical term ‘appropriateness’ in order to remain neutral on the issue of whether metanormative norms are rational norms, or some other sort of norms (though noting that they can’t be first-order norms provided by first-order normative theories, on pain of inconsistency).” ↩︎