I recently finished a 9-post sequence on moral anti-realism over on the Effective Altruism Forum. This introduction explains my goals in writing the sequence and summarizes its main insights.
A little further down, I comment on which posts are most worth reading for people with particular interests, since I expect many readers to be interested in parts of the sequence but unwilling to commit to reading all of it.
Why I wrote this sequence
I had several goals in writing this sequence.
Laying out why moral anti-realism is likely correct
Building a framework for moral reasoning under moral anti-realism
Providing a shared basis for (meta)ethical discussion
Highlighting the implications for effective altruism
The first goal is self-explanatory, but I’ll say more about the second, third, and fourth.
Building a framework for moral reasoning under moral anti-realism
People build self-consistent philosophical frameworks to reason about the things they care about. Apparent shortcomings of a framework get patched as proponents find ways to accommodate them. For that reason, and because blind spots are generally hard to locate from within a given framework, I believe that the best way to change someone’s mind on realism vs. anti-realism is not by focusing on a single locus of disagreement, but by having people inhabit each other’s entire reasoning frameworks.
This is why, next to arguments against moral realism, I also outline how I would reason about morality as an anti-realist. Perhaps the strongest argument for moral anti-realism is that we can still reason about morality comprehensively and satisfyingly. If anything, I think figuring out moral reasoning under anti-realism is exciting. I found writing these parts of the sequence particularly interesting because it involved conceptual engineering, and the result feels like progress. Writing the posts forced me to think about moral reasoning in more detail than before.
Providing a shared basis for (meta)ethical discussion
Normative-ethical disagreements often spiral into metaethical disputes, which can be notoriously intractable when people seem to be using words differently or fail to understand each other’s concepts. I wrote this sequence hoping it could serve as a shared basis for discussion. When two people want to hash out their ethical or metaethical views, using this sequence (or select parts of it) as preparatory reading should make the discussion much more fruitful.
That was my intent, anyway. I’m not sure if I succeeded – an obvious failure mode would be if people read the sequence and mainly spend time discussing, “What do you think Lukas meant here?”
Highlighting the implications for effective altruism
Lastly, I wrote this sequence to point out that some effective altruists might be reasoning about morality in ways they wouldn’t reflectively endorse. I’ll now list some key takeaways from my sequence to give examples.
Note that many people in effective altruism already endorse moral anti-realism, so they may already agree with these “implications.” (But also, not all moral anti-realists think alike.)
Morality is “real” but under-defined
In theism vs. atheism debates, atheists don’t replace ‘God’ with anything when they stop believing in him. By contrast, in realism vs. anti-realism debates, anti-realists continue to think there’s “structure” to the domain in question. What changes is how they interpret and relate to that structure. Accordingly, moral anti-realism doesn’t mean “anything goes.” Therefore, the label “nihilism,” which some people use synonymously with “normative anti-realism,” seems uncalled for. The version of anti-realism I defend in my sequence fits the slogan “Morality is ‘real’ but under-defined.” Under-definedness means that there are multiple defensible answers to some moral issues. In particular, people may come away with different moral beliefs depending on their evaluative criteria, what they’re interested in, which perspectives they choose to highlight, etc.
For further discussion, see posts 2 and 6.
There’s no thoroughgoing wager for moral realism
A common intuition says that if morality is under-defined, what we do matters a lot less. People with this intuition may employ the strategy “wagering on moral realism” – they might act as though moral realism is true despite suspecting that it isn’t. In my sequence, I discuss two wagers for two types of moral realism. First, I argue that the wager for irreducible normativity only applies if one adheres “fanatically” to a philosophically controversial interpretation of moral terminology. Second, the wager for naturalist moral realism connects morality to tangible things we care about, but these things would continue to matter under moral anti-realism as well. (The sole difference is that according to anti-realism, not all philosophically sophisticated reasoners will favor the same systematization of target concepts like “altruism/doing good impartially.”) Therefore, the naturalist moral realism wager only applies to people who haven’t yet formed moral convictions.
For further discussion, see posts 3 and 4 (for the irreducible normativity wager), and post 9 (for the naturalist moral realism wager).
Moral uncertainty implies metaethical uncertainty or confident moral anti-realism
Not everything people call “moral realism” comes with action-relevant implications for effective altruists. If we consider how one could attain justified confidence in moral realism “worthy of the name,” we find – so I argue – that such confidence appears incompatible with remaining morally uncertain. Another way to say this is that the main route to moral realism is developing confidence in some object-level ethical theory.
For further discussion, see post 6.
Mistaken metaethics can inspire poorly-grounded moral views
Peter Singer wrote an essay in 1973 on why metaethics perhaps doesn’t deserve too much attention. In many ways, I still agree with this. However, some people go further and believe that metaethical disagreements are entirely unimportant. I disagree because there are situations where someone’s metaethical beliefs can directly cause them to reason about morality in ways they wouldn’t otherwise endorse. For example, a (false) belief in moral realism can induce more philosophical “bullet biting” than someone would otherwise accept, and it can keep someone “passive” in their moral reasoning, which could prevent them from forming convictions about what to value.
For further discussion, see posts 7 and 9.
Peer disagreement on moral questions isn’t always a cause for concern
If moral realism is true, you probably want to update towards the moral views of philosophically knowledgeable people and good/sophisticated reasoners. On the other hand, if moral anti-realism is true, you want to update towards people who are all of the above and who share your most fundamental intuition(s) about how to reason or what to value. By “your most fundamental intuition(s),” I mean things you are confident you’d never give up unless you absolutely had to. (Note that some people may not have “most fundamental intuitions” in that sense; finding a proposition “very counterintuitive,” for instance, is too weak to qualify.)
For further discussion, see posts 8 and 9.
“Moral uncertainty” is important but quite the rabbit hole
There’s a place for moral uncertainty under moral anti-realism. The critical question is, “What’s the object of our uncertainty? What is it that we can get right or wrong?” One answer is that someone can be right or wrong about their “idealized values” – what they’d value if they were “perfectly wise and informed.” However, when we go into the details of “perfectly wise and informed” – that is, when we specify hypothetical procedures for our moral reflection – we see that moral reflection is more an art than a science. Without a “true morality,” there’s no obvious endpoint at which to conclude one’s reflection. There are pitfalls to avoid and many judgment calls to make. In particular, one has to balance the risk of staying too passive and ending up with under-defined values against the risk of forming convictions too early, without understanding enough of the moral option space.
For further discussion, see post 9.
Which posts are the most interesting for you?
If you’re new to the topic, I recommend starting with post 1.
If you’re a non-naturalist moral realist, your views are the furthest away from mine. I recommend most of the posts in the sequence except posts 1 and 7. (Post 1 is skippable for people who are already well-versed in moral philosophy, and post 7 is primarily of interest to moral realist hedonist utilitarians.)
If you’re a naturalist moral realist, you may be interested in posts 6, 8, and 9, as well as post 7 in case arguments for hedonism play a role in your moral realism. You may also want to read what I say about moral naturalism in post 1, since I consider some versions of it too watered down to qualify as “moral realism.”
If you’re already a moral anti-realist, but you’re interested in my specific moral reasoning framework, I recommend reading posts 8 and 9. I recommend the same posts to AI alignment researchers working on understanding “human values.”
Post Summaries
Here are some quick summaries of the posts, in order:
I explain why I’m uninterested in the linguistic analysis of moral claims
I discuss different types of “moral realism” and why not all of them would have action-relevant implications for effective altruists
I describe two versions of moral realism I consider “worthy of the name”: moral realism based on irreducible normativity and (a version of) naturalist moral realism
I analyze various ways to interpret irreducible normativity as a concept, focusing primarily on arguments about the reference of words (“how words get their meaning”)
I conclude that these interpretations are either meaningless, pointless in practice, or that they coincide (for practical purposes) with naturalist moral realism
I argue that moral anti-realism can be existentially satisfying, i.e., that giving up on irreducibly normative concepts isn’t too costly for (most of) us
I differentiate between two ways of justifying hedonist axiology and explain why I don’t find them compelling
I highlight that arguments in favor of hedonist axiology often seem to presuppose moral realism, which doesn’t make sense according to the insight from post 6
I present a proposal for replacing the broad concept “moral uncertainty” with related, more detailed ideas
These ideas are: deferring to moral reflection (and uncertainty over one’s “idealized values”); having under-defined values (deliberately or by accident); metaethical uncertainty (and wagering on moral realism)
I discuss the intricacies of specifying reflection procedures (what we mean by “being perfectly wise and informed”)
I discuss idealized values and the degree they’re chosen or discovered
With the help of a mountain analogy, I discuss the naturalist moral realism wager and conclude that this type of realism doesn’t always trump the values we’d adopt under moral anti-realism
For reference, the titles of the nine posts are:
(1) What Is Moral Realism?
(2) Why Realists and Anti-Realists Disagree
(3) Against Irreducible Normativity
(4) Why the Irreducible Normativity Wager (Mostly) Fails
(5) Metaethical Fanaticism (Dialogue)
(6) Moral Uncertainty and Moral Realism Are in Tension
(7) Dismantling Hedonism-inspired Moral Realism
(8) The Life-Goals Framework: How I Reason About Morality as an Anti-Realist
(9) The “Moral Uncertainty” Rabbit Hole, Fully Excavated