I'm beginning this post with the assumption that there is no "moral reality," that is, that there is no objective truth about what we should or should not do.[1]

Instead, I will (I hope non-controversially) consider morality to be some set of emotional intuitions possessed by any given individual. These moral intuitions, like any other human quality, were formed through a combination of evolutionary benefit and cultural context. Notably, these intuitions do not form a particularly self-consistent system. A common example is the failure to scale: the resources we would expend to do some moral good X three times are not necessarily three times what we would expend to do X once.

Let us consider a prototypical rationalist, B, who dislikes this inconsistency and instead wants to create a set of general moral principles[2] that acts as a consistent formalization of his moral instincts. These principles may be rules of action, a metric for determining the goodness of ends, or really any systematization that deals in abstract moral terms. Getting started, he thinks to himself, "While I do not emotionally care 1000 times more that X happens one billion times than that X happens one million times, I ought to act as though I do, so that my behavior will be logically consistent."

Of course, since our moral instincts are inconsistent in many ways, anyone attempting to create a consistent system of formal moral principles must make some compromises. Suppose B likes pure utilitarianism as his formal moral system, but he instinctively despises the idea of a utility monster. B decides he has two options. First, he could stick with pure utilitarianism and concede that he is obligated by his moral system to satiate the utility monster. Or, second, he could abandon pure utilitarianism and augment his moral system so that he is no longer required to satiate the monster. 

B eventually arrives at a consistent system of general principles that he feels produces very similar answers to his moral intuition in most cases. B resolves to abide by this system, even when it produces an answer that clashes with his moral intuition. Over time, he may even develop a strong enough emotional attachment to his moral system that he really feels the utility monster deserves to be satiated, even though the idea was revolting to him at first glance.

I notice that a lot of people, if they have any interest in morality, seem to follow paths similar to B's with varying degrees of success. Within the rationalist community, I imagine there is a higher rate of adherence to moral principles even in situations where moral intuition clashes with them. Of course, most people haven't 100% completed this journey and allow for the fact that their moral rules might yet be tweaked a little to account for new ideas and new edge cases. But the arc is similar.

B, and many like B, probably didn't even notice that perceiving the failure to scale as an inconsistency was a choice. But in the absence of moral realism, there's no reason to care that our moral intuitions don't scale correctly unless we have an intuition that they should (or, at least, an intuition that the problems caused by this failure to scale are actual problems). Moral intuitions are like any other emotional preference, and the standards to which you hold them can only be emotional preferences themselves. 

The moral system "I will always act as I intuitively feel I should act in any particular situation" is not inherently inconsistent. But as soon as you add in "I intuitively feel that the actions I produce with my moral intuition should never add up to actions I would not have produced via intuition," you've created a problem. It's not that the decision to scale "incorrectly" is wrong in and of itself. It is the additional preference not to act that way which makes the system inconsistent. I could spend $5 apiece to do X on one hundred separate occasions, refuse to spend $500 to do one hundred X all at once, and still be adhering nicely to a satisfying moral system, so long as I only care about the particular scenarios and do not care about the sum of the results of my actions. Each choice I made was correct, and if the sum of the results is irrelevant then I have no problem. I absolutely concede that I would be open to a huge amount of manipulation of these moral intuitions, but that's not really the point. That would only be a problem if I had a preference to avoid those kinds of manipulations.
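One rough way to write this down (the notation is purely illustrative and carries no claim beyond the paragraph above): let $x(s)$ be the amount my intuition is willing to spend in a particular situation $s$. The case-by-case policy "in every situation $s$, spend $x(s)$" says nothing at all about bundles of situations. The "failure to scale" only becomes a failure relative to an extra, optional premise of additivity:

$$x(s_1 \text{ and } s_2 \text{ and } \dots \text{ and } s_{100}) \;=\; \sum_{i=1}^{100} x(s_i), \qquad \text{e.g. } \$500 = 100 \times \$5,$$

which is exactly the additional preference described above.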

While the desire to be consistent in the sum of the results of your actions may seem all but universal, my point is that it is still just a moral intuition. All we are doing when we call the failure to scale a "fallacy" and augment our moral systems to avoid it is prioritizing that one intuition over the other moral intuitions that deviate from it.

Now, with this subjectivity in mind, I want to consider a thinker C. Like our friend B, C found herself deeply concerned with the failure to scale. Like B, C decided she had a strong moral intuition for mathematical consistency, so she resolved to augment her moral system to account for this. Regardless of the situation, C will decide what her moral intuition states in a base case and then sum or multiply as needed to arrive at her final conclusion. 

But instead of setting out to create a self-contained general system of moral principles, C decides to keep her intuitive morality and deal primarily in particulars. She makes a practice of checking the conclusions of her intuitions and flagging any that she intuits are problematic. She then assesses whether she feels worse about the unintuitive conclusion or about the intuitions that led her there. C adds a general principle to augment her moral system if and only if her intuitive concern for consistency with the principle outweighs the combined strength of every intuition that deviates from it.
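Stated a bit more explicitly (the weights here are just an illustrative way of writing "cares more," not a measurement procedure C actually performs): writing $w(\cdot)$ for the strength of an intuition, C's rule is

$$\text{adopt principle } P \iff w(\text{consistency with } P) \;>\; \sum_{d \,\in\, \text{deviations}(P)} w(d).$$

B's procedure, by contrast, commits to the principles up front and then pays whatever the right-hand side turns out to cost.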

For example, C reads Feeling Moral and decides that, since she has a very strong intuition that saving lives is good and only a weak intuition that gambling with lives is bad, she will adopt a principle that gambling with lives in order to save lives is no issue (and thus should have no influence on her calculations).

Now imagine B and C are discussing ethical dilemmas. B and C have agreed that in the classic trolley problem they would both pull the lever. 

They move on to consider the Fat Man variant of the trolley problem.

B: I would push the man. I consider those four additional lives saved more important than the guilt I might feel.

C: I wouldn't push him. 

B: Don't tell me you're going to try to rationalize some actual difference between those two situations. The equation is still the same, just with the addition of more guilt you might have to feel. 

C: I have a strong moral intuition not to push this man to his death. I also have a strong moral intuition to save lives, but even after multiplying that life-saving intuition by four, it still comes out weaker than my intuition not to push a man to his death.[3] I would push him if it saved five lives on net, but no fewer, because that is how my math worked out.

B: But you would pull the lever, which is essentially the same thing. You've already admitted that, in the case of pulling the lever, the benefit of saving four lives outweighed the cost of knowingly causing the death of one person. Unless you genuinely feel that your additional guilt would be so bad that it is worse than four people dying, you ought to push the man, or else you're being inconsistent and deciding based on your own feelings, not logic.

C: You're right that I'm deciding based on my own feelings. In this case, since we're discussing feelings about moral questions and how I would act on them, those feelings are synonymous with moral intuitions. I have a moral intuition not to pull a lever that allows a train to crush someone. But I also have a moral intuition to save lives. The trolley problem pits these intuitions against each other. I did the math, and the intuition not to pull the lever lost out, so I would pull the lever. However, my intuition not to push the man is stronger than my intuition not to pull the lever, so it managed to win against four times my intuition to save a life.

B: Those intuitions are arbitrary and malleable, though. They're just feelings. There's no guarantee of consistency and no accountability. You might wake up tomorrow caring less about the man and decide you'd push him to save just three lives.

C: Then I would push him tomorrow. But I won't today.

B: Doesn't that arbitrary inconsistency bother you? You have no system, so you're less predictable and have to deal in particulars instead of abstractions! Instead of having a general principle about when to sacrifice one life for others that you commit to abiding by, you're just sort of going based on whatever you feel like. 

C: Yes, and that bothers me a bit. But it doesn't bother me nearly as much as it would bother me to augment my morality to account for it. 
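C's bookkeeping in the exchange above can be written out explicitly. The inequalities simply restate what she says, with $w(\cdot)$ again standing for the strength of an intuition rather than anything she actually computes:

$$\text{lever: } 4 \cdot w(\text{save a life}) > w(\text{pull the lever}) \;\Rightarrow\; \text{pull}$$

$$\text{fat man: } 4 \cdot w(\text{save a life}) < w(\text{push the man}) \;\Rightarrow\; \text{do not push}$$

Nothing requires $w(\text{pull the lever})$ and $w(\text{push the man})$ to be equal; treating them as interchangeable is exactly the move B insists on and C declines.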

It is a personal choice whether you, like B, prefer to have an entirely consistent set of general moral principles with only distant origins in moral intuition, or, like C, prefer to simply refine your particular intuitions directly. And by "personal choice," I mean that this decision depends on how strongly you hold the moral intuition to make yourself a set of principles, and on whether that intuition wins out over the intuitions you would have to sacrifice for the sake of those principles.

If you do that calculation and decide your desire for a generalized, systematized morality wins, then I, of course, have no issue with your decision. However, I have noticed that many people like B tend to slip into thinking that they are somehow more rational for having created a general, self-consistent system of moral principles that overrides their emotional preferences. Somewhere in the process of creating such a system, they forget that building it was an arbitrary and emotional choice in the first place. The management of inconsistencies can be done in a number of ways, and in a world without moral truth, no method of management is inherently better than another.

C is inconsistent by B's standards, which demand general principles. But she is perfectly consistent with her own standards, which do not. 

As for your own morality, whether you are more like B or more like C, I think it's a fun (and useful) little thought exercise to try to think like the other. If you're like C, how much would you really have to give up to adopt such a system? Plenty of systems leave great room for compromise. Plus, it would be quite nice to have that much simplicity and predictability. If you're like B, how much do you really value having a system in itself? Is it worth the compromises to your moral intuitions? Wouldn't it feel freeing to let go of it?

And, if you just so happen to have been struggling to try to come up with a general system that doesn't compromise too many of your intuitions, perhaps this can offer some respite.

  1. ^

I'm assuming for now that this is not a very unusual opinion. I could spend a while justifying it, but it essentially boils down to this: I have not yet seen any convincing justification for moral truth, and I have no reason to believe in such a thing. I welcome anyone who wants to point me towards their favorite justification for moral realism.

  2. ^

    Not to be confused with rules/principles of action. This is not about deontology specifically. Both a classic deontologist and a classic consequentialist could play the role of B. 

  3. ^

I acknowledge that, in reality, this is probably quite unintuitive. For most people who truly multiply their desire to save a life by four, I imagine that desire would win out against most types of (no-legal-consequences) murder. But rather than spend time explaining some less familiar variant of the trolley problem, we will just assume that C has some slightly unusual intuitions.

Comments

Every moral system works on central cases and falls apart at the tails (edge cases). It seems that in your dialog person C implicitly acknowledges this, while B is trying to accomplish the impossible task of constructing a moral system that is both always self-consistent and non-repugnant at the edges.

AGO replies:

B (at least B as I intended him) is trying to create consistent general principles that minimize that inevitable repugnancy. I definitely agree that it is entirely impossible to get rid of it, but some take the attitude of “then I’ll have to accept some repugnancy to have a consistent system” rather than “I shall abandon consistency and maintain my intuition in those repugnant edge cases.”

Perhaps I wasn’t clear, but that was at least the distinction I intended to convey.