Originally posted at Living Within Reason

Epistemic status: moderately certain, but open to being convinced otherwise

tl;dr: any ethical system that relies on ethical intuitions is just egoism that's given a veneer of objectivity.

Utilitarianism Relies on Moral Intuitions

Most rationalists are utilitarians, so much so that most rationalist writing assumes a utilitarian outlook. In a utilitarian system, whatever is "good" is what maximizes utility. Utility, technically, can be defined as anything, but most utilitarians attempt to maximize the well-being of humans and, to some extent, animals.

I am not a utilitarian. I am an egoist. I believe that the only moral duty that we have is to act in our own self-interest (though generally, it is in our self-interest to act in prosocial ways most of the time). I feel a certain alienation from a lot of rationalist writing because of this difference. However, I have long suspected that most utilitarian thinking is largely the same thing as egoism.

Recently, Ozy of Thing of Things wrote a post that illustrates this point well. Like a lot of rationalist writing, this is addressing an ethical dilemma from a utilitarian framework. Ozy is trying to decide what creatures have a right to life, specifically considering humanely-raised animals, human fetuses, and human babies. From the post:

Imagine that, among very wealthy people, there is a new fad for eating babies. Our baby farmer is an ethical person and he wants to make sure that his babies are farmed as ethically as possible. The babies are produced through artificial wombs; there are no adults who are particularly invested in the babies’ continued life. The babies are slaughtered at one month, well before they have long-term plans and preferences that are thwarted by death. In their one month of life, the babies have the happiest possible baby life: they are picked up immediately whenever they cry, they get lots of delicious milk, they’re held and rocked and sung to, their medical concerns are treated quickly, and they don’t ever have to sit in a poopy diaper. In every way, they live as happy and flourishing a life as a two-week-old baby can. Is the baby farm unethical?
If you’re like me, the answer is a quick “yes.”

Ozy's main evidence for their conclusion is specifically stated to be their moral intuition, resting on the idea that "I am horrified by the idea of a baby farm. I am not horrified by the idea of a beef cow farm." Ozy goes on to examine this intuition, weighs it against other moral intuitions, and ultimately concludes that it is correct.

This is not surprising given that the ultimate authority for any consequentialist system is the individual's moral intuitions (see Part 1). In a utilitarian system, moral intuitions "are the only reason you believe morality exists at all. They are also the standards by which you judge all moral philosophies." People have many different moral intuitions, and must weigh them against one another when it comes to difficult ethical questions, but at bedrock, moral intuitions are the basis for the entire ethical system.

Moral Intuitions Are Subjective Preferences

From the previously-linked FAQ:

Moral intuitions are people's basic ideas about morality. Some of them are hard-coded into the design of the human brain. Others are learned at a young age. They manifest as beliefs (“Hurting another person is wrong"), emotions (such as feeling sad whenever I see an innocent person get hurt) and actions (such as trying to avoid hurting another person.)

Notice that nothing in this explanation appeals to anything objective. Arguably, "hard-coded into the design of the human brain" could be seen as objective, but it is also trivial. If I do not share a specific intuition, then tautologically it is not hard-coded into my brain, so it cannot be used to resolve a difference of opinion.

Under an egoist worldview, there are still ethics, but they are based on self-interest. What is "good" is merely what I prefer. Human flourishing is good because the idea of human flourishing makes me smile. Kicking puppies is bad because it upsets me. These are not moral rules that can bind anyone else. They are merely my preferences, and to the extent that I want others to conform to my preferences, I must convince or coerce them.

The egoist outlook is entirely consistent with the utilitarian one. Consider the above paragraph, but rewritten to emphasize the subjectivity:

[My] moral intuitions are [my preferences for how the world should be]. Some of them are hard-coded into the design of [my] brain. Others are learned at a young age. They manifest as beliefs (“Hurting another person is wrong"), emotions (such as feeling sad whenever I see an innocent person get hurt) and actions (such as trying to avoid hurting another person.)

The language is changed, but the basic idea is the same. It emphasizes that my moral rules are based entirely on what appeals to me. At its heart, any system that relies on moral intuitions is indistinguishable from egoism.

Why Does This Matter?

In a sense, my conclusion here is rather trivial. Who cares if utilitarian ethics and egoism are largely the same thing? As an egoist, shouldn't I be happy about this and encourage more people to be utilitarians?

The reason why I would prefer that more people explicitly acknowledge the egoist foundations of their moral theory is that I believe moral judgment of others does great harm to our society. Utilitarianism dresses itself up as objective, and therefore leaves room to decide that other people have moral obligations, and that we are free (or even obligated) to judge and/or punish them for their moral failings.

Moral judgment of others makes us unlikely to accept that nobody deserves to suffer. If someone behaves immorally, we often feel that it is "justice" to punish that person regardless of the practical effects of the punishments. It leads to outrage culture and is a major impediment to adopting an evidence-based criminal justice system.

If we’re insisting on punishing someone for reasons other than trying to influence (their or others’) future behavior, we are not making the world a better place. We are just being cruel. Nobody deserves to suffer. Even the worst people in the world are just acting according to their brain wiring. By all means, we should punish bad behavior, but we should do it in a way that’s calculated to influence future behavior. We should recognize that, if we truly lived in a just world, everyone, even the worst of us, would have everything they want.

If, instead, we acknowledge that our moral beliefs are merely preferences for how we would like the world to work, we will inflict less useless suffering. If we acknowledge that attempting to force our morality on someone else is inherently coercive, we will use it only in circumstances where we feel that coercion is justified. We will stop punishing people based on the idea of retribution and can instead adopt an evidence-based system that only punishes people if the punishments are reasonably likely to create better future outcomes.

I have a preference for less suffering in the world. If you share that preference, consider adopting an explicitly egoist morality and encouraging others with similar preferences to do the same. We will never tame our most barbaric impulses unless we abandon the idea that we are able to morally judge others.


This strongly resembles the argument given by Subhan in EY's post Is Morality Preference?, with a side order of Fake Selfishness. You might enjoy reading those posts along with others in their respective sequences. ("Is Morality Preference?" was part of the original metaethics sequence but didn't make the cut for Rationality: AI to Zombies.)

More to the point, the biggest mistake I see here is the one addressed in The Domain of Your Utility Function: yes, my moral preferences are a part of my map rather than the territory, but there's still a damn meaningful difference between egoism (preferences that point only to the part of my map labeled "my future experiences") and my actual moral preferences, which point to many other parts of the map as well.

I'm sceptical that pushing egoism over utilitarianism will make people less prone to punish others.

I don't know of any system of utilitarianism that places terminal value on punishing others, and (although a few probably exist) I don't know of anyone who identifies as a utilitarian who places terminal value on punishing others. In fact, I'd guess that the average person identifying as a utilitarian is less likely to punish others (when there is no instrumental value to be had) than the average person identifying as an egoist. After all, the egoist has no reason to tame their barbaric impulses: if they want to punish someone, then it's correct to punish that person.

I agree that your version of egoism is similar to most rationalists' versions of utilitarianism (although there are definitely moral realist utilitarians out there). Insofar as we have time to explain our beliefs properly, the name we use for them (hopefully) doesn't matter much, so we can call it either egoism or utilitarianism. When we don't have time to explain our beliefs properly, though, the name does matter, because the listener will use their own interpretation of it. Since I think that the average interpretation of utilitarianism is less likely to lead to punishment than the average interpretation of egoism, this doesn't seem like a good reason to push for egoism.

Maybe pushing for moral anti-realism would be a better bet?

I often point out that I'm a consequentialist, but not a utilitarian. I differ from you in that I don't think of egoism as my source of truth for moral questions, but aesthetics. My intuitions and preferences are about how I like to see the world, and I do believe they generalize (not perfectly) to how others can cooperate and function together better. As part of this, I do have indexical preferences, which is compatible with some amount of egoism.

That said, most of my preferences are satisfied if a bunch of non-me entities are utilitarian, and many (most, even) of my actions are the same as they would be under a utilitarian framework, so I'm happy to support utilitarianism over the uglier forms of egoism that I fear would take over.

[-][anonymous]

I agree with your conclusion, but feel like there's some nuance lacking, in three ways.

1.

It seems that indeed a lot of our moral reasoning is confused because we fall for some kind of moral essentialism, some idea that there is an objective morality that is more than just a cultural contract that was invented and refined by humans over the course of time.

But then you reinstall this essentialism into our "preferences", which you hold to be grounded in your feelings:

Human flourishing is good because the idea of human flourishing makes me smile. Kicking puppies is bad because it upsets me.

We recursively justify our values, and this recursion doesn't end at the boundary between consciousness and subconsciousness. Your feelings might appear to be your basic units of value, but they're not. This is obvious if you consider that our observations about the world often change our feelings.

Where does this chain of justifications end? I don't know, but I'm reasonably sure about two things:

1) The bedrock of our values is probably the same for any human being, and any difference between conscious values is either due to having seen different data or, more likely, due to different people situationally benefitting more under different moralities. For example, a strong person will have "values" that are more accepting of competition, but that will change once they become weaker.

2) While a confused ethicist is wrong to be looking for a "true" (normative) morality, this is still better than not searching at all because you hold your conscious values to be basic. The best of both worlds is an ethicist that doesn't believe in normative morality, but still knows there is something to be learned about the source of our values.

2.

Considering our evolutionary origins, it seems very unlikely to me that we are completely selfish. It seems a lot more likely to me that the source of our values is some proxy of the survival and spread of our genes.

You're not the only one who carries your genes, and so your "selfish" preferences might not be completely selfish after all.

3.

We're a mashup of various subagents that want different things. I'd be surprised if they all had the same moral systems. Part of you might be reflective, aware of the valence of your experience, and actively (and selfishly) trying to increase it. Part of you will reflect your preferences for things that are very not-selfish. Other parts of you will just be naive deontologists.

1) The bedrock of our values is probably the same for any human being, and any difference between conscious values is either due to having seen different data or, more likely, due to different people situationally benefitting more under different moralities. For example, a strong person will have "values" that are more accepting of competition, but that will change once they become weaker.

I continue to find minimization of confusion while maintaining homeostasis around biologically determined set points a reasonable explanation for the bedrock of our values. Hopefully these ideas will coalesce well enough in me soon to be able to write something more about this than that headline.

I do not fully understand the point you are making in (1). I don't see anything specifically to disagree with, but also don't see how it's in conflict with anything in the OP. I hold that my feelings are my basic unit of value because that's what I care about. If a different person cares about different things, that's their decision. My feelings are in constant flux, and will often change. Is that somehow in conflict with something I've said? My thoughts on egoism are more fully fleshed out in the linked post.

I'm mostly ignoring (2) because it will get me off on a tangent about evopsych, and that's not the discussion I want to get into at the moment. Suffice it to say that I think when I admit that the idea of human flourishing makes me smile, I am admitting to not being completely selfish.

On (3), I again don't have much disagreement. I'm not advocating for selfishness in the sense of not caring about anyone else. I'm just asking us to recognize that our preferences are subjective and not binding on anyone else. Those preferences are obviously complicated and sometimes self-contradictory. Egoism is not Objectivism.

[-][anonymous]

Still, I think this line of thinking is extremely important because it means that people won't agree with any proposal for a morality that isn't useful for them, and keeping this in mind makes it a lot easier to propose moralities that will actually be adopted.

[-]TAG

I'm glad enough that murderers are punished and dissuaded. Since a lot of people share my preference for not being murdered, throwing negative utility at murderers generates net positive utility.

The same trick doesn't work for preferences that aren't shared. I am not going to create utility by enforcing my fondness for the colour pink and the number nine on people.

But that's widely understood. (Lesswrongians are about the only people who treat any kind of value as morally relevant.) So what's the problem?

I suppose the problem is that there isn't a clean break between widely shared values and idiosyncratic personal values. You can have an uncanny valley situation where about half the population share a value and are trying to impose it on the rest.

[-]TAG

Egoism, in general, is very easy to pull apart from utilitarianism: where performing an act would benefit the actor but decrease utility for others, the egoist will perform it, but the utilitarian will refrain. In general, egoists are selfish.

If it happens to be the case that the egoist has a preference for altruism, then that egoist (the altruistic egoist) will refrain. But altruistic egoism is pretty non-central; in fact, it is somewhat paradoxical.

I think you have an answer to this objection, along the lines that if someone who is basically altruistic becomes an egoist, that is a better outcome. Whilst I don't deny the role of intuitions, I don't think they are the whole story either. Rational persuasion plays a role, as is tacitly admitted by the use of rational persuasion here.

Someone could be persuaded that utilitarianism is some kind of mathematic truth, thereby nudging them towards altruism, or be persuaded that egoism is a logical truth, thereby nudging them away.

While I agree with your philosophical claims, I am dubious about whether treating moral beliefs as "mere preferences" would produce positive outcomes. This is because:

1. Psychologically, I think treating moral beliefs like other preferences will increase people's willingness to value-drift or even to use value-drifting for personal reasons.

2. Regardless of whether we view moral preferences as egoism, we will treat people's moral preferences differently than we treat other preferences, because this preserves cooperation norms.

3. These cooperation norms set up a system that makes unnecessary punishment likely regardless of whether we treat morality as egoist. Furthermore, egoist morality might make excessive punishment worse by increasing the number of value-drifters trying to exploit social signalling.

In short, I think egoist perspectives on morality could cause one really big problem (value-drift) without changing the social incentives around excessive punishment (insurance against defection caused by value-drift).

In long, here's a really lengthy write-up of my thought process of why I'm suspicious about egoism:

1. It sounds like something that would encourage deliberately inducing value drift

While I agree that utilitarianism (and morality in general) can often be described as a form of egoism, I think that reframing our moral systems as egoist (in the sense of "I care about this because it appeals to me personally") dramatically raises the likelihood of value-drift. We can often reduce the strengths of common preferences through psychological exercises or simply getting used to different situations. If we treat morals as less sacred and more like things that appeal to us, we are more likely to address high-cost moral preferences (e.g. caring about animal welfare enough to become vegetarian or to feel alienated by meat-eaters) by simply deciding the preference causes too much personal suffering and getting rid of it (e.g. by just deciding to be okay with the routine suffering of animals).

Furthermore, from personal experimentation, I can tell you that the above strategy does in fact work and individuals can use it to raise their level of happiness. I've also discussed this on my blog where I talk about why I haven't internalized moral anti-realism.

2. On a meta-level, we should aggressively punish anyone who deliberately induces value-drift

I try to avoid such strategies now but only because of a meta-level belief that deliberately causing value-drift to improve your emotional well-being is morally wrong (or at least wrong in a more sacred way than most preferences). Naively, I could accept the egoist interpretation that not deliberately inducing value-drift is a preference for how I want the world to work and throw it away too (which would be easy because meta-level preferences are more divorced from intuition than object-level ones). However, this meta-level belief about not inducing value-drift has some important properties:

1. "Don't modify your moral preferences because it would personally benefit you" is a really important and probably universal rule in the Kantian sense. The extent to which this rule is believed (so long as competition exists) is the extent to which whatever group just got into power is willing to exploit everyone else. In other words, it puts an upper bound on the extent to which people are willing to defect against other people.

2. With the exception of public figures, it is very hard to measure whether someone is selfishly inducing value-drift or simply changing their minds.

From 1., we find that we have extreme societal value in treating moral preferences as different from ordinary preferences. From 2., we see that enforcing this is really hard. This means that we probably want to approach this issue using The First Offender model and commit to caring a disproportionate amount about stopping people from deliberately allowing value-drift. Practically, we see this commitment emerge when people accuse public figures of hypocrisy. We can also see that apathy towards hypocrisy claims is often couched in the idea that "all public figures are rotten"--an expression of mutual defection.

3. Punishing people with different moral beliefs makes value-drift harder

Because we want to aggressively dissuade value-drift, aggressive punishment is useful not solely to disincentivize a specific moral failing but also to force the punisher to commit to a specific moral belief. This is because:

* People don't generally like punishing others, so punishing someone is a signal that you really care a lot about the thing they did.

* People tend to like tit-for-tat "do unto others as others do unto you" strategies, and the action of punishing someone for an action makes you vulnerable to being similarly punished. This is mostly relevant on local and social levels of interaction rather than ones with bigger power gradients.

I personally don't really like these strategies since they probably cause more suffering than the value they provide in navigating cooperation norms. Moreover, the only reason people would benefit from being punishers in this context is if committing to a particular belief will raise their status. This disproportionately favors people with consensus beliefs but, more problematically, it also favors people who don't have strong beliefs but could socially benefit from fostering them (i.e. value-drifters) so long as they never change again. Consequently, I think that mainstreaming egoist attitudes about morality would promote value-drifting in a way that makes the mechanisms through which value-drift is prevented perform worse.

Conclusion

The above is a lot but, for me, it's enough to be very hesitant about the idea of being explicitly egoist. I think egoism could cause a lot of problems with moral value-drift. I also think that the issue of excessive punishment due to moral judgement has deeper drivers than whether humanity describes morality as egoist. I think a better solution would probably be just to directly emphasize "no one deserves to suffer" attitudes in morality.

It's interesting that you approach it from the "bad effects of treating moral beliefs as X" rather than "moral beliefs are not X". Are you saying this is true but harmful, or that it's untrue (or something else, like neither are true and this is a worse equilibrium)?

I do not understand the argument about value drift, when applied to divergent starting values.

As someone leaning moral anti-realist, I lean roughly towards saying that it's true but harmful. To be more nuanced, I don't think it's necessarily harmful to believe that moral preferences are like ordinary preferences, but I do think that treating moral preferences like other preferences is a worse equilibrium.

If we could accept egoism without treating our own morals differently, then I wouldn't have a problem with it. However, I think that a lot of the bad (more defection) parts of how we judge other people's morals are intertwined with good (less defection) ways that we treat our own.

Can you elaborate on what you don't understand about value drift applied to divergent starting values?