I have a lot of fun analyzing ethical frameworks, and for the longest time I was a utilitarian because it was the framework that made the most sense to me. But there are obvious problems with it that have been debated for quite a while, and I wanted to write up my attempted solution to them.

I believe that this theory of morality is a strict superset of utilitarianism. It does not solve all of utilitarianism's flaws, but it should solve some of them while not losing any of utilitarianism's benefits.

Most likely plenty of other people have already invented this exact theory, but I can't find anything about it in my research, so I'm presenting it as new even though it likely isn't. I don't think this is strictly Buddhism, since Buddhism seeks the elimination of desires, whereas this framework just uses the concept of desires to ground morality. Please inform me of prior work on this subject and of the correct name for this theory! I would be very happy to find out it already exists.


Details

The core idea is that everyone has a set of "desires". The mental structures I'm gesturing at with this word could probably be named with many other terms, but I'm sticking with "desires" in this document for consistency and prior usage. There are basic ones like the desire for oxygen, the desire to not be in pain, the desire to not be hungry, etc.

There are also more complex ones exclusive to humans and higher creatures: the desire to be involved in and appreciated by a community or group, the desire to accomplish difficult tasks and develop skills, the desire to learn new information, etc.

"How can this not just be simplified to utility?" Because the desires are incomparable. If it was just a matter of fulfilling desires giving you utility, a sufficiently not hungry person could be happy while being deprived of all social interaction. But since no amount of food can compensate for a lack of companionship[1], or visa versa, then each desire must be incomparable and non reducible to any other.

“Is it not just preference utilitarianism?” Kind of, but there are a few major differences. Besides the fact that we are not using utility as a measurement at all, what preferences actually are is usually glossed over in preference utilitarianism as “we ask people.” Desires are a deeper account of what constitutes a preference: while you still have to ask people, desires are acknowledged to not always be within conscious awareness, and may even be the opposite of what people state.

A core principle of this framework is that desires are separate from sensations. You can verify this separation is possible because the feeling of being hungry can be separated from the desire to eat. This happens in both directions: your body is not hungry but you still have the desire to eat (almost as if your tongue is bored), or your body is hungry but you don’t have the desire to eat (you’re distracted by a task). Thus, ethics should be about desires, not sensations, since sensations alone are not important to a consciousness, as the above examples show. After all, if we didn’t have the desire to avoid pain, then pain wouldn’t be an ethical concern for humans.


Explanatory power

The strength of each desire differs from person to person. Again, an obvious claim: some people are more goal-driven, or more pain-avoidant, etc., than others. Possibly even the number of desires differs between people; some may lack certain desires, while others have extras.

Evidence for this comes from ordinary introspection: everyone knows they have desires, and what you desire isn’t always shared by others. This is further supported by Buddhist texts and teachings, such as the claim that “all suffering is caused by desire,” one of the Four Noble Truths.

This explains certain unexpected behaviors around pain, including masochism. Different pain tolerances can be explained by the desire to avoid pain being stronger or weaker (or even negative) in certain people: they feel the same pain but have less of a desire to avoid it. Pain asymbolia, for example, lets you feel the bodily sensation of pain without the desire to avoid it or the suffering associated with it. This does not make sense from a utility standpoint, so it can only be explained by this theory.

This explains why wireheading and drug addiction are seen as bad by most people. Both satisfy only a single desire, that of experiencing bodily pleasure, while ignoring all other desires, and in fact making them harder to satisfy. It also explains why only some people get addicted to drugs[2]: the strength of their desire for this specific kind of pleasure is strong enough to beat out the strength of their other desires.


Applications

This brings in the basic advantages of preference utilitarianism: that people have different things they consider good and want for themselves. Measuring and calculating utility always needed to take this into account to be useful. Additionally, people can simultaneously hold contradictory desires, which under utilitarianism is an unsolvable issue. If someone gains utility from eating a candy bar, but also gains utility from not being fat, raw utilitarianism is stuck. From a desire standpoint, we can see that the optimal outcome is to fulfill both desires simultaneously, which opens up a large frontier of possible solutions. 
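Here is a sketch of that wider frontier in Python. The actions and their effects are hypothetical, invented for illustration: instead of scoring actions on one utility axis, we can filter for actions that satisfy both conflicting desires at once.

```python
# Toy search over the solution frontier: rather than trading off on a
# single utility axis, keep only actions satisfying *both* desires.
# All actions and effects below are invented for illustration.

actions = {
    "eat candy bar":      {"taste_pleasure": True,  "stay_lean": False},
    "skip dessert":       {"taste_pleasure": False, "stay_lean": True},
    "eat sugar-free bar": {"taste_pleasure": True,  "stay_lean": True},
}
desires = ["taste_pleasure", "stay_lean"]

# Keep only the actions that leave no desire unsatisfied.
optimal = [name for name, effects in actions.items()
           if all(effects[d] for d in desires)]
print(optimal)  # ['eat sugar-free bar']
```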

Framing ethics around desires means that this new ethical system has two dimensions to optimize: the number of desires satisfied, and the strength of each satisfied desire. This makes the optimal solution to an ethical dilemma harder to find, but a wider search space makes it possible to find solutions that were previously not discoverable. Specifically, this does not solve Torture vs. Dust Specks or the repugnant conclusion, but it does provide a step toward solving them by giving more data to analyze and more structure to bring to bear.
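One possible way to operationalize those two dimensions, sketched below entirely under my own assumptions: the post doesn't commit to this lexicographic ordering, the strengths are made up, and summing strengths is itself a commensuration move the framework might reject. The sketch ranks outcomes first by the count of desires satisfied, then by the total strength of those desires.

```python
# Toy ranking on the two dimensions named above: number of desires
# satisfied first, then total strength of the satisfied desires.
# Strengths, outcomes, and the lexicographic ordering are illustrative.

strengths = {"avoid_pain": 5.0, "companionship": 3.0, "learn": 1.5}

def score(outcome: dict) -> tuple:
    """`outcome` maps each desire to whether it is satisfied."""
    satisfied = [d for d, ok in outcome.items() if ok]
    return (len(satisfied), sum(strengths[d] for d in satisfied))

outcomes = {
    "A": {"avoid_pain": True,  "companionship": False, "learn": False},
    "B": {"avoid_pain": False, "companionship": True,  "learn": True},
}
best = max(outcomes, key=lambda name: score(outcomes[name]))
print(best, score(outcomes[best]))  # B (2, 4.5)
```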

This also assists with identifying what is a moral patient, i.e. what can suffer. Things with no mental capacity for desiring, such as inanimate objects, can be safely ignored in moral considerations. And for things that may or may not have desires, this gives a much easier test: identifying whether something has desires is much more straightforward than identifying whether it is conscious.


Issues

How are desires built up and developed? Some are not innate, some change naturally with age, some are cultural. The creation and destruction of desires is beyond this ethical framework as it stands, but it is a good area for further research. Buddhism is largely about changing one's desires, so further research into that field is likely to be productive.

Is this system overly complicated and unusable in the real world? Maybe utilitarianism is a simplified version of this system that can actually be applied in practice.

Additionally, there are likely unknown issues that I haven't identified with this system yet, but part of the reason I'm posting this is the hope that others can spot them!

  1. ^

For any extended period of time, anyway. Food can help you ignore it temporarily, but long-term isolation (on, say, a desert island) will cause psychological trauma even if you never go hungry.

  2. ^

    Approximately "Eighty to 90 percent of people who use crack and methamphetamine don’t get addicted", https://web.archive.org/web/20231012102800/https://www.nytimes.com/2013/09/17/science/the-rational-choices-of-crack-addicts.html

Comments

My impression is that what you propose to supersede utilitarianism with is already rather naturally encompassed by utilitarianism. For example, when you write

If someone gains utility from eating a candy bar, but also gains utility from not being fat, raw utilitarianism is stuck. From a desire standpoint, we can see that the optimal outcome is to fulfill both desires simultaneously, which opens up a large frontier of possible solutions.

I disagree that typical conceptions of utilitarianism (not strawmen thereof) are in any way "stuck" here at all. "Of course," a classical utilitarian might well tell you, "we'll have to trade off between the candy bar and the fatness it brings; that is exactly what utilitarianism is about." And you can extend that to the other nuances you bring up: whatever we ultimately desire or prefer or what-have-you most, as classical utilitarians we'd aim exactly at that, quasi by definition.

Good point, I may have overstated the issue utilitarianism runs into with this problem. It's definitely not stuck: it can achieve a trade-off, or even a third option that satisfies both desires. But I think the more salient issue in this case is that utilitarianism is not structured to allow optimal problem solving of these conflicts. It simplifies both cause and effect into the same metric of utility, which obfuscates the conflict, whereas this system places the conflict between the contrasting desires front and center.

From a utilitarian standpoint this specific problem is a matter of +X utility short term vs. +Y utility long term, and once you solve for X and Y you choose the larger number. But that may not be the optimal solution if you can instead eat a non-fat candy bar and solve both problems at once. Both systems can achieve the same outcome, but in utilitarianism it's harder to see the path.

Thank you! I knew there had to have been similar ideas discussed previously; glad to see this! It even agrees with my conclusion on how this framework solves wireheading, and brings up a problem with "desire fulfillment act utilitarianism" that utilitarianism also has and that carries over to this framework.