Asking the Question
Until very recently, I was a hedonic utilitarian. That is, I held ‘happiness is good’ as an axiom – blurring the definition a little by pretending that good emotions other than strict happiness still counted because it made people “happy” to have them – and built up my moral philosophy from there. There were a few problems I couldn’t quite figure out, but by and large, it worked: it produced answers that felt right, and it was the most logically consistent moral system I could find.
But then I read Three Worlds Collide.
The ending didn’t fit within my moral model: it was a scenario in which making people happy seemed wrong. Which raised the question: What’s so great about happiness? If people don’t want happiness, how can you call it good to force it on them? After all, happiness is just a pattern of neural excitation in the brain; it can’t possibly be an intrinsic good, any more than the pattern that produces the thought “2+2=4”.
Well, people like being happy. Happiness is something they want. But it’s by no means all they want: people also want mystery, wonder, excitement, and many other things – and so those things are also good, quite independent of their relation to the specific emotion ‘happiness’. If they also desire occasional sadness and pain, who am I to say they’re wrong? It’s not moral to make people happy against their desires – it’s moral to give people what they want. (Voila, preference utilitarianism.)
But – that’s not a real answer, is it?
If the axiom ‘happiness is good’ didn’t match my idea of morality, that meant I wasn’t really constructing my morality around it. Replacing that axiom with ‘preference fulfillment is good’ would make my logic match my feelings better, but it wouldn’t give me a reason to have those feelings in the first place. So I had to ask the next question: Why is preference fulfillment good? What makes it “good” to give other people what they want?
Why should we care about other people at all?
In other words, why be moral?
~
Human feelings are a product of our evolutionary pressures. Emotions, the things that make us human, are there because they caused the genes that promoted them to become more prevalent in the ancestral environment. That includes the emotions surrounding moral issues: the things that seem so obviously right or wrong seem that way because that feeling was adaptive, not because of any intrinsic quality.
This makes it impossible to trust any moral system based on gut reaction, as most people’s seem to be. Our feelings of right and wrong were engineered to maximize genetic replication, so why should we expect them to tap into objective realms of ‘right’ and ‘wrong’? And in fact, people’s moral judgments tend to be suspiciously biased towards their own interests, though proclaimed with the strength of true belief.
More damningly, such moralities are incapable of coming up with a correct answer. One person can proclaim, say, homosexuality to be objectively right or wrong everywhere for everyone, with no justification except how they feel about it, and in the same breath say that it would still be wrong if they felt the other way. Another person, who does feel the other way, can deny it with equal force. And there’s no conceivable way to decide who’s right.
I became a utilitarian because it seemed to resolve many of the problems associated with purely intuitive morality – it was internally consistent, it relied on a simple premise, and it could provide its practitioners a standard of judgment for moral quandaries.
But even utilitarianism is based on feeling. This is especially true for hedonic utilitarianism, but scarcely less so for preference utilitarianism – we call people getting what they want ‘good’ because it feels good. It lights up our mirror neurons, triggers the altruistic instincts encoded into us by evolution. But evolution’s purposes are not our own (we have no particular interest in our genes’ replication), and so it makes no sense to adopt evolution’s tools as our ultimate goals.
If you can’t derive a moral code from evolution, then you can’t derive it from emotion, the tool of evolution; if you can’t derive morality from emotion, then you can’t say that giving people what they want is objectively good because it feels good; if you can’t do that, you can’t be a utilitarian.
Emotions, of course, are not bad. Even knowing that love was designed to transmit genes, we still want love; we still find it worthwhile to pursue, even knowing that we were built to pursue it. But we can’t hold up love as something objectively good, something that everyone should pursue – we don’t condemn the asexual. In the same way, it’s perfectly reasonable to help other people because it makes you feel good (to pursue warm fuzzies for their own sake), but that emotional justification can’t be used as the basis for a claim that everyone should help other people.
~
So if we can’t rely on feeling to justify morality, why have it at all?
Well, the obvious alternative is that it’s practical. Societies populated by moral individuals – individuals who value the happiness of others – work better than those filled with selfish ones, because the individually selfless acts add up to greater utility for everyone. One only has to imagine a society populated by purely selfish individuals to see why pure selfishness wouldn’t work.
This is a facile answer. First, if this is the case, why would morality extend outside of our societies? Why should we want to save the Babyeater children?
But more importantly, how is it practical for you? There is no situation in which the best strategy is not being purely selfish. If reciprocal altruism makes you better off, then it’s selfishly beneficial to be reciprocally altruistic; if you value warm fuzzies, then it’s selfishly beneficial to get warm fuzzies; but by definition, true selflessness of the kind demanded by morality (like buying utilons with money that could be spent on fuzzies) decreases your utility – it loses. Even if you get a deep emotional reward from helping others, you’re strictly better off being selfish.
So if feelings of ‘right’ and ‘wrong’ don’t correspond to anything except what used to maximize inclusive genetic fitness, and having a moral code makes you indisputably worse off, why have one at all?
Once again: Why be moral?
~
The Inconsistency of Consequentialism
Forget all that for a second. Stop questioning whether morality is justified and start using your moral judgment again.
Consider a consequentialist student being tempted to cheat on a test. Getting a good grade is important to him, and he can only do that if he cheats; cheating will make him significantly happier. His school trusts its students, so he’s pretty sure he won’t get caught, and the test isn’t curved, so no one else will be hurt by him getting a good score. He decides to cheat, reasoning that it’s at least morally neutral, if not a moral imperative – after all, his cheating will increase the world’s utility.
Does this tell us cheating isn’t a problem? No. If cheating became widespread, there would be consequences – tighter test security measures, suspicion of test grades, distrust of students, et cetera. Cheating just this once won’t hurt anybody, but if cheating becomes expected, everyone is worse off.
But wait. If all the students are consequentialists, then they’ll all decide to cheat, following the same logic as the first. And the teachers, anticipating this (it’s an ethics class), will respond with draconian anti-cheating measures – leaving overall utility lower than if no one had been inclined to cheat at all.
Consequentialism called for each student to cheat because cheating would increase utility, but the fact that consequentialism called for each student to cheat decreased utility.
Imagine the opposite case: a class full of deontologists. Every student would be horrified at the idea of violating their duty for the sake of mere utility, and accordingly not a one of them would cheat. Counter-cheating methods would be completely unnecessary. Everyone would be better off.
In this situation, a deontologist class outcompetes a consequentialist one in consequentialist terms. The best way to maximize utility is to use a system of justification not based on maximizing utility. In such a situation, consequentialism calls for itself not to be believed. Consequentialism is inconsistent.
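To make the comparison concrete, here is a toy model of the two classrooms, in Python. The payoff numbers (`GAIN_FROM_CHEATING`, `COST_OF_COUNTERMEASURES`, the class size) are invented for illustration, not part of the scenario; the only thing that matters is that the per-student gain from cheating is smaller than the per-student cost of the countermeasures that universal cheating provokes.

```python
# Toy model of the two classrooms. All payoff numbers are invented for
# illustration; only their relative sizes matter.

NUM_STUDENTS = 30
GAIN_FROM_CHEATING = 1.0        # utility a student gains by cheating, in isolation
COST_OF_COUNTERMEASURES = 2.0   # utility each student loses under draconian anti-cheating measures

def classroom_utility(everyone_cheats: bool) -> float:
    """Total utility of the class, given that every student follows the same rule."""
    if everyone_cheats:
        # Each student gets the cheating bonus, but the teacher (anticipating
        # universal cheating) imposes countermeasures that cost everyone more.
        per_student = GAIN_FROM_CHEATING - COST_OF_COUNTERMEASURES
    else:
        # No one cheats, so no countermeasures are needed.
        per_student = 0.0
    return NUM_STUDENTS * per_student

print(classroom_utility(everyone_cheats=True))   # -30.0: the consequentialist class
print(classroom_utility(everyone_cheats=False))  #   0.0: the deontologist class
```

Each student, computing alone, still sees a positive gain from cheating – which is exactly the inconsistency: the answer that looks best from inside each computation produces the worse total once every computation returns it.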
So what’s a rational agent to do?
The apparent contradiction in this case results from thinking about beliefs and actions as though they were separate. Arriving at a belief is an action in itself, one which can have effects on utility. One cannot, therefore, arrive at a belief about utility without considering the effects on utility that holding that belief would have. If arriving at the belief “actions are justified by their effect on utility” doesn’t maximize utility, then you shouldn’t arrive at that belief.
However, the ultimate goal of maximizing utility cannot be questioned. Utility, after all, is only a word for “what is wanted”, so no agent can want to do anything except maximize utility. Moral agents include others' utility as equal to their own, but their goal is still to maximize utility.
Therefore the rule which should be followed is not “take the actions which maximize utility”, but “arrive at the beliefs which maximize utility.”
But there is an additional complication: when we arrive at beliefs by logic alone, we are effectively deciding not only for ourselves, but for all other rational agents, since the answer which is logically correct for us must also be logically correct for each of them. In this case, the correct answer is the one which maximizes utility – so our logic must take into account the fact that every other computation will produce the same answer. Therefore we can expand the rule to “arrive at the beliefs which would maximize utility if all other rational agents were to arrive at them (upon performing the same computation).”
[To the best of my logical ability, this rule is recursive and therefore requires no further justification.]
This rule requires you to hold whatever beliefs will (conditional upon them being held) lead to the best results – even when the actions those beliefs produce don’t, in themselves, maximize utility. In the case of the cheating student, the optimal belief is “don’t cheat” because that belief being held by all the students (and the teacher simulating the students’ beliefs) produces the best results, even though cheating would still increase utility for each individual student. The applied morality becomes deontological, in the sense that actions are judged not by their effect on utility but by their adherence to the pre-set principle.
The upshot of this system is that you have to decide ahead of time whether an approach based on duty (that is, on every agent who considers the problem acting the way that would produce the best consequences if every agent who considers the problem were to act the same way) or one based on utility (individual computation of consequences) actually produces better consequences. And if you pick the deontological approach, you have to ‘forget’ your original goal – to commit to the rule even at the cost of actual consequences – because if it’s rational to pursue the original goal, then it won’t be achieved.
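The difference between the two decision procedures can be made concrete. Below is a minimal sketch, in Python, using the same invented payoffs as the classroom model above: `act_consequentialist` evaluates each action with everyone else’s behaviour held fixed, while `rule_consequentialist` evaluates each candidate belief as though every agent running the same computation will arrive at it. The function names and numbers are mine, chosen only for illustration.

```python
# The two decision procedures, applied to the classroom case. The payoff
# function is a stand-in with invented numbers: a fixed bonus for cheating,
# and a countermeasure cost that kicks in once most of the class is
# expected to cheat.

def payoff(action: str, fraction_cheating: float) -> float:
    countermeasure_cost = 2.0 if fraction_cheating > 0.5 else 0.0
    bonus = 1.0 if action == "cheat" else 0.0
    return bonus - countermeasure_cost

def act_consequentialist(current_fraction_cheating: float) -> str:
    """Choose the act that maximizes utility, holding everyone else's behaviour fixed."""
    return max(["cheat", "abstain"],
               key=lambda a: payoff(a, current_fraction_cheating))

def rule_consequentialist() -> str:
    """Choose the belief that maximizes utility on the assumption that every
    agent performing this same computation arrives at the same answer."""
    return max(["cheat", "abstain"],
               key=lambda a: payoff(a, 1.0 if a == "cheat" else 0.0))

print(act_consequentialist(0.0))  # 'cheat'   -- the answer each student computes alone
print(rule_consequentialist())    # 'abstain' -- the belief the expanded rule selects
```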
~
The Solution to Morality
Let’s return to the original question.
The primary effect of morality is that it causes individuals to value others’ utility as an end in itself, and therefore to sacrifice their own utility for others. It’s obvious that this is very good on a group scale: a society filled with selfless people, people who help others even when they don’t expect to receive personal benefit, is far better off than one filled with people who do not – a Prisoner’s Dilemma writ large. To encourage that sort of cooperation (partially by design and partially by instinct), societies reward altruism and punish selfishness.
But why should you, personally, cooperate?
There are many, many times when you can do clearly better by selfishness than by altruism – by theft or deceit, or just by not giving to charity. And why should we want to do otherwise? Our altruistic feelings are a mere artifact of evolution, like appendices and death, so why would we want to obey them?
Is there any reason, then, to be moral?
Yes.
Because that reasoning – that your own utility is maximized by selfishness – literally cannot be right. If it were right, then it would be the answer all rational beings would arrive at, and if all rational beings arrived at that answer, then none of them would cooperate and everyone would be worse off. If selfish utility maximizing is the correct answer for how to maximize selfish utility, selfish utility is not maximized. Therefore selfishness is the wrong answer. Each individual’s utility is maximized only if they deliberately discard selfish utility as the thing to be maximized. And the way to do that is for each one to adopt a duty to maximize total utility, not only their own – to be moral.
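The same structure can be written down as the standard two-player Prisoner’s Dilemma. The payoff values below are the conventional illustrative ones, not anything derived from the argument; the point is only that when ‘defect’ is the answer every rational agent computes, each agent ends up worse off than when the answer is ‘cooperate’.

```python
# Symmetric two-player Prisoner's Dilemma. Payoffs are the conventional
# illustrative values (T=5, R=3, P=1, S=0), chosen only to satisfy T > R > P > S.
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# If "defect" is the answer rational agents arrive at, both arrive at it:
print(PAYOFF[("defect", "defect")])        # (1, 1)
# If the answer they arrive at is instead "cooperate":
print(PAYOFF[("cooperate", "cooperate")])  # (3, 3)
# Each does strictly better under universal cooperation than universal defection,
# even though, holding the other player's move fixed, defecting always pays more.
```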
And having chosen collective maximization over individual competition – duty over utility – you can no longer even consider your own benefit to be your goal. If you do so, holding morality as a means to selfishness’s end, then everyone does so, and cooperation comes crashing down. You have to ‘forget’ the reason for having morality, and hold it because it's the right thing to do. You have to be moral even to the point of death.
Morality, then, is calculated blindness – a deliberate ignorance of our real ends, meant to achieve them more effectively. Selflessness for its own sake, for selfishness's sake.
[This post lays down only the basic theoretic underpinnings of Deontological Decision Theory morality. My next post will focus on the practical applications of DDT in the human realm, and explain how it solves various moral/game-theoretic quandaries.]