Followup to: The Bedrock of Fairness
Every time I wonder if I really need to do so much prep work to explain an idea, I manage to forget some minor thing and a dozen people promptly post objections.
In this case, I seem to have forgotten to cover the topic of how morality applies to more than one person at a time.
Stop laughing, it's not quite as dumb an oversight as it sounds. Sort of like how some people argue that macroeconomics should be constructed from microeconomics, I tend to see interpersonal morality as constructed from personal morality. (And definitely not the other way around!)
In "The Bedrock of Fairness" I offered a situation where three people discover a pie, and one of them insists that they want half. This is actually toned down from an older dialogue where five people discover a pie, and one of them—regardless of any argument offered—insists that they want the whole pie.
Let's consider the latter situation: Dennis wants the whole pie. Not only that, Dennis says that it is "fair" for him to get the whole pie, and that the "right" way to resolve this group disagreement is for him to get the whole pie; and he goes on saying this no matter what arguments are offered him.
This group is not going to agree, no matter what. But I would, nonetheless, say that the right thing to do, the fair thing to do, is to give Dennis one-fifth of the pie—the other four combining to hold him off by force, if necessary, if he tries to take more.
A terminological note:
In this series of posts I have been using "morality" to mean something more like "the sum of all values and valuation rules", not just "values that apply to interactions between people".
The ordinary usage would have that jumping on a trampoline is not "morality", it is just some selfish fun. On the other hand, giving someone else a turn to jump on the trampoline, is more akin to "morality" in common usage; and if you say "Everyone should take turns!" that's definitely "morality".
But the thing-I-want-to-talk-about includes the Fun Theory of a single person jumping on a trampoline.
Think of what a disaster it would be if all fun were removed from human civilization! So I consider it quite right to jump on a trampoline. Even if one would not say, in ordinary conversation, "I am jumping on that trampoline because I have a moral obligation to do so." (Indeed, that sounds rather dull, and not at all fun, which is another important element of my "morality".)
Alas, I do get the impression that in a standard academic discussion, one would use the term "morality" to refer to the sum-of-all-valu(ation rul)es that I am talking about. If there's a standard alternative term in moral philosophy then do please let me know.
If there's a better term than "morality" for the sum of all values and valuation rules, then this would free up "morality" for interpersonal values, which is closer to the common usage.
Some years ago, I was pondering what to say to the old cynical argument: If two monkeys want the same banana, in the end one will have it, and the other will cry morality. I think the particular context was about whether the word "rights", as in the context of "individual rights", meant anything. It had just been vehemently asserted (on the Extropians mailing list, I think) that this concept was meaningless and ought to be tossed out the window.
Suppose there are two people, a Mugger and a Muggee. The Mugger wants to take the Muggee's wallet. The Muggee doesn't want to give it to him. A cynic might say: "There is nothing more to say than this; they disagree. What use is it for the Muggee to claim that he has an individual_right to keep his wallet? The Mugger will just claim that he has an individual_right to take the wallet."
Now today I might introduce the notion of a 1-place versus 2-place function, and reply to the cynic, "Either they do not mean the same thing by individual_right, or at least one of them is very mistaken about what their common morality implies." At most one of these people is controlled by a good approximation of what I name when I say "morality", and the other one is definitely not.
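The 1-place versus 2-place distinction can be made concrete with a small currying sketch. This is a minimal illustration, not anything from the original post: all names (`right_2place`, `curry_values`, the toy value sets) are hypothetical, and a real value system is of course nothing like a set of string labels.

```python
# Hedged sketch of the 1-place vs. 2-place distinction (all names
# hypothetical). "individual_right" as a 2-place function takes both
# a value system and a claim; fixing (currying) one value system
# yields a 1-place function of the claim alone.

def right_2place(values, claim):
    """2-place: does `claim` count as a right under `values`?"""
    return claim in values

def curry_values(values):
    """Fix the value system, leaving a 1-place function of the claim."""
    return lambda claim: right_2place(values, claim)

# Toy stand-ins for two very different value systems:
muggee_values = {"keep_own_wallet"}
mugger_values = {"take_any_wallet"}

right_by_muggee = curry_values(muggee_values)
right_by_mugger = curry_values(mugger_values)

# If the two speakers curry different value systems into the same
# word "right", they are computing different 1-place functions and
# merely seem to disagree about a shared referent:
print(right_by_muggee("keep_own_wallet"))  # True
print(right_by_mugger("keep_own_wallet"))  # False
```

On this sketch, the cynic's "they just disagree" conflates two cases: both parties evaluating one shared 2-place function (where at least one of them is mistaken), versus each party evaluating a different curried 1-place function while using the same word.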
But the cynic might just say again, "So what? That's what you say. The Mugger could just say the opposite. What meaning is there in such claims? What difference does it make?"
So I came up with this reply: "Suppose that I happen along this mugging. I will decide to side with the Muggee, not the Mugger, because I have the notion that the Mugger is interfering with the Muggee's individual_right to keep his wallet, rather than the Muggee interfering with the Mugger's individual_right to take it. And if a fourth person comes along, and must decide whether to allow my intervention, or alternatively stop me from trampling on the Mugger's individual_right to take the wallet, then they are likely to side with the idea that I can intervene against the Mugger, in support of the Muggee."
Now this does not work as a metaethics; it does not work to define the word should. If you fell backward in time, to an era when no one on Earth thought that slavery was wrong, you should still help slaves escape their owners. Indeed, the era when such an act was done in heroic defiance of society and the law, was not so very long ago.
But to defend the notion of individual_rights against the charge of meaninglessness, the notion of third-party interventions and fourth-party allowances of those interventions, seems to me to coherently cash out what is asserted when we assert that an individual_right exists. To assert that someone has a right to keep their wallet, is to assert that third parties should help them keep it, and that fourth parties should applaud those who thus help.
This perspective does make a good deal of what is said about individual_rights into nonsense. "Everyone has a right to be free from starvation!" Um, who are you talking to? Nature? Perhaps you mean, "If you're starving, and someone else has a hamburger, I'll help you take it." If so, you should say so clearly. (See also The Death of Common Sense.)
So that is a notion of individual_rights, but what does it have to do with the more general question of interpersonal morality?
The notion is that you can construct interpersonal morality out of individual morality. Just as, in this particular example, I constructed the notion of what is asserted by talking about an individual_right, by making it an assertion about whether third parties should decide, for themselves, to interfere; and whether fourth parties should, individually, decide to applaud the interference.
Why go to such lengths to define things in individual terms? Some people might say: "To assert the existence of a right, is to say what society should do."
But societies don't always agree on things. And then you, as an individual, will have to decide what's right for you to do, in that case.
"But individuals don't always agree within themselves, either," you say. "They have emotional conflicts."
Well... you could say that and it would sound wise. But generally speaking, neurologically intact humans will end up doing some particular thing. As opposed to flopping around on the floor as their limbs twitch in different directions under the temporary control of different personalities. Contrast to a government or a corporation.
A human brain is a coherently adapted system whose parts have been together optimized for a common criterion of fitness (more or less). A group is not functionally optimized as a group. (You can verify this very quickly by looking at the sex ratios in a maternity hospital.) Individuals may be optimized to do well out of their collective interaction—but that is quite a different selection pressure, the adaptations for which do not always produce group agreement! So if you want to look at a coherent decision system, it really is a good idea to look at one human, rather than a bureaucracy.
I myself am one person—admittedly with a long trail of human history behind me that makes me what I am, maybe more than any thoughts I ever thought myself. But still, at the end of the day, I am writing this blog post; it is not the negotiated output of a consortium. It is quite easy for me to imagine being faced, as an individual, with a case where the local group does not agree within itself—and in such a case I must decide, as an individual, what is right. In general I must decide what is right! If I go along with the group that does not absolve me of responsibility. If there are any countries that think differently, they can write their own blog posts.
This perspective, which does not exhibit undefined behavior in the event of a group disagreement, is one reason why I tend to treat interpersonal morality as a special case of individual morality, and not the other way around.
Now, with that said, interpersonal morality is a highly distinguishable special case of morality.
As humans, we don't just hunt in groups, we argue in groups. We've probably been arguing linguistically in adaptive political contexts for long enough—hundreds of thousands of years, maybe millions—to have adapted specifically to that selection pressure.
So it shouldn't be all that surprising if we have moral intuitions, like fairness, that apply specifically to the morality of groups.
One of these intuitions seems to be universalizability.
If Dennis just strides around saying, "I want the whole pie! Give me the whole pie! What's fair is for me to get the whole pie! Not you, me!" then that's not going to persuade anyone else in the tribe. Dennis has not managed to frame his desires in a form which enables them to leap from one mind to another. His desires will not take wings and become interpersonal. He is not likely to leave many offspring.
Now, the evolution of interpersonal moral intuitions, is a topic which (he said, smiling grimly) deserves its own blog post. And its own academic subfield. (Anything out there besides The Evolutionary Origins of Morality? It seemed to me very basic.)
But I do think it worth noting that, rather than trying to manipulate 2-person and 3-person and 7-person interactions, some of our moral instincts seem to have made the leap to N-person interactions. We just think about general moral arguments. As though the values that leap from mind to mind, take on a life of their own and become something that you can reason about. To the extent that everyone in your environment does share some values, this will work as adaptive cognition. This creates moral intuitions that are not just interpersonal but transpersonal.
Transpersonal moral intuitions are not necessarily false-to-fact, so long as you don't expect your arguments cast in "universal" terms to sway a rock. There really is such a thing as the psychological unity of humankind. Read a morality tale from an entirely different culture; I bet you can figure out what it's trying to argue for, even if you don't agree with it.
The problem arises when you try to apply the universalizability instinct to say, "If this argument could not persuade an UnFriendly AI that tries to maximize the number of paperclips in the universe, then it must not be a good argument."
There are No Universally Compelling Arguments, so if you try to apply the universalizability instinct universally, you end up with no morality. Not even universalizability; the paperclip maximizer has no intuition of universalizability. It just chooses that action which leads to a future containing the maximum number of paperclips.
There are some things you just can't have a moral conversation with. There is not that within them that could respond to your arguments. You should think twice and maybe three times before ever saying this about one of your fellow humans—but a paperclip maximizer is another matter. You'll just have to override your moral instinct to regard anything labeled a "mind" as a little floating ghost-in-the-machine, with a hidden core of perfect emptiness, which could surely be persuaded to reject its mistaken source code if you just came up with the right argument. If you're going to preserve universalizability as an intuition, you can try extending it to all humans; but you can't extend it to rocks or chatbots, nor even powerful optimization processes like evolutions or paperclip maximizers.
The question of how much in-principle agreement would exist among human beings about the transpersonal portion of their values, given perfect knowledge of the facts and perhaps a much wider search of the argument space, is not a matter on which we can get much evidence by observing the prevalence of moral agreement and disagreement in today's world. Any disagreement might be something that the truth could destroy—dependent on a different view of how the world is, or maybe just dependent on having not yet heard the right argument. It is also possible that knowing more could dispel illusions of moral agreement, not just produce new accords.
But does that question really make much difference in day-to-day moral reasoning, if you're not trying to build a Friendly AI?
Part of The Metaethics Sequence
Next post: "Morality as Fixed Computation"
Previous post: "The Meaning of Right"