Followup to: Is Fairness Arbitrary?, Joy in the Merely Good, Sorting Pebbles Into Correct Heaps
Yesterday, I presented the idea that when only five people are present, having just stumbled across a pie in the woods (a naturally growing pie that just popped out of the ground), then it is fair to give Dennis only 1/5th of this pie, even if Dennis persistently claims that it is fair for him to get the whole thing. Furthermore, it is meta-fair to follow such a symmetrical division procedure, even if Dennis insists that he ought to dictate the division procedure.
Fair, meta-fair, or meta-meta-fair, there is no level of fairness where you're obliged to concede everything to Dennis, without reciprocation or compensation, just because he demands it.
Which goes to say that fairness has a meaning beyond "that which everyone can be convinced is 'fair'". That would be an empty proposition, isomorphic to "Xyblz is that which everyone can be convinced is 'xyblz'". There must be some specific thing of which people are being convinced; and once you identify that thing, it has a meaning beyond agreements and convincing.
You're not introducing something arbitrary, something un-fair, in refusing to concede everything to Dennis. You are being fair, and meta-fair and meta-meta-fair. As far up as you go, there's no level that calls for unconditional surrender. The stars do not judge between you and Dennis—but it is baked into the very question that is asked, when you ask, "What is fair?" as opposed to "What is xyblz?"
Ah, but why should you be fair, rather than xyblz? Let us concede that Dennis cannot validly persuade us, on any level, that it is fair for him to dictate terms and give himself the whole pie; but perhaps he could argue about whether we should be fair at all?
The hidden agenda of the whole discussion of fairness, of course, is that goodness and rightness and shouldness ground out similarly to fairness.
Natural selection optimizes for inclusive genetic fitness. This is not a disagreement with humans about what is good. It is simply that natural selection does not do what is good: it optimizes for inclusive genetic fitness.
Well, since some optimization processes optimize for inclusive genetic fitness, instead of what is good, which should we do, ourselves?
I know my answer to this question. It has something to do with natural selection being a terribly wasteful and stupid and inefficient process. It has something to do with elephants starving to death in their old age when they wear out their last set of teeth. It has something to do with natural selection never choosing a single act of mercy, of grace, even when it would cost its purpose nothing: not auto-anesthetizing a wounded and dying gazelle, when its pain no longer serves even the adaptive purpose that first created pain. Evolution had to happen sometime in the history of the universe, because that's the only way that intelligence could first come into being, without brains to make brains; but now that era is over, and good riddance.
But most of all—why on Earth would any human being think that one ought to optimize inclusive genetic fitness, rather than what is good? What is even the appeal of this, morally or otherwise? At all? I know people who claim to think like this, and I wonder what wrong turn they made in their cognitive history, and I wonder how to get them to snap out of it.
When we take a step back from fairness, and ask if we should be fair, the answer may not always be yes. Maybe sometimes we should be merciful. But if you ask if it is meta-fair to be fair, the answer will generally be yes. Even if someone else wants you to be unfair in their favor, or claims to disagree about what is "fair", it will still generally be meta-fair to be fair, even if you can't make the Other agree. By the same token, if you ask if we meta-should do what we should, rather than something else, the answer is yes. Even if some other agent or optimization process does not do what is right, that doesn't change what is meta-right.
And this is not "arbitrary" in the sense of rolling dice, not "arbitrary" in the sense that justification is expected and then not found. The accusations that I level against evolution are not merely pulled from a hat; they are expressions of morality as I understand it. They are merely moral, and there is nothing mere about that.
In "Arbitrary" I finished by saying:
The upshot is that differently structured minds may well label different propositions with their analogues of the internal label "arbitrary"—though only one of these labels is what you mean when you say "arbitrary", so you and these other agents do not really have a disagreement.
This was to help shake people loose of the idea that if any two possible minds can say or do different things, then it must all be arbitrary. Different minds may have different ideas of what's "arbitrary", so clearly this whole business of "arbitrariness" is arbitrary, and we should ignore it. After all, Sinned (the anti-Dennis) just always says "Morality isn't arbitrary!" no matter how you try to persuade her otherwise, so clearly you're just being arbitrary in saying that morality is arbitrary.
From the perspective of a human, saying that one should sort pebbles into prime-numbered heaps is arbitrary—it's the sort of act you'd expect to come with a justification attached, but there isn't any justification.
From the perspective of a Pebblesorter, saying that one p-should scatter a heap of 38 pebbles into two heaps of 19 pebbles is not p-arbitrary at all—it's the most p-important thing in the world, and fully p-justified by the intuitively obvious fact that a heap of 19 pebbles is p-correct and a heap of 38 pebbles is not.
So which perspective should we adopt? I answer that I see no reason at all why I should start sorting pebble-heaps. It strikes me as a completely pointless activity. Better to engage in art, or music, or science, or heck, better to contrive political plots of terrifying dark elegance, than to sort pebbles into prime-numbered heaps. A galaxy transformed into pebbles and sorted into prime-numbered heaps would be just plain boring.
The Pebblesorters, of course, would only reason that music is p-pointless because it doesn't help you sort pebbles into heaps; the human activity of humor is not only p-pointless but just plain p-bizarre and p-incomprehensible; and most of all, the human vision of a galaxy in which agents are running around experiencing positive reinforcement but not sorting any pebbles, is a vision of an utterly p-arbitrary galaxy devoid of p-purpose. The Pebblesorters would gladly sacrifice their lives to create a P-Friendly AI that sorted the galaxy on their behalf; it would be the most p-profound statement they could make about the p-meaning of their lives.
So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is right. I do not know perfectly what is right, but neither can I plead entire ignorance.
And the Pebblesorters, who simply are not built to do what is right, choose the Pebblesorting perspective: not merely because it is theirs, or because they think they can get away with being p-arbitrary, but because that is what is p-right.
And in fact, both we and the Pebblesorters can agree on all these points. We can agree that sorting pebbles into prime-numbered heaps is arbitrary and unjustified, but not p-arbitrary or p-unjustified; that it is the sort of thing an agent p-should do, but not the sort of thing an agent should do.
I fully expect that even if there is other life in the universe only a few trillion lightyears away (I don't think it's local, or we would have seen it by now), we humans are the only creatures for a long long way indeed who are built to do what is right. That may be a moral miracle, but it is not a causal miracle.
There may be some other evolved races, a sizable fraction perhaps, maybe even a majority, who do some right things. Our executing adaptation of compassion is not so far removed from the game theory that gave it birth; it might be a common adaptation. But laughter, I suspect, may be rarer by far than mercy. What would a galactic civilization be like, if it had sympathy, but never a moment of humor? A little more boring, perhaps, by our standards.
This humanity that we find ourselves in, is a great gift. It may not be a great p-gift, but who cares about p-gifts?
So I really must deny the charges of moral relativism: I don't think that human morality is arbitrary at all, and I would expect any logically omniscient reasoner to agree with me on that. We are better than the Pebblesorters, because we care about sentient lives, and the Pebblesorters don't. Just as the Pebblesorters are p-better than us, because they care about pebble heaps, and we don't. Human morality is p-arbitrary, but who cares? P-arbitrariness is arbitrary.
You've just got to avoid thinking that the words "better" and "p-better", or "moral" and "p-moral", are talking about the same thing—because then you might think that the Pebblesorters were coming to different conclusions than us about the same thing—and then you might be tempted to think that our own morals were arbitrary. Which, of course, they're not.
Yes, I really truly do believe that humanity is better than the Pebblesorters! I am not being sarcastic, I really do believe that. I am not playing games by redefining "good" or "arbitrary", I think I mean the same thing by those terms as everyone else. When you understand that I am genuinely sincere about that, you will understand my metaethics. I really don't consider myself a moral relativist—not even in the slightest!
Part of The Metaethics Sequence
Next post: "You Provably Can't Trust Yourself"
Previous post: "Is Fairness Arbitrary?"
@Eliezer: "what one ought to do" vs. "what one p-ought to do"
Suppose that the pebblesorter civilization and the human civilization meet, and (fairly predictably) engage in a violent and bitter war for control of the galaxy. Why can you not resolve this war by bringing the pebblesorters and the humans to a negotiating table and telling them "humans do what they ought to do and Pebblesorters do what they p-ought to do"?
You cannot play this trick because p-ought is grounded in what the pebblesorters actually do, which is in turn grounded in the state of the universe they aim for, which is the same universe that we live in. The humans and the pebblesorters seem to be disagreeing about something as they fight each other: the usual way that people would put this disagreement into words is by saying "they are disagreeing about what is right".
However, you are using the word "right" in a nonstandard way. You have changed the meaning of the entire ethical vocabulary in this same way, to represent a specific constant answer rather than a variable, so it becomes very hard to say what the humans and pebblesorters are disagreeing about. It seems a little odd to say that these hated enemies are in complete agreement, and it is certainly not the standard way that people use the ethical vocabulary. Perhaps it is a better way: I'm just taking some time getting used to it.
In fact, in your new use of the English language, you probably are not a relativist, because given the way you are using the ethical vocabulary, it is in fact impossible to be a relativist: every ethical theory T describes some objective predicate, T-right, and any act is either T-right or it isn't. In your new language, it isn't possible to talk of "rightness" detached from any particular predicate.
But I think that in your new use of language, you will need a word for the idea of a justification for an ethical theory, for example Kant's arguments "from first principles" in favor of the categorical imperative. Perhaps you could call ethical theories with this property "first-principles justified theories"? You may argue that no such theory exists, but a lot of philosophers would disagree, so you should have a word for it. And your ethical theory doesn't even try for this property; it is unashamedly unjustified.
Eliezer said: "Furthermore, I believe that human beings are better than Pebblesorters."
In your new use of the ethical vocabulary, this is a vacuous applause light. Of course the humans are better than the pebblesorters: you defined "good" as "the predicate that describes the particular set of things that humans do".