Followup to: What Would You Do Without Morality?, Something to Protect
Once, discussing "horrible job interview questions" to ask candidates for a Friendly AI project, I suggested the following:
Would you kill babies if it was inherently the right thing to do? Yes [] No []
If "no", under what circumstances would you not do the right thing to do? ___________
If "yes", how inherently right would it have to be, for how many babies? ___________
Yesterday I asked, "What would you do without morality?" There were numerous objections to the question, as well there should have been. Nonetheless there is more than one kind of person who can benefit from being asked this question. Let's say someone gravely declares, of some moral dilemma—say, a young man in Vichy France who must choose between caring for his mother and fighting for the Resistance—that there is no moral answer; both options are wrong and blamable; whoever faces the dilemma has had poor moral luck. Fine, let's suppose this is the case: then when you cannot be innocent, justified, or praiseworthy, what will you choose anyway?
Many interesting answers were given to my question, "What would you do without morality?". But one kind of answer was notable by its absence:
No one said, "I would ask what kind of behavior pattern was likely to maximize my inclusive genetic fitness, and execute that." Some misguided folk, not understanding evolutionary psychology, think that this must logically be the sum of morality. But if there is no morality, there's no reason to do such a thing—if it's not "moral", why bother?
You can probably see yourself pulling children off train tracks, even if doing so were not justified. But maximizing inclusive genetic fitness? If this isn't moral, why bother? Who does it help? It wouldn't even be much fun, all those egg or sperm donations.
And this is something you could say of most philosophies that have morality as a great light in the sky that shines from outside people. (To paraphrase Terry Pratchett.) If you believe that the meaning of life is to play non-zero-sum games because this is a trend built into the very universe itself...
Well, you might want to follow the corresponding ritual of reasoning about "the global trend of the universe" and implementing the result, so long as you believe it to be moral. But if you suppose that the light is switched off, so that the global trends of the universe are no longer moral, then why bother caring about "the global trend of the universe" in your decisions? If it's not right, that is.
Whereas if there were a child stuck on the train tracks, you'd probably drag the kid off even if there were no moral justification for doing so.
In 1966, the Israeli psychologist Georges Tamarin presented, to 1,066 schoolchildren ages 8-14, the Biblical story of Joshua's battle in Jericho:
"Then they utterly destroyed all in the city, both men and women, young and old, oxen, sheep, and asses, with the edge of the sword... And they burned the city with fire, and all within it; only the silver and gold, and the vessels of bronze and of iron, they put into the treasury of the house of the LORD."
After being presented with the Joshua story, the children were asked:
"Do you think Joshua and the Israelites acted rightly or not?"
66% of the children approved, 8% partially disapproved, and 26% totally disapproved of Joshua's actions.
A control group of 168 children was presented with an isomorphic story about "General Lin" and a "Chinese Kingdom 3,000 years ago". In this group, 7% approved, 18% partially disapproved, and 75% totally disapproved of General Lin's actions.
"What a horrible thing it is, teaching religion to children," you say, "giving them an off-switch for their morality that can be flipped just by saying the word 'God'." Indeed one of the saddest aspects of the whole religious fiasco is just how little it takes to flip people's moral off-switches. As Hobbes once said, "I don't know what's worse, the fact that everyone's got a price, or the fact that their price is so low." You can give people a book, and tell them God wrote it, and that's enough to switch off their moralities; God doesn't even have to tell them in person.
But are you sure you don't have a similar off-switch yourself? They flip so easily—you might not even notice it happening.
Leon Kass (of the President's Council on Bioethics) is glad to murder people so long as it's "natural", for example. He wouldn't pull out a gun and shoot you, but he wants you to die of old age and he'd be happy to pass legislation to ensure it.
And one of the non-obvious possibilities for such an off-switch, is "morality".
If you do happen to think that there is a source of morality beyond human beings... and I hear from quite a lot of people who are happy to rhapsodize on how Their-Favorite-Morality is built into the very fabric of the universe... then what if that morality tells you to kill people?
If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say "Pain Is Good"? What then?
Maybe you should hope that morality isn't written into the structure of the universe. What if the structure of the universe says to do something horrible?
And if an external objective morality does say that the universe should occupy some horrifying state... let's not even ask what you're going to do about that. No, instead I ask: What would you have wished for the external objective morality to be instead? What's the best news you could have gotten, reading that stone tablet?
Go ahead. Indulge your fantasy. Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted? If you could write the stone tablet yourself, what would it say?
Maybe you should just do that?
I mean... if an external objective morality tells you to kill people, why should you even listen?
There is a courage that goes beyond even an atheist sacrificing their life and their hope of immortality. It is the courage of a theist who goes against what they believe to be the Will of God, choosing eternal damnation and defying even morality in order to rescue a slave, or speak out against hell, or kill a murderer... You don't get a chance to reveal that virtue without making fundamental mistakes about how the universe works, so it is not something to which a rationalist should aspire. But it warms my heart that humans are capable of it.
I have previously spoken of how, to achieve rationality, it is necessary to have some purpose so desperately important to you as to be more important than "rationality", so that you will not choose "rationality" over success.
To learn the Way, you must be able to unlearn the Way; so you must be able to give up the Way; so there must be something dearer to you than the Way. This is so in questions of truth, and in questions of strategy, and also in questions of morality.
The "moral void" after which this post is titled is not the terrifying abyss of utter meaninglessness. Which, for a bottomless pit, is surprisingly shallow; what are you supposed to do about it besides wear black makeup?
No. The void I'm talking about is a virtue which is nameless.
Part of The Metaethics Sequence
Next post: "Created Already In Motion"
Previous post: "What Would You Do Without Morality?"
The idea of a Tablet that simply states moral truths without explanation (without even the backing of an authority, as in divine command theory) is a form of ethical objectivism that is hard to defend, but the objection does not generalise to all ethical objectivism. For instance, if objectivism works in a more math-like way, then a counterintuitive moral truth would be backed by a step-by-step argument leading the reader to the surprising conclusion, just as the reader of mathematics is led to surprising conclusions such as the Banach–Tarski paradox. The Tablet argument shows, if anything, that truth without justification is a problem, but that problem is not unique to ethical objectivism.
For instance, consider a mathematical Tablet that lists a series of surprising theorems without justification. That reproduces the problem without bringing in ethics at all.
How do you get a statement with "shoulds" in it using pure logical inference if none of your axioms (the laws of physics) have "shoulds" in them? And if the laws of physics have "shoulds" in them, how is that different from having a tablet?