Followup to: Is Morality Given?, Is Morality Preference?, Moral Complexities, Could Anything Be Right?, The Bedrock of Fairness, ...
Intuitions about morality seem to split up into two broad camps: morality-as-given and morality-as-preference.
Some perceive morality as a fixed given, independent of our whims, about which we form changeable beliefs. This view's great advantage is that it seems more normal at the level of everyday moral conversation: it is the intuition underlying our everyday notions of "moral error", "moral progress", "moral argument", or "just because you want to murder someone doesn't make it right".
Others choose to describe morality as a preference—as a desire in some particular person; nowhere else is it written. This view's great advantage is that it has an easier time living with reductionism—fitting the notion of "morality" into a universe of mere physics. It has an easier time at the meta level, answering questions like "What is morality?" and "Where does morality come from?"
Both intuitions must contend with seemingly impossible questions. For example, Moore's Open Question: Even if you come up with some simple answer that fits on a T-shirt, like "Happiness is the sum total of goodness!", you would still need to argue the identity. It isn't instantly obvious to everyone that goodness is happiness, which seems to indicate that happiness and rightness were different concepts to start with. What was that second concept, then, originally?
Or if "Morality is mere preference!" then why care about human preferences? How is it possible to establish any "ought" at all, in a universe seemingly of mere "is"?
So what we should want, ideally, is a metaethic that:
- Adds up to moral normality, including moral errors, moral progress, and things you should do whether you want to or not;
- Fits naturally into a non-mysterious universe, postulating no exception to reductionism;
- Does not oversimplify humanity's complicated moral arguments and many terminal values;
- Answers all the impossible questions.
I'll present that view tomorrow.
Today's post is devoted to setting up the question.
Consider "free will", already dealt with in these posts. On one level of organization, we have mere physics, particles that make no choices. On another level of organization, we have human minds that extrapolate possible futures and choose between them. How can we control anything, even our own choices, when the universe is deterministic?
To dissolve the puzzle of free will, you have to simultaneously imagine two levels of organization while keeping them conceptually distinct. To get it on a gut level, you have to see the level transition—the way in which free will is how the human decision algorithm feels from inside. (Being told flatly "one level emerges from the other" just relates them by a magical transition rule, "emergence".)
For free will, the key is to understand how your brain computes whether you "could" do something—the algorithm that labels reachable states. Once you understand this label, it does not appear particularly meaningless—"could" makes sense—and the label does not conflict with physics following a deterministic course. If you can see that, you can see that there is no conflict between your feeling of freedom, and deterministic physics. Indeed, I am perfectly willing to say that the feeling of freedom is correct, when the feeling is interpreted correctly.
In the case of morality, once again there are two levels of organization, seemingly quite difficult to fit together:
On one level, there are just particles without a shred of should-ness built into them—just like an electron has no notion of what it "could" do—or just like a flipping coin is not uncertain of its own result.
On another level is the ordinary morality of everyday life: moral errors, moral progress, and things you ought to do whether you want to do them or not.
And in between, the level transition question: What is this should-ness stuff?
Award yourself a point if you thought, "But wait, that problem isn't quite analogous to the one of free will. With free will it was just a question of factual investigation—look at human psychology, figure out how it does in fact generate the feeling of freedom. But here, it won't be enough to figure out how the mind generates its feelings of should-ness. Even after we know, we'll be left with a remaining question—is that how we should calculate should-ness? So it's not just a matter of sheer factual reductionism, it's a moral question."
Award yourself two points if you thought, "...oh, wait, I recognize that pattern: It's one of those strange loops through the meta-level we were talking about earlier."
And if you've been reading along this whole time, you know the answer isn't going to be, "Look at this fundamentally moral stuff!"
Nor even, "Sorry, morality is mere preference, and right-ness is just what serves you or your genes; all your moral intuitions otherwise are wrong, but I won't explain where they come from."
Of the art of answering impossible questions, I have already said much: Indeed, vast segments of my Overcoming Bias posts were created with that specific hidden agenda.
The sequence on anticipation fed into Mysterious Answers to Mysterious Questions, to prevent the Primary Catastrophic Failure of stopping on a poor answer.
The Fake Utility Functions sequence was directed at the problem of oversimplified moral answers particularly.
The sequence on words provided the first and basic illustration of the Mind Projection Fallacy, the understanding of which is one of the Great Keys.
The sequence on words also showed us how to play Rationalist's Taboo, and Replace the Symbol with the Substance. What is "right", if you can't say "good" or "desirable" or "better" or "preferable" or "moral" or "should"? What happens if you try to carry out the operation of replacing the symbol with what it stands for?
And the sequence on quantum physics, among other purposes, was there to teach the fine art of not running away from Scary and Confusing Problems, even if others have failed to solve them, even if great minds failed to solve them for generations. Heroes screw up, time moves on, and each succeeding era gets an entirely new chance.
If you're just joining us here (Belldandy help you) then you might want to think about reading all those posts before, oh, say, tomorrow.
If you've been reading this whole time, then you should think about trying to dissolve the question on your own, before tomorrow. It doesn't require more than 96 insights beyond those already provided.
Next: The Meaning of Right.
Part of The Metaethics Sequence
Previous post: "Changing Your Metaethics"
@Tiiba I think you hit the nail on the head. That is pretty much my view, but you worded it better than I ever could. There is no The Meta-Morality. There are multiple possible memes (moralities and meta-moralities), and some work better than others at producing civilizations and keeping them from falling apart.
@Eliezer I am very interested in reading your meta-morality theory. Do you think it will be universally compelling to humans, or at least to non-brain-damaged humans? Assuming there are humans out there who would not accept the theory, I am curious how those who do accept the theory 'should' react to them.
As for myself, I have my own idea of a meta-morality, but it's kind of rough at the moment. The gist of it involves bubbles. The basic bubble is the individual; then individual bubbles come together to form a new bubble containing the previous bubbles—families, etc.—until you have the country bubbles and the world bubble. Any bubble can run under its own rules as long as it doesn't interfere with other bubbles. If there is interference, the smaller bubbles usually have priority over their own content. So, for example, no unconsented violence, because individual bubbles have priority when it comes to their own bodies (the content of individual bubbles), unless it's the only way to prevent them from harming other individuals. Private gay stuff between two consenting adults is okay, because it's two individual bubbles coming together to make a third bubble, and they have more say about their rules than anyone on the outside. Countries can have their own laws and rules, but they may not hold or harm any smaller bubbles within them. At most they could expel them. Yeah, it's still kind of rough. I've dreamed up this system with the idea that a centralized superintelligence would be enforcing the rules. It's probably not feasible without one. If this seems incomprehensible, just ignore this paragraph.