Followup to: Is Morality Given?, Is Morality Preference?, Moral Complexities, Could Anything Be Right?, The Bedrock of Fairness, ...
Intuitions about morality seem to split up into two broad camps: morality-as-given and morality-as-preference.
Some perceive morality as a fixed given, independent of our whims, about which we form changeable beliefs. This view's great advantage is that it seems more normal, up at the level of everyday moral conversations: it is the intuition underlying our ordinary notions of "moral error", "moral progress", "moral argument", or "just because you want to murder someone doesn't make it right".
Others choose to describe morality as a preference—as a desire in some particular person; nowhere else is it written. This view's great advantage is that it has an easier time living with reductionism—fitting the notion of "morality" into a universe of mere physics. It has an easier time at the meta level, answering questions like "What is morality?" and "Where does morality come from?"
Both intuitions must contend with seemingly impossible questions. For example, Moore's Open Question: Even if you come up with some simple answer that fits on a T-shirt, like "Happiness is the sum total of goodness!", you would still need to argue the identity. It isn't instantly obvious to everyone that goodness is happiness, which seems to indicate that happiness and rightness were different concepts to start with. What was that second concept, then, originally?
Or if "Morality is mere preference!" then why care about human preferences? How is it possible to establish any "ought" at all, in a universe seemingly of mere "is"?
So what we should want, ideally, is a metaethic that:
1. Adds up to moral normality, including moral errors, moral progress, and things you should do whether you want to or not;
2. Fits naturally into a non-mysterious universe, postulating no exception to reductionism;
3. Does not oversimplify humanity's complicated moral arguments and many terminal values;
4. Answers all the impossible questions.
I'll present that view tomorrow.
Today's post is devoted to setting up the question.
Consider "free will", already dealt with in these posts. On one level of organization, we have mere physics, particles that make no choices. On another level of organization, we have human minds that extrapolate possible futures and choose between them. How can we control anything, even our own choices, when the universe is deterministic?
To dissolve the puzzle of free will, you have to simultaneously imagine two levels of organization while keeping them conceptually distinct. To get it on a gut level, you have to see the level transition—the way in which free will is how the human decision algorithm feels from inside. (Being told flatly "one level emerges from the other" just relates them by a magical transition rule, "emergence".)
For free will, the key is to understand how your brain computes whether you "could" do something—the algorithm that labels reachable states. Once you understand this label, it does not appear particularly meaningless—"could" makes sense—and the label does not conflict with physics following a deterministic course. If you can see that, you can see that there is no conflict between your feeling of freedom, and deterministic physics. Indeed, I am perfectly willing to say that the feeling of freedom is correct, when the feeling is interpreted correctly.
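If it helps to see that idea concretely, here is a purely illustrative toy sketch (mine, not from the free-will posts; the state graph and names are invented): a fully deterministic planner that searches a small graph of world-states and attaches the label "could reach" to exactly those states its search finds a path to.

```python
# Illustrative toy: "could" as a label computed by a deterministic planner.
# The world-states, transitions, and function names here are invented for this sketch.

from collections import deque

# A deterministic toy "world": each state maps to the states an action can lead to.
TRANSITIONS = {
    "in_bed":       ["at_desk", "in_kitchen"],
    "at_desk":      ["writing_post", "in_kitchen"],
    "in_kitchen":   ["eating", "at_desk"],
    "writing_post": [],
    "eating":       [],
}

def reachable(start):
    """Return every state the planner can reach from `start` (breadth-first search)."""
    seen = {start}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        for nxt in TRANSITIONS[state]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def could(start, goal):
    """The planner's 'could' label: True iff `goal` is reachable from `start`."""
    return goal in reachable(start)

print(could("in_bed", "writing_post"))   # True: a reachable future, labeled "could"
print(could("writing_post", "in_bed"))   # False: no path, so not "could"
```

The only point of the toy is that "could" here names the output of a search the decision algorithm performs over its own options; nothing about that label requires an exception to deterministic physics.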
In the case of morality, once again there are two levels of organization, seemingly quite difficult to fit together:
On one level, there are just particles without a shred of should-ness built into them—just like an electron has no notion of what it "could" do—or just like a flipping coin is not uncertain of its own result.
On another level is the ordinary morality of everyday life: moral errors, moral progress, and things you ought to do whether you want to do them or not.
And in between, the level transition question: What is this should-ness stuff?
Award yourself a point if you thought, "But wait, that problem isn't quite analogous to the one of free will. With free will it was just a question of factual investigation—look at human psychology, figure out how it does in fact generate the feeling of freedom. But here, it won't be enough to figure out how the mind generates its feelings of should-ness. Even after we know, we'll be left with a remaining question—is that how we should calculate should-ness? So it's not just a matter of sheer factual reductionism, it's a moral question."
Award yourself two points if you thought, "...oh, wait, I recognize that pattern: It's one of those strange loops through the meta-level we were talking about earlier."
And if you've been reading along this whole time, you know the answer isn't going to be, "Look at this fundamentally moral stuff!"
Nor even, "Sorry, morality is mere preference, and right-ness is just what serves you or your genes; all your moral intuitions otherwise are wrong, but I won't explain where they come from."
Of the art of answering impossible questions, I have already said much: Indeed, vast segments of my Overcoming Bias posts were created with that specific hidden agenda.
The sequence on anticipation fed into Mysterious Answers to Mysterious Questions, to prevent the Primary Catastrophic Failure of stopping on a poor answer.
The Fake Utility Functions sequence was directed at the problem of oversimplified moral answers particularly.
The sequence on words provided the first and basic illustration of the Mind Projection Fallacy, the understanding of which is one of the Great Keys.
The sequence on words also showed us how to play Rationalist's Taboo, and Replace the Symbol with the Substance. What is "right", if you can't say "good" or "desirable" or "better" or "preferable" or "moral" or "should"? What happens if you try to carry out the operation of replacing the symbol with what it stands for?
And the sequence on quantum physics, among other purposes, was there to teach the fine art of not running away from Scary and Confusing Problems, even if others have failed to solve them, even if great minds failed to solve them for generations. Heroes screw up, time moves on, and each succeeding era gets an entirely new chance.
If you're just joining us here (Belldandy help you) then you might want to think about reading all those posts before, oh, say, tomorrow.
If you've been reading this whole time, then you should think about trying to dissolve the question on your own, before tomorrow. It doesn't require more than 96 insights beyond those already provided.
Next: The Meaning of Right.
Part of The Metaethics Sequence
Next post: "The Meaning of Right"
Previous post: "Changing Your Metaethics"
Well, I find that my metamorality meets those criteria, with one exception.
To reiterate: I think that the foundations of morality as we understand it are certain evolved impulses like the ones we can find in other primates (maternal love, the desire to punish a cheater, etc.). These are like other emotions, with one key difference, a social component: we expect and rely on others having the same reaction, and accordingly we experience other emotions as more subjective and our moral impulses as more objective.
Note that when I'm afraid of something, and you're not, this may surprise me but doesn't anger me; but if I feel moral outrage at something, and you don't, then I'm liable to get angry with you.
But of course our moralities aren't just these few basic impulses. Given our capacity for complex thought and for passing down complex cultures, we've built up many systems of morality that try to integrate all these impulses. It's a testament to the power of conscious thought to reshape our very perceptions of the world that we can get away with this: we foment one moral impulse to restrain another when our system tells us to, and we can work up a moral sentiment in extended contexts when our system calls for it. (When we fail to correctly extrapolate and apply our moral system, we later think of this as a moral error.)
Of course, some moral systems cohere logically better than others (which is good if we want to think of them as objective), some have better observable consequences, and some require less strenuous effort at reinterpreting experience. Moving from one moral system to another which improves in some of these areas is generally what we call "moral progress".
This account has no problems with #2 and #3; I don't see an "impossible question" suggesting itself (though I'm open to suggestions). The only divergence from your desired properties is that it claims only that we can hardly help but believe that some things are objectively right, whether we want them to be or not. It's not impossible for an alien species to evolve to conscious thought without any such concept of objective morality, or with one that differs from ours on the most crucial of points (say, our immediate moral pain at seeing something like us suffer); and there'd be nothing in the universe to say which one of us is "right".
In essence, I think that Subhan is weakly on the right track, but he doesn't realize that some human impulses are stronger than anything we'd call "preference", or that what is at stake is a mix of moral impulse, reasoning, and reclassification of experience, far more complex than the interactions he supposes. Since we as humans have in common both the first-order moral impulses and the perception that these are objective and thus ought to be logically coherent, we aren't in fact free to construct our moral systems with too many degrees of freedom.
Sorry for the overlong comment. I'm eager to see what tomorrow's post will bring...