Followup to: The Bedrock of Fairness
Discussions of morality often seem to me to turn on two different intuitions, which I might label morality-as-preference and morality-as-given. The former crowd tends to equate morality with what people want; the latter, to regard morality as something you can't change by changing people.
As for me, I have my own notions, which I am working up to presenting. But above all, I try to avoid avoiding difficult questions. Here are what I see as (some of) the difficult questions for the two intuitions:
- For morality-as-preference:
  - Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways?
  - When and why do people change their terminal values? Do the concepts of "moral error" and "moral progress" have referents? Why would anyone want to change what they want?
  - Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"? Does the notion of morality-as-preference really add up to moral normality?
- For morality-as-given:
  - Would it be possible for everyone in the world to be wrong about morality, and wrong about how to update their beliefs about morality, and wrong about how to choose between metamoralities, etcetera? So that there would be a morality, but it would be entirely outside our frame of reference? What distinguishes this state of affairs from finding a random stone tablet showing the words "You should commit suicide"?
  - How does a world in which a moral proposition is true differ from a world in which that moral proposition is false? If the answer is "it doesn't", how does anyone perceive moral givens?
  - Is it better for people to be happy than sad? If so, why does morality look amazingly like godshatter of natural selection?
  - Am I not allowed to construct an alien mind that evaluates morality differently? What will stop me from doing so?
Part of The Metaethics Sequence
Next post: "Is Morality Preference?"
Previous post: "The Bedrock of Fairness"
I fall closer to the morality-as-preference camp, although I'd add two major caveats.
One is that some of these preferences are deeply programmed into the human brain (e.g. an impulse like "punish the cheater" can be found in other primates too), as instincts which give us a qualitatively different emotional response than the instincts for direct satisfaction of our desires. The fact that these instincts feel different from (say) hunger or sexual desire goes a long way towards answering your first question for me. A moral impulse feels more like a perception of an external reality than a statement of personal preference, so we treat it differently in argument.
The second caveat is that because these feel like perceptions, humans of all times and places have put much effort into trying to reconcile these moral impulses into a coherent perception of an objective moral order, denying some impulses where they conflict and manufacturing moral feeling in cases where we "should" feel it for consistency's sake. The brain is plastic enough that we can in fact do this to a surprising extent. Now, some reconciliations clearly work better than others from an interior standpoint (i.e. they cause less anguish and cognitive dissonance in the moral agent). This partially answers the second question, about moral progress: it is the act of moving from one attempted framework to another that feels more coherent with one's stronger moral impulses and with one's reasoning.
And as for the third question: the moral impulses are strong instincts, but sometimes other instincts are stronger, and then we feel the conflict as "doing what we shouldn't".
That's where I stand for now. I'm interested to see your interpretation.