Followup to: The Bedrock of Fairness
Discussions of morality seem to me to often end up turning around two different intuitions, which I might label morality-as-preference and morality-as-given. The former crowd tends to equate morality with what people want; the latter to regard morality as something you can't change by changing people.
As for me, I have my own notions, which I am working up to presenting. But above all, I try to avoid avoiding difficult questions. Here are what I see as (some of) the difficult questions for the two intuitions:
- For morality-as-preference:
  - Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways?
  - When and why do people change their terminal values? Do the concepts of "moral error" and "moral progress" have referents? Why would anyone want to change what they want?
  - Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"? Does the notion of morality-as-preference really add up to moral normality?
- For morality-as-given:
  - Would it be possible for everyone in the world to be wrong about morality, and wrong about how to update their beliefs about morality, and wrong about how to choose between metamoralities, etcetera? So that there would be a morality, but it would be entirely outside our frame of reference? What distinguishes this state of affairs, from finding a random stone tablet showing the words "You should commit suicide"?
  - How does a world in which a moral proposition is true, differ from a world in which that moral proposition is false? If the answer is "not at all", how does anyone perceive moral givens?
  - Is it better for people to be happy than sad? If so, why does morality look amazingly like godshatter of natural selection?
  - Am I not allowed to construct an alien mind that evaluates morality differently? What will stop me from doing so?
Part of The Metaethics Sequence
Next post: "Is Morality Preference?"
Previous post: "The Bedrock of Fairness"
I think the answer (to why this behavior adds up to normality) lies in the spectrum of semantics of knowledge that people operate with. Some knowledge is primarily perception, and reflects what is clearly possible, or what clearly already is. Another kind of "knowledge" is about goals: it reflects which states of the environment are desirable, and not necessarily which states are in fact possible. These concepts drive behavior, each pushing in its own direction: perception shows what is possible, goals show where to steer the boat. But if these concepts have similar implementations and many intermediate grades, that would explain the resulting confusion: some of the concepts (subgoals) start to indicate things that are somewhat desirable and maybe possible, and so on.
In the case of moral argument, what a person wants corresponds to pure goals, with little feasibility in it ("I want to get the whole pie"). "What is morally right" adds a measure of feasibility, since such a question is posed in a context where many people participate at the same time: everyone getting the whole pie is not feasible, so it is not an answer in that case. Each person is a goal-directed agent, operating towards certain a priori infeasible goals, plotting feasible plans towards them. In the context of society, these plans are developed so as to satisfy the real-world constraints that society imposes.
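As a minimal sketch of this picture (hypothetical, with assumed details such as symmetric agents and a fixed rate of concession), consider agents who each start from the pure goal of claiming the whole pie and scale their claims back until the joint plan satisfies the shared feasibility constraint:

```python
# Toy model (hypothetical, illustration only): each agent's pure goal is
# the whole pie, which is jointly infeasible; the "adapted plan" is the
# claim that survives the shared feasibility constraint.

def adapt_claims(num_agents, shrink=0.9, tolerance=1e-6):
    """Agents start by claiming the whole pie (goal = 1.0), then each
    scales its claim down until the joint plan becomes feasible
    (claims sum to at most one pie)."""
    claims = [1.0] * num_agents               # pure goals: everyone wants it all
    while sum(claims) > 1.0 + tolerance:      # jointly infeasible plan
        claims = [c * shrink for c in claims] # each agent concedes a little
    return claims

print(adapt_claims(4))  # ~[0.23, 0.23, 0.23, 0.23]: near the equal split
```

For symmetric agents the surviving claims land near the equal split: the "morally right" division is nobody's goal, only what remains of each goal once the constraint binds.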
Thus, "morally right" behavior is not the content of a goal-for-society; it is an adapted action plan of individual agents working towards their own goals, goals that are infeasible in this shared context. How to formulate the goal-for-society, I don't know, but it seems to have little to do with what presently forms as morally right behavior. It would need to be derived somehow from the goals of the individual agents.