When I say I think I can solve (some of) metaethics, what exactly is it that I think I can solve?
First, we must distinguish the study of ethics or morality from the anthropology of moral belief and practice. The first one asks: "What is right?" The second one asks: "What do people think is right?" Of course, one can inform the other, but it's important not to confuse the two. One can correctly say that different cultures have different 'morals' in that they have different moral beliefs and practices, but this may not answer the question of whether or not they are behaving in morally right ways.
My focus is metaethics, so I'll discuss the anthropology of moral belief and practice only when it is relevant for making points about metaethics.
So what is metaethics? Many people break the field of ethics into three sub-fields: applied ethics, normative ethics, and metaethics.
Applied ethics: Is abortion morally right? How should we treat animals? What political and economic systems are most moral? What are the moral responsibilities of businesses? How should doctors respond to complex and uncertain situations? When is lying acceptable? What kinds of sex are right or wrong? Is euthanasia acceptable?
Normative ethics: What moral principles should we use in order to decide how to treat animals, when lying is acceptable, and so on? Is morality decided by what produces the greatest good for the greatest number? Is it decided by a list of unbreakable rules? Is it decided by a list of character virtues? Is it decided by a hypothetical social contract drafted under ideal circumstances?
Metaethics: What does moral language mean? Do moral facts exist? If so, what are they like, and are they reducible to natural facts? How can we know whether moral judgments are true or false? Is there a connection between making a moral judgment and being motivated to abide by it? Are moral judgments objective or subjective, relative or absolute? Does it make sense to talk about moral progress?
Others prefer to combine applied ethics and normative ethics so that the breakdown becomes: normative ethics vs. metaethics, or 'first order' moral questions (normative ethics) vs. 'second order' questions (metaethics).
Mainstream views in metaethics
To illustrate how people can give different answers to the questions of metaethics, let me summarize some of the mainstream philosophical positions in metaethics.
Cognitivism vs. non-cognitivism: This is a debate about what is happening when people engage in moral discourse. When someone says "Murder is wrong," are they trying to state a fact about murder, that it has the property of being wrong? Or are they merely expressing a negative emotion toward murder, as if they had gasped aloud and said "Murder!" with a disapproving tone?
Another way of saying this is that cognitivists think moral discourse is 'truth-apt' - that is, moral statements are the kinds of things that can be true or false. Some cognitivists think that all moral claims are in fact false (error theory), just as the atheist thinks that claims about gods are usually meant to be fact-stating but in fact are all false because gods don't exist.1 Other cognitivists think that at least some moral claims are true. Naturalism holds that moral judgments are true or false because of natural facts,2 while non-naturalism holds that moral judgments are true or false because of non-natural facts.3 Weak cognitivism holds that moral judgments can be true or false not because they agree with certain (natural or non-natural) opinion-independent facts, but because our considered opinions determine the moral facts.4
Non-cognitivists, in contrast, tend to think that moral discourse is not truth-apt. Ayer (1936) held that moral sentences express our emotions ("Murder? Yuck!") about certain actions. This is called emotivism or expressivism. Another theory is prescriptivism, the idea that moral sentences express commands ("Don't murder!").5 Or perhaps moral judgments express our acceptance of certain norms (norm expressivism).6 Or maybe our moral judgments express our dispositions to form sentiments of approval or disapproval (quasi-realism).7
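The branching in the last two paragraphs can be summarized as a short decision procedure. Here is a minimal sketch in Python; the questions and position labels come from the taxonomy above, but the function itself (its name, arguments, and structure) is my own illustrative shorthand, not standard philosophical machinery:

```python
# A toy decision procedure for locating a mainstream metaethical position.
# Each argument is a yes/no answer to one of the questions in the text.

def classify(truth_apt, some_claims_true=False, opinion_independent=False,
             natural_facts=False):
    """Return a metaethical position given answers about moral discourse."""
    if not truth_apt:
        return "non-cognitivism"    # e.g. emotivism, prescriptivism
    if not some_claims_true:
        return "error theory"       # truth-apt, but all such claims are false
    if not opinion_independent:
        return "weak cognitivism"   # our considered opinions fix the facts
    if natural_facts:
        return "naturalism"         # true in virtue of natural facts
    return "non-naturalism"         # true in virtue of non-natural facts

print(classify(truth_apt=True, some_claims_true=True,
               opinion_independent=True, natural_facts=True))
# naturalism
```

Of course, real metaethical positions blur these lines in ways a flowchart can't capture, but the sketch shows how the standard labels hang together.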
Moral psychology: One major debate in moral psychology concerns whether moral judgments require some (defeasible) motivation to adhere to the moral judgment (motivational internalism), or whether one can make a moral judgment without being motivated to adhere to it (motivational externalism). Another debate concerns whether motivation depends on both beliefs and desires (the Humean theory of motivation), or whether some beliefs are by themselves intrinsically motivating (non-Humean theories of motivation).
More recently, researchers have run a number of experiments to test the mechanisms by which people make moral judgments. I will list a few of the most surprising and famous results:
- Whether we judge an action as 'intentional' or not often depends on the judged goodness or badness of the action, not the internal states of the agent.8
- Our moral judgments are significantly affected by whether we are in the presence of freshly baked bread, or of fart spray at a concentration too low to consciously detect.9
- Our moral judgments are greatly affected by applying transcranial magnetic stimulation to a brain region that processes theory of mind (the right temporoparietal junction).10
- People tend to insist that certain things are right or wrong even when a hypothetical situation is constructed such that they admit they can give no reason for their judgment.11
- Utilitarian judgments tend to come from our recently evolved neocortex, while deontological judgments tend to come from evolutionarily older parts of our brains.12
- People give harsher moral judgments when they feel clean.13
Moral epistemology: Different views on cognitivism vs. non-cognitivism and moral psychology suggest different views of moral epistemology. How can we know moral facts? Non-cognitivists and error theorists think there are no moral facts to be known. Those who believe moral facts answer to non-natural facts tend to think that moral knowledge comes from intuition, which somehow has access to non-natural facts. Moral naturalists tend to think that moral facts can be accessed simply by doing science.
Tying it all together
I will not be trying very hard to fit my pluralistic moral reductionism into these categories. I'll be arguing about the substance, not the symbols. But it still helps to have a concept of the subject matter by way of such examples.
Maybe mainstream metaethics will make more sense in flowchart form. Here's a flowchart I adapted from Miller (2003). If you don't understand the bottom-most branching, read chapter 9 of Miller's book or else just don't worry about it. (Click through for full size.)
Next post: Conceptual Analysis and Moral Theory
Previous post: Heading Toward: No-Nonsense Metaethics
Notes
1 This is not quite correct. The error theorist can hold that a statement like "Murder is not wrong" is true, since he thinks that murder is neither wrong nor right. Rather, the error theorist claims that all moral statements which presuppose the existence of a moral property are false, because no such moral properties exist. See Joyce (2001). Mackie (1977) is the classic statement of error theory.
2 Sturgeon (1988); Boyd (1988); Brink (1989); Brandt (1979); Railton (1986); Jackson (1998). I have written introductions to the three major versions of moral naturalism: Cornell realism, Railton's moral reductionism (1, 2), and Jackson's moral functionalism.
3 Moore (1903); McDowell (1998); Wiggins (1987).
4 For an overview of such theories, see Miller (2003), chapter 7.
5 See Carnap (1937), pp. 23-25; Hare (1952).
6 Gibbard (1990).
7 Blackburn (1984).
8 The Knobe Effect. See Knobe (2003).
9 Schnall et al. (2008); Baron & Thomley (1994).
10 Young et al. (2010). I interviewed the author of this study here.
11 This is moral dumbfounding. See Haidt (2001).
12 Greene (2007).
13 Zhong et al. (2010).
References
Baron & Thomley (1994). A Whiff of Reality: Positive Affect as a Potential Mediator of the Effects of Pleasant Fragrances on Task Performance and Helping. Environment and Behavior, 26(6): 766-784.
Blackburn (1984). Spreading the Word. Oxford University Press.
Boyd (1988). How to be a Moral Realist. In Sayre-McCord (ed.), Essays on Moral Realism (pp. 181-228). Cornell University Press.

Brandt (1979). A Theory of the Good and the Right. Oxford University Press.

Brink (1989). Moral Realism and the Foundations of Ethics. Cambridge University Press.
Carnap (1937). Philosophy and Logical Syntax. Kegan Paul, Trench, Trubner & Co.
Gibbard (1990). Wise Choices, Apt Feelings. Clarendon Press.
Greene (2007). The secret joke of Kant's soul. In Sinnott-Armstrong (ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development. MIT Press.
Haidt (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108: 814-834.
Hare (1952). The Language of Morals. Oxford University Press.
Jackson (1998). From Metaphysics to Ethics. Oxford University Press.
Joyce (2001). The Myth of Morality. Cambridge University Press.
Knobe (2003). Intentional Action and Side Effects in Ordinary Language. Analysis, 63: 190-193.
Mackie (1977). Ethics: Inventing Right and Wrong. Penguin.
McDowell (1998). Mind, Value, and Reality. Harvard University Press.
Miller (2003). An Introduction to Contemporary Metaethics. Polity.
Moore (1903). Principia Ethica. Cambridge University Press.
Railton (1986). Moral realism. Philosophical Review, 95: 163-207.

Schnall, Haidt, Clore, & Jordan (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34(8): 1096-1109.

Sturgeon (1988). Moral explanations. In Sayre-McCord (ed.), Essays on Moral Realism (pp. 229-255). Cornell University Press.
Wiggins (1987). A sensible subjectivism. In Needs, Values, Truth (pp. 185-214). Blackwell.
Young, Camprodon, Hauser, Pascual-Leone, & Saxe (2010). Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgments. Proceedings of the National Academy of Sciences, 107: 6753-6758.
Zhong, Strejcek, & Sivanathan (2010). A clean self can render harsh moral judgment. Journal of Experimental Social Psychology, 46(5): 859-862.