In 2007, Chris Matthews of Hardball interviewed David O'Steen, executive director of a pro-life organization. Matthews asked:
I have always wondered something about the pro-life movement. If you believe that killing [a fetus] is murder, why don't you bring murder charges or seek a murder penalty against a woman who has an abortion? Why do you let her off, if you really believe it's murder?1
O'Steen replied that "we have never sought criminal penalties against a woman," which isn't an answer but a restatement of the reason for the question. When pressed, he added that we don't know "how she's been forced into this." When pressed again, O'Steen abandoned these responses and tried to give a consequentialist answer. He claimed that implementing "civil penalties" and taking away the "financial incentives" of abortion doctors would more successfully "protect unborn children."
But this still doesn't answer the question. If you believe that killing a fetus is murder, then a woman seeking an abortion pays a doctor to commit murder. Why don't abortion opponents want to change the laws so that abortion is considered murder and a woman who has an abortion can be charged with paying a doctor to commit murder? Psychologist Robert Kurzban cites this as a classic case of moral rationalization.2
Pro-life demonstrators in Illinois were asked a similar question: "If [abortion] was illegal, should there be a penalty for the women who get abortions illegally?" None of them (on the video) thought that women who had illegal abortions should be punished as murderers, an ample demonstration of moral rationalization. And I'm sure we can all think of examples where it looks like someone has settled on an intuitive moral judgment and then invented rationalizations later.3
More controversially, some have suggested that rule-based deontological moral judgments generally tend to be rationalizations. Perhaps we can even dissolve the debate between deontological intuitions and utilitarian intuitions if we can map the cognitive algorithms that produce them.
Long-time deontologists and utilitarians may already be up in arms to fight another war between Blues and Greens, but these are empirical questions. What do the scientific studies suggest?
Utilitarian and Deontological Processes
A runaway trolley is about to run over and kill five people, but you can save them by hitting a switch that will put the trolley on a side track where it will only kill one person. Do you throw the switch? When confronted with this switch dilemma, most people say it is morally good to divert the trolley,4 thereby achieving the utilitarian 'greater good'.
Now, consider the footbridge dilemma. Again, a runaway trolley threatens five people, and the only way to save them is to push a large person off a footbridge onto the tracks, which will stop the trolley but kill the person you push. (Your body is too small to stop the trolley.) Do you push the large person off the bridge? Here, most people say it's wrong to trade one life for five, allowing a deontological commitment to individual rights to trump utilitarian considerations of the greater good.
Researchers presented subjects with a variety of 'impersonal' dilemmas (including the switch dilemma) and 'up-close-and-personal' dilemmas (including the footbridge dilemma). Personal dilemmas preferentially engaged brain areas associated with emotion. Impersonal dilemmas preferentially engaged the regions of the brain associated with working memory and cognitive control.5
This suggested a dual-process theory of moral judgment, according to which the footbridge dilemma elicits a conflict between emotional intuition ("you must not push people off bridges!") and utilitarian calculation ("pushing the person off the bridge will result in the fewest deaths"). In the footbridge case, emotional intuition wins out in most people.
But now, consider the crying baby dilemma from the final episode of M*A*S*H:
It's wartime. You and your fellow villagers are hiding from nearby enemy soldiers in a basement. Your baby starts to cry, and you cover your baby's mouth to block the sound. If you remove your hand, your baby will cry loudly, and the soldiers will hear. They will find you... and they will kill all of you. If you do not remove your hand, your baby will smother to death. Is it morally acceptable to smother your baby to death in order to save yourself and the other villagers?6
Here, people take a long time to answer, and they show no consensus in their answers. If the dual-process theory of moral judgment is correct, then people considering the crying baby dilemma should exhibit increased activity in the ACC (a region associated with response conflict), and in regions associated with cognitive control (for overriding a potent emotional response with utilitarian calculation). Also, those who eventually choose the characteristically utilitarian answer (save the most lives) over the characteristically deontological answer (don't kill the baby) should exhibit comparatively more activity in brain regions associated with working memory and cognitive control. All three predictions turn out to be true.7
Moreover, patients with dementia or brain lesions that cause "emotional blunting" are disproportionately likely to approve of utilitarian action in the footbridge dilemma,8 and cognitive load manipulations that keep working memory occupied slow down utilitarian judgments but not deontological judgments.9
Studies of individual differences also seem to support the dual-process theory. Individuals who (1) are high in "need for cognition" and low in "faith in intuition", (2) score well on the Cognitive Reflection Test, or (3) have unusually high working memory capacity all give more utilitarian judgments.10
This leads us to Joshua Greene's bold claim:
...deontological judgments tend to be driven by emotional responses, and... deontological philosophy, rather than being grounded in moral reasoning, is to a large extent an exercise in moral rationalization. This is in contrast to consequentialism, which, I will argue, arises from rather different psychological processes, ones that are more 'cognitive,' and more likely to involve genuine moral reasoning...
[Psychologically,] deontological moral philosophy really is... an attempt to produce rational justifications for emotionally driven moral judgments, and not an attempt to reach moral conclusions on the basis of moral reasoning.11
Cognition and Emotion
Greene explains the difference between 'cognitive' and 'emotional' processes in the brain (though both involve information processing, and so are 'cognitive' in a broader sense):
...'cognitive' processes are especially important for reasoning, planning, manipulating information in working memory, controlling impulses, and 'higher executive functions' more generally. Moreover, these functions tend to be associated with certain parts of the brain, primarily the dorsolateral surfaces of the prefrontal cortex and parietal lobes... Emotion, in contrast, tends to be associated with other parts of the brain, such as the amygdala and the medial surfaces of the frontal and parietal lobes... And while the term 'emotion' can refer to stable states such as moods, here we will primarily be concerned with emotions subserved by processes that in addition to being valenced, are quick and automatic, though not necessarily conscious.
Since we are concerned with two kinds of moral judgment (deontological and consequentialist) and two kinds of neurological process (cognitive and emotional), we have four empirical possibilities:
First, it could be that both kinds of moral judgment are generally 'cognitive', as Kohlberg’s theories suggest (Kohlberg, 1971). At the other extreme, it could be that both kinds of moral judgment are primarily emotional, as Haidt’s view suggests (Haidt, 2001). Then there is the historical stereotype, according to which consequentialism is more emotional (emerging from the 'sentimentalist' tradition of David Hume (1740) and Adam Smith (1759)), while deontology is more 'cognitive' [including the Kantian 'rationalist' tradition: see Kant (1785)]. Finally, there is the view for which I will argue, that deontology is more emotionally driven while consequentialism is more 'cognitive.'
We have already seen the neuroscientific evidence in favor of Greene's view. Now, let us turn to further evidence from the work of Jon Haidt.
Emotion and Deontological Judgments
Haidt & colleagues (1993) presented subjects with a sequence of harmless actions, for example:
- A son promises his dying mother that he will visit her grave every day after she has died, but then doesn’t because he is busy.
- A woman uses an old American flag to clean the bathroom.
- A family eats its dog after it has been killed accidentally by a car.
- A brother and sister kiss on the lips.
- A man masturbates using a dead chicken before cooking and eating it.
For each action, subjects were asked questions like: Is this action wrong? Why? Does it hurt anyone? If someone did this, would it bother you? Greene summarizes the results:
When people say that such actions are wrong, why do they say so? One hypothesis is that these actions are perceived as harmful, whether or not they really are... Kissing siblings could cause themselves psychological damage. Masturbating with a chicken could spread disease, etc. If this hypothesis is correct, then we would expect people’s answers to the question "Does this action hurt anyone?" to correlate with their degree of moral condemnation... Alternatively, if emotions drive moral condemnation in these cases, then we would expect people’s answers to the question "If you saw this, would it bother you?" to better predict their answers to the moral questions posed.
If you're following along, it may not surprise you that emotions seemed to be driving the deontological condemnation of harmless actions. Moreover, both education and adulthood were correlated with more consequentialist judgments. (Cognitive control of basic emotional reactions is something that develops during adolescence.12) Greene reminds us:
These... findings make sense in light of the model of moral judgment we have been developing, according to which intuitive emotional responses drive prepotent moral intuitions while 'cognitive' control processes sometimes rein them in.
But there is more direct evidence of the link between emotion and the deontological condemnation of harmless actions.
Wheatley & Haidt (2005) gathered hypnotizable subjects and gave some of them a hypnotic suggestion to feel disgust upon reading the word 'often', while giving others a hypnotic suggestion to feel disgust upon reading the word 'take'. The researchers then showed these subjects a variety of scenarios, some of them involving no harm. (For example, two second cousins have a relationship in which they "take weekend trips to romantic hotels" or else "often go on weekend trips to romantic hotels".) As expected, subjects who received the wordings they had been primed to feel disgust toward judged the couple's actions as more morally condemnable than other subjects did.
In a second experiment, Wheatley and Haidt used the same technique and had subjects respond to a scenario in which a person did nothing remotely wrong: a student "often picks" or "tries to take up" broad topics of discussion at meetings. Still, many subjects who were given the matching hypnotic suggestion rated the student's actions as morally wrong. When asked why, they invented rationalizations like "It just seems like he’s up to something" or "It just seems so weird and disgusting" or "I don’t know [why it’s wrong], it just is."
In other studies, researchers implemented a disgust condition by placing some subjects at a dirty desk or in the presence of fart spray. As before, those in the disgust condition were more likely to rate harmless actions as morally wrong than other subjects were.13
Finally, consider that the dual-process theory of moral judgment predicts that deontological judgments will be quicker than utilitarian ones, because deontological judgments use emotional and largely unconscious brain modules while utilitarian judgments require slow, conscious calculation. Suter & Hertwig (2011) presented subjects with a variety of moral dilemmas and prodded them either to give their judgments quickly or to take their time and deliberate thoroughly. As predicted, faster responses predicted more deontological judgments.
Summing Up
We are a species prone to emotional moral judgment, and to rationalization ('confabulation'). And, Greene writes,
What should we expect from creatures who exhibit social and moral behavior that is driven largely by intuitive emotional responses and who are prone to rationalization of their behaviors? The answer, I believe, is deontological moral philosophy...
Whether or not we can ultimately justify pushing the man off the footbridge, it will always feel wrong. And what better way to express that feeling of non-negotiable absolute wrongness than via the most central of deontological concepts, the concept of a right: You can’t push him to his death because that would be a violation of his rights.
Deontology, then, is a kind of moral confabulation. We have strong feelings that tell us in clear and certain terms that some things simply cannot be done and that other things simply must be done. But it is not obvious how to make sense of these feelings, and so we, with the help of some especially creative philosophers, make up a rationally appealing story: There are these things called 'rights' which people have, and when someone has a right you can’t do anything that would take it away. It doesn’t matter if the guy on the footbridge is toward the end of his natural life, or if there are seven people on the tracks below instead of five. If the man has a right, then the man has a right. As John Rawls... famously said, "Each person possesses an inviolability founded on justice that even the welfare of society as a whole cannot override"... These are applause lines because they make emotional sense.
Of course, utilitarian moral judgment is not emotionless. Emotion is probably what leads us to label harm as a 'bad' thing, for example. But utilitarian moral judgment is, as we've seen, particularly demanding of 'cognitive' processes: calculation, the weighing of competing concerns, the adding and averaging of value, and so on. Utilitarian moral judgment uses the same meso-limbic regions that track a stimulus' reward magnitude, reward probability, and expected value.14
This does not prove the case that deontological moral judgments are usually rationalizations. But many lines of converging evidence make this a decent hypothesis. And now we can draw our neural map:15
And up until March 18th of this year, Greene had a pretty compelling case for his position that deontological judgments are generally just rationalizations.
And then, Guy Kahane et al. (2011) threw Greene's theory into doubt by testing separately for the content (deontological vs. utilitarian) and the intuitiveness (intuitive vs. counterintuitive) of moral judgments. The authors summarize their results:
Previous neuroimaging studies reported that utilitarian judgments in dilemmas involving extreme harm were associated with activation in the DLPFC and parietal lobe (Greene et al., 2004). This finding has been taken as evidence that utilitarian judgment is generally driven by controlled processing (Greene, 2008). The behavioural and neural data we obtained suggest instead that differences between utilitarian and deontological judgments in dilemmas involving extreme harm largely reflect differences in intuitiveness rather than in content.
...When we controlled for content, these analyses showed considerable overlap for intuitiveness. In contrast, when we controlled for intuitiveness, only little, if any, overlap was found for content. Our results thus speak against the influential interpretation of previous neuroimaging studies as supporting a general association between deontological judgment and automatic processing, and between utilitarian judgment and controlled processing.
[This evidence suggests...] that behavioural and neural differences in responses to such dilemmas are largely due to differences in intuitiveness, not to general differences between utilitarian and deontological judgment.
So we'll have to wait for more studies to unravel the mystery of whether deontological moral judgments are generally rationalizations.
By email, Greene told me he suspected Kahane's 'alternative theory' wasn't much of an alternative to what he (Greene) was proposing in the first place. In his paper, Greene discusses the passage where Kant says it's wrong to lie to prevent a madman from killing someone, and cites this as an example of a case in which a deontological judgment might be more controlled, while the utilitarian judgment is more automatic. Greene's central claim is that when there's a conflict between rights and duties on the one hand and promoting the greater good on the other, it's typically controlled cognition on the utilitarian side and emotional intuition on the deontological side.
Update: Greene's full reply to Kahane et al. is now available.
But even if Greene's theory is right, humans may still need to use deontological rules because we run on corrupted hardware.
Notes
1 Hardball for November 13, 2007. Here is the transcript.
2 Kurzban (2011), p. 193.
3 Also see Jon Haidt's unpublished manuscript on moral dumbfounding, and Hirstein (2005).
4 Petrinovich et al. (1993); Petrinovich & O’Neill (1996).
5 Greene et al. (2001, 2004).
6 Greene (2009).
7 Greene et al. (2004).
8 Mendez et al. (2005); Koenigs et al. (2007); Ciaramelli et al. (2007).
9 Greene et al. (2008).
10 Bartels (2008); Hardman (2008); Moore et al. (2008).
11 The rest of the Joshua Greene quotes from this article are from Greene (2007).
12 Anderson et al. (2001); Paus et al. (1999); Steinberg & Scott (2003).
13 Schnall et al. (2004); Baron & Thomley (1994).
14 See Cushman et al. (2010).
15 From Greene (2009).
References
Anderson, Anderson, Northam, Jacobs, & Catroppa (2001). Development of executive functions through late childhood and adolescence in an Australian sample. Developmental Neuropsychology, 20: 385-406.
Baron & Thomley (1994). A Whiff of Reality: Positive Affect as a Potential Mediator of the Effects of Pleasant Fragrances on Task Performance and Helping. Environment and Behavior, 26: 766-784.
Bartels (2008). Principled moral sentiment and the flexibility of moral judgment and decision making. Cognition, 108: 381-417.
Ciaramelli, Muccioli, Ladavas, & di Pellegrino (2007). Selective deficit in personal moral judgment following damage to ventromedial prefrontal cortex. Social Cognitive and Affective Neuroscience, 2: 84-92.
Cushman, Young, & Greene (2010). Multi-system moral psychology. In Doris (ed.), The Moral Psychology Handbook (pp. 47-71). Oxford University Press.
Greene, Sommerville, Nystrom, Darley, & Cohen (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293: 2105-2108.
Greene, Nystrom, Engell, Darley, & Cohen (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44: 389-400.
Greene (2007). The secret joke of Kant's soul. In Sinnott-Armstrong (ed.), Moral Psychology Vol. 3: The Neuroscience of Morality (pp. 35-79). MIT Press.
Greene, Morelli, Lowenberg, Nystrom, & Cohen (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107: 1144-1154.
Greene (2009). The cognitive neuroscience of moral judgment. In Gazzaniga (ed.), The Cognitive Neurosciences, Fourth Edition (pp. 987–999). MIT Press.
Haidt, Koller, & Dias (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65: 613-628.
Hardman (2008). Moral dilemmas: Who makes utilitarian choices?
Haidt (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108: 814-834.
Hirstein (2005). Brain Fiction: Self-Deception and the Riddle of Confabulation. MIT Press.
Hume (1740). A Treatise of Human Nature.
Kahane, Wiech, Shackel, Farias, Savulescu, & Tracey (2011). The neural basis of intuitive and counterintuitive moral judgment. Social Cognitive & Affective Neuroscience.
Kant (1785). Groundwork of the Metaphysics of Morals.
Koenigs, Young, Cushman, Adolphs, Tranel, Damasio, & Hauser (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature, 446: 908–911.
Kohlberg (1971). From is to ought: How to commit the naturalistic fallacy and get away with it in the study of moral development. In Mischel (ed.), Cognitive development and epistemology (pp. 151–235). Academic Press.
Kurzban (2011). Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. Princeton University Press.
Mendez, Anderson, & Shapira (2005). An investigation of moral judgment in fronto-temporal dementia. Cognitive and Behavioral Neurology, 18: 193–197.
Moore, Clark, & Kane (2008). Who shalt not kill?: Individual differences in working memory capacity, executive control, and moral judgment. Psychological Science, 19: 549-557.
Paus, Zijdenbos, Worsley, Collins, Blumenthal, Giedd, Rapoport, & Evans (1999). Structural maturation of neural pathways in children and adolescents: In vivo study. Science, 283: 1908-1911.
Petrinovich, O'Neill, & Jorgensen (1993). An empirical study of moral intuitions: Toward an evolutionary ethics. Journal of Personality and Social Psychology, 64: 467-478.
Petrinovich & O’Neill (1996). Influence of wording and framing effects on moral intuitions. Ethology and Sociobiology, 17: 145-171.
Schnall, Haidt, & Clore (2004). Irrelevant disgust makes moral judgment more severe, for those who listen to their bodies. Unpublished manuscript.
Smith (1759). The Theory of Moral Sentiments.
Steinberg & Scott (2003). Less guilty by reason of adolescence: Developmental immaturity, diminished responsibility, and the juvenile death penalty. American Psychologist, 58: 1009-1018.
Suter & Hertwig (2011). Time and moral judgment. Cognition, 119: 454-458.
Valdesolo & DeSteno (2006). Manipulations of emotional context shape moral judgment. Psychological Science, 17: 476-477.
Wheatley & Haidt (2005). Hypnotically induced disgust makes moral judgments more severe. Psychological Science, 16: 780-784.