I used to think I was a very firm deontologist, but that was mainly because I didn't want ethical rules to be bent willy-nilly to maximize something simple like "number of lives saved." I didn't, for example, want torture to be legal. I wanted to live in a world with "rights" -- that is, ethical rules that ought not to be broken even when the circumstances change, for all possible circumstances with non-negligible probability. You don't want to live in a world where people are constantly reconsidering "Hm, is it worth it at this moment to not steal Sarah's property?" You want to live in a world where people understand that stealing is wrong and that's that. You want some rigidity.
I think a lot of self-identified deontologists think along these lines. They associate utilitarianism with "the greatest good for the greatest number," and then imagine things like "it is for the good of this great Nation that you be drafted to dig ditches this year" and they shudder.
That shudder isn't necessarily a "confabulation." The reason you shudder at the thought of a moral rule to "maximize utility" is that there is no d...
This sounds like two-tier consequentialism -- "as it happens, when you take second-, third-, and fourth-order consequences into account, the utility-maximizing course looks a hell of a lot like respecting some set of inherent rights of individuals."
I've sometimes thought of deontological rules as something like a sanity check on utilitarian reasoning.
If, as you are reasoning your way to maximum utility, you come up with a result that ends, "... therefore, I should kill a lot of innocent people," or for that matter "... therefore, I'm justified in scamming people out of their life savings to get the resources I need," the role of deontological rules against murder or cheating is to make you at least stop and think about it really hard. And, almost certainly, find a hole in your reasoning.
It is imaginable — I wouldn't say likely — that there are "universal moral laws" for human beings, which take the following form: "If you come to the conclusion 'Utility is maximized if I murder these innocent people', then it is more likely that your human brain has glitched and failed to reason correctly, than that your conclusion is correct." In other words, the probability of a positive-utility outcome from murder is less than the probability of erroneous reasoning leading to the belief in that outcome.
A consequence of this is that the better predictor you are, the more things can be moral for you to do if you conclude they maximize utility. It is imaginable that no human can arrive at the conclusion "I should push that fat guy in front of the trolley" with less than a 50% probability of error, but that some superhuman predictor could.
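To make that threshold explicit, here is a rough formalization (the notation is my own, not drawn from any of the sources discussed): write C for the conclusion "doing X maximizes utility." Acting on C is warranted only when

\[ P(C \text{ true} \mid \text{you concluded } C) \;>\; P(\text{your reasoning erred} \mid \text{you concluded } C). \]

If "erred" simply means concluding C when C is in fact false, the two conditional probabilities sum to one, so the condition reduces to better-than-even reliability:

\[ P(C \text{ true} \mid \text{you concluded } C) > \tfrac{1}{2}, \]

which matches the "<50% probability of error" threshold above. A better predictor has higher reliability, so more conclusions can clear the bar for it.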
I think this whole "utilitarian vs. deontological" setup is a misleading false dichotomy. In reality, the way people make moral judgments -- and I'd also say, any moral system that is really usable in practice -- is best modeled neither by utilitarianism nor by deontology, but by virtue ethics.
All of the puzzles listed in this article are clarified once we realize that when people judge whether an act is moral, they ask primarily what sort of person would act that way, and consequently, whether they want to be (or be seen as) this sort of person and how people of this sort should be dealt with. Of course, this judgment is only partly (and sometimes not at all) in the form of conscious deliberation, but from an evolutionary and game-theoretical perspective, it's clear why the unconscious processes would have evolved to judge things from that viewpoint. (And also why their judgment is often covered in additional rationalizations at the conscious level.)
The "fat man" variant of the trolley problem is a good illustration. Try to imagine someone who actually acts that way in practice, i.e. who really goes ahead and kills in cold blood when convinced by utilitarian ...
I'd also add that when it comes to rationalizations, utilitarians should be the last ones to throw stones. In practice, utilitarianism has never been much more than a sophisticated framework for constructing rationalizations for ideological positions on questions where correct utilitarian answers are at worst just undefined, and at best wildly intractable to calculate. (As is the case for pretty much all questions of practical interest.)
The phenomenon of utilitarianism serving as a sophisticated framework for constructing rationalizations for ideological positions exists and is perhaps generic. But there's an analogous phenomenon of virtue ethics being used rhetorically to the same end (think about both sides of the abortion debate). I strongly disagree that utilitarianism is ethically useless in practice. Do you disagree that VillageReach's activity has higher utilitarian expected value per dollar than that of the Make A Wish Foundation?
Yes, there are plenty of situations where game-theoretic dynamics and coordination problems make utilitarian-style analysis useless, but your claim seems overly broad and sweeping.
So I guess the takeaway is that if you care more about your status as a predictable, cooperative, and non-threatening person than about four innocent lives, don't push the fat man.
http://lesswrong.com/lw/v2/prices_or_bindings/
(Also, please try to avoid sentences like "if you care about X more than innocent lives" — that comes across to me as sarcastic moral condemnation and probably tends to emotionally trigger people.)
It's not just about what status you have, but what you actually are. You can view it as analogous to the Newcomb problem, where the predictor/Omega is able to model you accurately enough to predict if you're going to take one or two boxes, and there's no way to fool him into believing you'll take one and then take both. Similarly, your behavior in one situation makes it possible to predict your behavior in other situations, at least with high statistical accuracy, and humans actually have some Omega-like abilities in this regard. If you kill the fat man, this predicts with high probability that you will be non-cooperative and threatening in other situations. This is not necessarily true in the space of all possible minds, but it is true in the space of human minds -- and it's this constraint that gives humans these limited Omega-like abilities for predicting each other's behavior.
(Of course, in real life this is further complicated by all sorts of higher-order strategies that humans employ to outsmart each other, both consciously and unconsciously. But when it comes to the fundamental issues like the conditions under which deadly violence is expected, things are usually simp...
I don't mean to imply that the kind of person who would kill the fat man would also kill for profit. The only observation that's necessary for my argument is that killing the fat man -- by which I mean actually doing so, not merely saying you'd do so -- indicates that the decision algorithms in your brain are sufficiently remote from the human standard that you can no longer be trusted to behave in normal, cooperative, and non-dangerous ways. (Which is then correctly perceived by others when they consider you scary.)
Now, to be more precise, there are actually two different issues there. The first is whether pushing the fat man is compatible with otherwise cooperative and benevolent behavior within the human mind-space. (I'd say even if it is, the latter is highly improbable given the former.) The second one is whether minds that implement some such utilitarian (or otherwise non-human) ethic could cooperate with each other the way humans are able to thanks to the mutual predictability of our constrained minds. That's an extremely deep and complicated problem of game and decision theory, which is absolutely crucial for the future problems of artificial minds and human self-modification, but has little bearing on the contemporary problems of ideology, ethics, etc.
A recent study by folks at the Oxford Centre for Neuroethics suggests that Greene et al.'s results are better explained by appeal to differences in how intuitive/counterintuitive a moral judgment is, rather than differences in how utilitarian/deontological it is. I had a look at the study, and it seems reasonably legit, but I don't have any expertise in neuroscience. As I understand it, their findings suggest that the "more cognitive" part of the brain gets recruited more when making a counterintuitive moral judgment, whether utilitarian or deontological.
Also, it is worth noting that attempts to replicate the differences in response times have failed (this was the result with the Oxford Centre for Neuroethics study as well).
Here is an abstract:
...Neuroimaging studies on moral decision-making have thus far largely focused on differences between moral judgments with opposing utilitarian (well-being maximizing) and deontological (duty-based) content. However, these studies have investigated moral dilemmas involving extreme situations, and did not control for two distinct dimensions of moral judgment: whether or not it is intuitive (immediately compelling to most people) an
A small nitpick, offered without having read the other comments, so please excuse me if this has been mentioned before.
The 5 actions listed under the heading "Emotion and Deontological Judgments" squick me. But they don't disgust me.
The concept of the "squick" differs from the concept of "disgust" in that "squick" refers purely to the physical sensation of repulsion, and does not imply a moral component.
Stating that something is "disgusting" implies a judgement that it is bad or wrong. Stating that something "squicks you" is merely an observation of your reaction to it, but does not imply a judgement that such a thing is universally wrong.
It may be useful to add this to our collective vocabulary. Some might argue that it adds an unnecessary label for too similar a concept, but I think the distinction is useful.
Please let me know if something like this has already been explored.
As expected, subjects who received the wordings they had been primed to feel disgust toward judged the couple's actions as more morally condemnable than other subjects did.
I would just like to point out that this seems like fantastic training material for Rationalist Boot Camp and related projects.
Is your studied, practiced, meticulously crafted rationality enough to overcome these really dumb post-hypnotic suggestions? Surely if you can't convince yourself that your moral disgust is irrational in clear-cut situations like these, your chances of tackling your own biases in more complex and emotionally charged issues are pretty slim.
Obviously there's some disclaimer to be attached when talking about hypnosis, but still it seems like a hell of a starting point.
Indeed, it may turn out to be the case that we can dissolve the debate between deontological intuitions and utilitarian intuitions if we can map the cognitive algorithms that produce them.
Suppose it's an empirical fact that when people engage in consequentialist-type cognition, they typically use a model of the world that is ontologically crazy (for example, one with irreducible mental entities). Would that be an argument against consequentialism in general? In one sense it is, since it means that we can't straightforwardly translate naive consequentialism into a correct moral philosophy, so the consequentialist approach to moral philosophy is at least more difficult than it might first appear. But surely this empirical fact would not "dissolve" the debate with the conclusion that no form of consequentialism can be right, and therefore the whole approach should be abandoned.
Similarly, I suggest that empirical facts about how people typically form deontological moral judgements can't dissolve the debate between consequentialism vs deontology. A deontologist could still claim, for example, that while the typical deontological rules people naively come up with to explain ...
If you believe that killing a fetus is murder, then a woman seeking an abortion pays a doctor to commit murder. Why shouldn't she be convicted of that?
Suppose I believe that soldiers killing others in wartime is murder. Can you think of a reason why I wouldn't press criminal charges? I can. Because it's not illegal. Criminal charges aren't how we punish each other for moral infractions if they don't happen to also be against the law.
the deontological answer (don't kill the baby)
The situation as you presented it admits of nuances that do not make this "the deontological answer". I'm personally inclined to declare babies non-persons anyway, and your scenario even paints the baby as unsalvageable (if your group is found, it will die with the rest of you). This does not make me a non-deontologist. If you made it a screaming eight-year-old too hysterical to shut up; and specified that he or she alone would be safe from the enemy should we be found; and for some reason blocked the option of just knocking the kid out or gagging him or her; then I would be in more of a pickle - but please do note that there is some subtlety here. Deontologists do not just prohibit ...
Luke, I think you often come across as defensive. I think it is difficult to avoid, since you write a lot and thus put yourself out there for people to criticize, and people do often comment in an aggressive fashion, but I think you should be aware of it anyway. I think avoiding seeming defensive would be useful to you, because seeming defensive seems to make discussions more adversarial.
The phrase that gives me that impression here is
So now your objection is to my tone? You've reached DH2 on the disagreement hierarchy. I'll take another look at my tone, but it's not much of a disagreement if we're disagreeing about tone.
I am a neutral observer of this conversation; I've only read the last two comments.
Thanks for your feedback. For whatever reason, this turned out to be one of the most impactful comments I've received this year.
Wow. I've been guilty of this for a while, and not realized it. That "is this action morally wrong" question really struck me.
Myself, I believe that there is an objective morality outside humanity, one that is, as Eliezer would deride the idea, "written on a stone tablet somewhere". This may be an unpopular hypothesis, but accepting it is not a prerequisite for my point. When asked about why certain actions were immoral, I, too, have reached for the "because it harms someone" explanation... an explanation which I just now see as the sin of Avoiding Your Belief's Real Weak Points.
What I really believe, upon much reflection, is that there are two overlapping, yet distinct, classes of "wrong" actions: one we might term "sins", and the other we might term "social transgressions". Social transgressions are the class of acts that are punishable by society, usually those that are harmful. Sins are the class of acts that go against this Immutable Moral Law. Examples are given below, being (in the spirit of full disclosure) the first examples I thought of, and neither the purest examples, nor the most defensible, non-controv...
Voted up for thinking about the problem, self-honesty, and more importantly for speaking up. (I don't quite understand whence the downvotes... just screaming "Boo!" at outgroup beliefs?) [Edit: at the time of this comment, the parent was at -5.]
It seems to me that by "sin" you just mean things that make you go "Squick!". Why do you expect that, if we found the relevant stone tablet, it wouldn't read "Spitting on the floor is wrong. Ew, tuberculosis.", nor "Maximise your score at Tetris.", but "Homosexuality is wrong."?
I'm really having trouble not snickering as I write this. I literally cannot empathise with "Homosexuality is wrong". I can sorta picture "Gay sex? Squick!", but the obvious followup is "Squick isn't a good criterion", not "Homosexuality is wrong". Also, pray tell, what (rather, whom) should genderqueers do?
But now, consider the crying baby dilemma from the final episode of M.A.S.H:
I want to point out these concluding paragraphs from Greene's "The secret joke of Kant's soul":
...Taking these arguments seriously, however, threatens to put us on a second slippery slope (in addition to the one leading to altruistic destitution): How far can the empirical debunking of human moral nature go? If science tells me that I love my children more than other children only because they share my genes (Hamilton, 1964), should I feel uneasy about loving them extra? If science tells me that I am nice to other people only because a disposition to be nice ultimately helped my ancestors spread their genes (Trivers, 1971), should I stop being nice to people? If I care about myself only because I am biologically programmed to carry my genes into the future, should I stop caring about myself? It seems that one who is unwilling to act on human tendencies that have amoral evolutionary causes is ultimately unwilling to be human. Where does one draw the line between correcting the nearsightedness of human moral nature and obliterating it completely?
This, I believe, is among the most fundamental moral questions we face in an age of growing scientific self-knowledge, and I will n
If you believe that killing a fetus is murder, then a woman seeking an abortion pays a doctor to commit murder. Why shouldn't she be convicted of that?
I am not convinced that this post needed to introduce the added complications of legality. It adds another plane of variables under dispute and is not too illuminating.
But this still doesn't answer the question. If you believe that killing a fetus is murder, then a woman seeking an abortion pays a doctor to commit murder. Why don't anti-abortionists want to change the laws so that abortion is considered murder and a woman who has an abortion can be charged with paying a doctor to commit murder? Psychologist Robert Kurzban cites this as a classic case of moral rationalization.
IAWYC, but the obvious alternative explanation in this example is that the person in question does believe that killing a fetus is murder and that the...
I'm having trouble with this post.
First I was like wha because I didn't see a clear way for a judgment to be a rationalization. It took me awhile to figure out what was meant. If anyone else happens to be similarly confused, Greene's explanation is: "Deontology, then, is a kind of moral confabulation. We have strong feelings that tell us in clear and uncertain terms that some things simply cannot be done and that other things simply must be done. But it is not obvious how to make sense of these feelings, and so we, with the help of some especially cre...
My guess is that the appropriate way to dissolve the conflict between utilitarian and deontological moral philosophy is to see deontological rules as heuristics. I think we could design an experiment in which utilitarians get emotional and inconsistent, and deontologists come off as the sober thinkers, just by making it a situation where adoption of a simple consistent heuristic is superior to the attempt to weigh up unknown probabilities and unknown bads.
Possibly relevant: "The Price of Your Soul: Neural Evidence for the Non-Utilitarian Representation of Sacred Values":
...Deontological theory suggests that sacred values are processed based on rights and wrongs irrespective of outcomes, while utilitarian theory suggests they are processed based on costs and benefits of potential outcomes, but which mode of processing an individual naturally uses is unknown. The study of decisions over sacred values is difficult because outcomes cannot typically be realized in a laboratory, and hence little is know
I don't see how your conclusion follows from your data. I could just as easily use the same model to argue that our morality is deontological and it is the utilitarian judgements that are mere moral rationalizations.
I have observed that utilitarians will attempt to fudge the numbers to make the utility calculations come out the way they "should", inventing large amounts of anti-epistemology in the process (see the current debate on race and intelligence for an example of this process in action). A better approach might be to admit that our morals are partially deontological and that certain things are wrong no matter how the calculations come out.
While I can certainly say that Greene's assertion that deontological ethics is shrouded in rationalizations (is this a fair summary?) rings true to me, I'd reserve judgement until I see a blind study showing that utilitarian or pragmatic ethics can be experimentally distinguished from the deontological one based on some unambiguous rationalization quotient.
I suspect that if we dig deep enough, we find Kant's deontological moral imperatives in any ethics. The rules themselves certainly depend on the ethical system. For example, EY clearly believes in a ...
Studies of individual differences also seem to support the dual-process theory. Individuals who are (1) high in "need for cognition" and low in "faith in intuition", or (2) score well on the Cognitive Reflection Test, or (3) have unusually high working memory capacity... all give more utilitarian judgments.10
This sounds like it would predict gender differences in responses. I'm guessing such utilitarian vs deontological differences are observed in the dilemmas?
Update: Joshua Greene & company have published a reply to Kahane et al. (2011).
Also, Greene's Moral Tribes is quite good.
You quote Greene as writing, "We have strong feelings that tell us in clear and uncertain terms that some things simply cannot be done and that other things simply must be done."
Shouldn't uncertain read certain?
In 2007, Chris Matthews of Hardball interviewed David O'Steen, executive director of a pro-life organization. Matthews asked:
O'Steen replied that "we have never sought criminal penalties against a woman," which isn't an answer but a restatement of the fact that prompted the question. When pressed, he added that we don't know "how she's been forced into this." When pressed again, O'Steen abandoned these responses and tried to give a consequentialist answer. He claimed that implementing "civil penalties" and taking away the "financial incentives" of abortion doctors would more successfully "protect unborn children."
But this still doesn't answer the question. If you believe that killing a fetus is murder, then a woman seeking an abortion pays a doctor to commit murder. Why don't anti-abortionists want to change the laws so that abortion is considered murder and a woman who has an abortion can be charged with paying a doctor to commit murder? Psychologist Robert Kurzban cites this as a classic case of moral rationalization.2
Pro-life demonstrators in Illinois were asked a similar question: "If [abortion] was illegal, should there be a penalty for the women who get abortions illegally?" None of them (on the video) thought that women who had illegal abortions should be punished as murderers, an ample demonstration of moral rationalization. And I'm sure we can all think of examples where it looks like someone has settled on an intuitive moral judgment and then invented rationalizations later.3
More controversially, some have suggested that rule-based deontological moral judgments generally tend to be rationalizations. Perhaps we can even dissolve the debate between deontological intuitions and utilitarian intuitions if we can map the cognitive algorithms that produce them.
Long-time deontologists and utilitarians may already be up in arms to fight another war between Blues and Greens, but these are empirical questions. What do the scientific studies suggest?
Utilitarian and Deontological Processes
A runaway trolley is about to run over and kill five people, but you can save them by hitting a switch that will put the trolley on a side track where it will only kill one person. Do you throw the switch? When confronted with this switch dilemma, most people say it is morally good to divert the trolley,4 thereby achieving the utilitarian 'greater good'.
Now, consider the footbridge dilemma. Again, a runaway trolley threatens five people, and the only way to save them is to push a large person off a footbridge onto the tracks, which will stop the trolley but kill the person you push. (Your body is too small to stop the trolley.) Do you push the large person off the bridge? Here, most people say it's wrong to trade one life for five, allowing a deontological commitment to individual rights to trump utilitarian considerations of the greater good.
Researchers presented subjects with a variety of 'impersonal' dilemmas (including the switch dilemma) and 'up-close-and-personal' dilemmas (including the footbridge dilemma). Personal dilemmas preferentially engaged brain areas associated with emotion. Impersonal dilemmas preferentially engaged the regions of the brain associated with working memory and cognitive control.5
This suggested a dual-process theory of moral judgment, according to which the footbridge dilemma elicits a conflict between emotional intuition ("you must not push people off bridges!") and utilitarian calculation ("pushing the person off the bridge will result in the fewest deaths"). In the footbridge case, emotional intuition wins out in most people.
But now, consider the crying baby dilemma from the final episode of M.A.S.H: you and several others are hiding from enemy soldiers when a baby begins to cry. If the baby is not silenced, the soldiers will find and kill everyone, including the baby; the only way to silence it is to smother it. Do you smother the baby to save the group?
Here, people take a long time to answer, and they show no consensus in their answers. If the dual-process theory of moral judgment is correct, then people considering the crying baby dilemma should exhibit increased activity in the ACC (a region associated with response conflict), and in regions associated with cognitive control (for overriding a potent emotional response with utilitarian calculation). Also, those who eventually choose the characteristically utilitarian answer (save the most lives) over the characteristically deontological answer (don't kill the baby) should exhibit comparatively more activity in brain regions associated with working memory and cognitive control. All three predictions turn out to be true.7
Moreover, patients with two different kinds of dementia or lesions that cause "emotional blunting" are disproportionately likely to approve of utilitarian action in the footbridge dilemma,8 and cognitive load manipulations that keep working memory occupied slow down utilitarian judgments but not deontological judgments.9
Studies of individual differences also seem to support the dual-process theory. Individuals who are (1) high in "need for cognition" and low in "faith in intuition", or (2) score well on the Cognitive Reflection Test, or (3) have unusually high working memory capacity... all give more utilitarian judgments.10
This leads us to Joshua Greene's bold claim:
Cognition and Emotion
Greene explains the difference between 'cognitive' and 'emotional' processes in the brain (though both involve information processing, and so are 'cognitive' in a broader sense):
Since we are concerned with two kinds of moral judgment (deontological and consequentialist) and two kinds of neurological process (cognitive and emotional), we have four empirical possibilities:
We have already seen the neuroscientific evidence in favor of Greene's view. Now, let us turn to further evidence from the work of Jon Haidt.
Emotion and Deontological Judgments
Haidt & colleagues (1993) presented subjects with a sequence of harmless actions, for example:
For each action, subjects were asked questions like: Is this action wrong? Why? Does it hurt anyone? If someone did this, would it bother you? Greene summarizes the results:
If you're following along, it may not surprise you that emotions seemed to be driving the deontological condemnation of harmless actions. Moreover, both education and adulthood were correlated with more consequentialist judgments. (Cognitive control of basic emotional reactions is something that develops during adolescence.12) Greene reminds us:
But there is more direct evidence of the link between emotion and the deontological condemnation of harmless actions.
Wheatley & Haidt (2005) gathered hypnotizable subjects and gave some of them a hypnotic suggestion to feel disgust upon reading the word 'often', while giving others a hypnotic suggestion to feel disgust upon reading the word 'take'. The researchers then showed these subjects a variety of scenarios, some of them involving no harm. (For example, two second cousins have a relationship in which they "take weekend trips to romantic hotels" or else "often go on weekend trips to romantic hotels".) As expected, subjects who received the wordings they had been primed to feel disgust toward judged the couple's actions as more morally condemnable than other subjects did.
In a second experiment, Wheatley and Haidt used the same technique and had subjects respond to a scenario in which a person did nothing remotely wrong: a student "often picks" or "tries to take up" broad topics of discussion at meetings. Still, many subjects who were given the matching hypnotic suggestion rated the student's actions as morally wrong. When asked why, they invented rationalizations like "It just seems like he’s up to something" or "It just seems so weird and disgusting" or "I don’t know [why it’s wrong], it just is."
In other studies, researchers implemented a disgust condition by placing some subjects at a dirty desk or in the presence of fart spray. As before, those in the disgust condition were more likely to rate harmless actions as morally wrong than other subjects were.13
Finally, consider that the dual-process theory of moral judgment predicts that deontological judgments will be quicker than utilitarian ones, because deontological judgments use emotional and largely unconscious brain modules while utilitarian judgments require slow, conscious calculation. Suter & Hertwig (2011) presented subjects with a variety of moral dilemmas and prodded them either to give their judgments quickly or to take their time and deliberate thoroughly. As predicted, faster responses were associated with more deontological judgments.
Summing Up
We are a species prone to emotional moral judgment, and to rationalization ('confabulation'). And, Greene writes,
Of course, utilitarian moral judgment is not emotionless. Emotion is probably what leads us to label harm as a 'bad' thing, for example. But utilitarian moral judgment is, as we've seen, particularly demanding of 'cognitive' processes: calculation, the weighing of competing concerns, the adding and averaging of value, and so on. Utilitarian moral judgment uses the same meso-limbic regions that track a stimulus' reward magnitude, reward probability, and expected value.14
This does not prove the case that deontological moral judgments are usually rationalizations. But many lines of converging evidence make this a decent hypothesis. And now we can draw our neural map:15
And up until March 18th of this year, Greene had a pretty compelling case for his position that deontological judgments are generally just rationalizations.
And then, Guy Kahane et al. (2011) threw Greene's theory into doubt by testing separately for the content (deontological vs. utilitarian) and the intuitiveness (intuitive vs. counterintuitive) of moral judgments. The authors summarize their results:
So we'll have to wait for more studies to unravel the mystery of whether deontological moral judgments are generally rationalizations.
By email, Greene told me he suspected Kahane's 'alternative theory' wasn't much of an alternative to what he (Greene) was proposing in the first place. In his paper, Greene discussed the passage where Kant says it's wrong to lie to prevent a madman from killing someone, and cites this as an example of a case in which a deontological judgment might be more controlled, while the utilitarian judgment is more automatic. Greene's central claim is that when there's a conflict between rights and duties on the one hand, and promoting the greater good on the other, it's typically controlled cognition on the utilitarian side and emotional intuition on the other.
Update: Greene's full reply to Kahane et al. is now available.
But even if Greene's theory is right, humans may still need to use deontological rules because we run on corrupted hardware.
Notes
1 Hardball for November 13, 2007. Here is the transcript.
2 Kurzban (2011), p. 193.
3 Also see Jon Haidt's unpublished manuscript on moral dumbfounding, and Hirstein (2005).
4 Petrinovich et al. (1993); Petrinovich & O’Neill (1996).
5 Greene et al. (2001, 2004).
6 Greene (2009).
7 Greene et al. (2004).
8 Mendez et al. (2005); Koenigs et al. (2007); Ciaramelli et al. (2007).
9 Greene et al. (2008).
10 Bartels (2008); Hardman (2008); Moore et al. (2008).
11 The rest of the Joshua Greene quotes from this article are from Greene (2007).
12 Anderson et al. (2001); Paus et al. (1999); Steinberg & Scott (2003).
13 Schnall et al. (2004); Baron & Thomley (1994).
14 See Cushman et al. (2010).
15 From Greene (2009).
References
Anderson, Anderson, Northam, Jacobs, & Catroppa (2001). Development of executive functions through late childhood and adolescence in an Australian sample. Developmental Neuropsychology, 20: 385-406.
Baron & Thomley (1994). A Whiff of Reality: Positive Affect as a Potential Mediator of the Effects of Pleasant Fragrances on Task Performance and Helping. Environment and Behavior, 26: 766-784.
Bartels (2008). Principled moral sentiment and the flexibility of moral judgment and decision making. Cognition, 108: 381-417.
Ciaramelli, Muccioli, Ladavas, & di Pellegrino (2007). Selective deficit in personal moral judgment following damage to ventromedial prefrontal cortex. Social Cognitive and Affective Neuroscience, 2: 84-92.
Cushman, Young, & Greene (2010). Multi-system moral psychology. In Doris (ed.), The Moral Psychology Handbook (pp. 47-71). Oxford University Press.
Greene, Sommerville, Nystrom, Darley, & Cohen (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293: 2105-2108.
Greene, Nystrom, Engell, Darley, & Cohen (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44: 389-400.
Greene (2007). The secret joke of Kant's soul. In Sinnott-Armstrong (ed.), Moral Psychology Vol. 3: The Neuroscience of Morality (pp. 35-79). MIT Press.
Greene, Morelli, Lowenberg, Nystrom, & Cohen (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107: 1144-1154.
Greene (2009). The cognitive neuroscience of moral judgment. In Gazzaniga (ed.), The Cognitive Neurosciences, Fourth Edition (pp. 987–999). MIT Press.
Haidt, Koller, & Dias (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65: 613-628.
Hardman (2008). Moral dilemmas: Who makes utilitarian choices. In Hare (ed.), Hare Psychopathy Checklist--Revised (PCL-R): 2nd Edition. Multi-Health Systems, Inc.
Haidt (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108: 814-834.
Hirstein (2005). Brain Fiction: Self-Deception and the Riddle of Confabulation. MIT Press.
Hume (1740). A Treatise of Human Nature.
Kahane, Wiech, Shackel, Farias, Savulescu, & Tracey (2011). The neural basis of intuitive and counterintuitive moral judgment. Social Cognitive & Affective Neuroscience.
Kant (1785). Groundwork of the Metaphysics of Morals.
Koenigs, Young, Cushman, Adolphs, Tranel, Damasio, & Hauser (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature, 446: 908–911.
Kohlberg (1971). From is to ought: How to commit the naturalistic fallacy and get away with it in the study of moral development. In Mischel (ed.), Cognitive development and epistemology (pp. 151–235). Academic Press.
Kurzban (2011). Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. Princeton University Press.
Mendez, Anderson, & Shapira (2005). An investigation of moral judgment in fronto-temporal dementia. Cognitive and Behavioral Neurology, 18: 193–197.
Moore, Clark, & Kane (2008). Who shalt not kill? Individual differences in working memory capacity, executive control, and moral judgment. Psychological Science, 19: 549-557.
Paus, Zijdenbos, Worsley, Collins, Blumenthal, Giedd, Rapoport, & Evans (1999). Structural maturation of neural pathways in children and adolescents: In vivo study. Science, 283: 1908-1911.
Petrinovich, O'Neill, & Jorgensen (1993). An empirical study of moral intuitions: Toward an evolutionary ethics. Journal of Personality and Social Psychology, 64: 467-478.
Petrinovich & O’Neill (1996). Influence of wording and framing effects on moral intuitions. Ethology and Sociobiology, 17: 145-171.
Schnall, Haidt, & Clore (2004). Irrelevant disgust makes moral judgment more severe, for those who listen to their bodies. Unpublished manuscript.
Smith (1759). The Theory of Moral Sentiments.
Steinberg & Scott (2003). Less guilty by reason of adolescence: Developmental immaturity, diminished responsibility, and the juvenile death penalty. American Psychologist, 58: 1009-1018.
Suter & Hertwig (2011). Time and moral judgment. Cognition, 119: 454-458.
Valdesolo & DeSteno (2006). Manipulations of emotional context shape moral judgment. Psychological Science, 17: 476-477.
Wheatley & Haidt (2005). Hypnotically induced disgust makes moral judgments more severe. Psychological Science, 16: 780-784.