Recently I summarized Joshua Greene's attempt to 'explain away' deontological ethics by revealing the cognitive algorithms that generate deontological judgments and showing that the causes of our deontological judgments are inconsistent with normative principles we would endorse.

Mark Alfano has recently done the same thing with virtue ethics (which generally requires a fairly robust theory of character trait possession) in his March 2011 article on the topic: 

I discuss the attribution errors, which are peculiar to our folk intuitions about traits. Next, I turn to the input heuristics and biases, which — though they apply more broadly than just to reasoning about traits — entail further errors in our judgments about trait-possession. After that, I discuss the processing heuristics and biases, which again apply more broadly than the attribution errors but are nevertheless relevant to intuitions about traits... I explain what the biases are, cite the relevant authorities, and draw inferences from them in order to show their relevance to the dialectic about virtue ethics. At the end of the article, I evaluate knowledge-claims about virtues in light of these attribution biases, input heuristics and biases, and processing heuristics and biases. Every widely accepted theory of knowledge must reject such knowledge-claims when they are based merely on folk intuitions.

An overview of the 'situationist' attack on character trait possession can be found in Doris' book Lack of Character.

24 comments

Alfano says:

At the end of the article, I evaluate knowledge-claims about virtues in light of these attribution biases, input heuristics and biases, and processing heuristics and biases. Every widely accepted theory of knowledge must reject such knowledge-claims when they are based merely on folk intuitions.

This sounds absurd on its face. If Alfano finds out that someone has a history of cheating and stealing, will he avoid having any business with this person, expecting similar behavior in the future, or will he "reject such knowledge-claims... based merely on folk intuitions"?

Are his claims really so silly, or am I missing something?

If a person has a history of cheating in business, it might be that the person lies habitually and easily whenever on the phone, where he or she can't see who is on the other end. The person might be solidly in the middle of the bell curve for everything but predilection to dehumanization. (Scholarship FTW.)

Alternatively, the person might be in a unique situation, such as being blind, isolated, and requiring a reader to speak received emails aloud in a Stephen Hawking voice, in which anyone would experience dehumanization sufficient to make them a cheater. (I'm not claiming this is the case, just that some similarly plausible set-up could produce the behavior, just as the time since judges last ate affects their sentencing.)

So virtue ethics breaks down either way: either people's uniqueness lies in their responses to biases, or people are overwhelmingly, chaotically directed by features of their environments.

Either way, cheaters and thieves are likely to cheat or steal again.

If I can look someone in the face, I can usually detect lying. Voice only, I can often detect lying. Text only, I can sometimes detect lying.

Thus if a person is honest in proportion to the bandwidth, this requires no more psychological explanation than the fact that burglars are apt to burgle at night.

gwern:

If I can look someone in the face, I can usually detect lying. Voice only, I can often detect lying. Text only, I can sometimes detect lying.

Is that by the same way you can divine people's true natures?

  • the Wizards Project tested 20,000 people to come up with 50 who panned out
  • an aggregation of techniques offered no better than 70% accuracy
  • people with no instructions did little better than chance in distinguishing lies and truth

But I suppose these results (and the failings of mechanical lie detectors) are just unscientific research, which pale next to the burning truth of your subjective conviction that you "can usually detect lying".
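
A rough back-of-the-envelope on those figures (a sketch; the base rate comes from the 50-in-20,000 result above, while the likelihood ratio for a confident self-report is my own generous assumption):

```python
# Bayes update: how much should "I can usually detect lying" move us
# toward thinking someone is one of the rare reliable lie-detectors?
prior = 50 / 20_000          # ~0.25% base rate of "wizards" among those tested
likelihood_ratio = 5         # ASSUMPTION: confident self-report is 5x more
                             # common among wizards than non-wizards

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"prior:     {prior:.2%}")      # 0.25%
print(f"posterior: {posterior:.2%}")  # ~1.24%, still very unlikely
```

Even granting the self-report a generous likelihood ratio, the posterior stays near one percent.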

What was the self-assuredness of the 20,000? What was the self-assuredness of the 50?

What was the ability of the top 100, or 1,000, as against the top 50?

gwern:

Does any of that really matter? This is the same person who thinks a passel of cognitive biases doesn't apply to him and that the whole field is nonsense trumped by unexamined common sense. (Talk about 'just give up already'.)

If the top 200 lie-detectors were among the 400 most confident people at the outset, I would think that relevant.

gwern:

And how likely is that, really?

This is the sort of desperate dialectics verging on logical rudeness I find really annoying, trying to rescue a baloney claim by any possibility. If you seriously think that, great - go read the papers and tell me and I will be duly surprised if the human lie-detectors are the best calibrated people in that 20,000 group and hence that factoid might apply to the person we are discussing.

Seems like homework for the person making the claim; I'm just pointing out that it exists.

I will be duly surprised if the human lie-detectors are the best calibrated people

Nit-pick: they could be the worst calibrated and what I said would still hold, provided the others estimated themselves to be suitably bad at it.

"According to most versions of virtue ethics, an agent’s primary ethical goal is to cultivate the virtues. The fully virtuous person possesses all the virtues, and so is disposed to do the appropriate thing in all circumstances. [...]

Yet skeptics such as Doris (1998, 2002) and Harman (1999, 2000, 2001, 2003, 2006) argue that situational influences swamp dispositional ones, rendering them predictively and explanatorily impotent. And in both science and philosophy, it is but a single step from such impotence to the dustbin.

We can precisify the skeptics’ argument in the following way. If someone possesses a character trait like a virtue, she is disposed to behave in trait-relevant ways in both actual and counterfactual circumstances. However, exceedingly few people—even the seemingly virtuous—would behave in virtue-relevant ways in both actual and counterfactual circumstances. Seemingly (and normatively) irrelevant situational features like ambient smells, ambient sounds, and degree of hurry overpower whatever feeble dispositions inhere in people’s moral psychology, making them passive pawns of forces they themselves typically do not recognize or consider.

Are individual dispositions really so frail? A firestorm followed the publication of Doris’s and Harman’s arguments that virtue ethics is empirically inadequate. If they are right, virtue ethics is in dire straits: it cannot reasonably recommend that people acquire the virtues if they are not possible properties of “creatures like us”

This seems obviously false to me. It may well be true that, in general, situational influences swamp dispositional ones. But that doesn't mean it's pointless to try to cultivate virtue and teach yourself to behave virtuously. You might not always succeed, but as long as the effect of dispositional influences isn't entirely negligible, you will succeed more often than if you hadn't cultivated virtue.

You could use the same reasoning to argue that consequentialism is in dire straits: Wanting to act in a consequentialist manner is a human disposition, but situational influences swamp dispositional ones. Thus, consequentialism cannot reasonably recommend that people act in a consequentialist manner, because that is not a possible property of "creatures like us".
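
To make the first paragraph's point concrete, here is a minimal sketch under assumed effect sizes (the +0.3 dispositional boost and the unit-variance situational noise are illustrative numbers, not estimates from the literature): even when situational noise swamps a weak cultivated disposition, the disposition still raises how often the virtuous act gets done.

```python
import random

# Toy model: an act is "virtuous" when disposition + situational noise
# clears a threshold.  The noise (sd = 1.0) swamps the assumed
# dispositional boost (+0.3), yet the boost still raises the hit rate.
random.seed(0)

def virtuous_rate(disposition, trials=100_000, threshold=1.0, noise_sd=1.0):
    hits = sum(disposition + random.gauss(0, noise_sd) > threshold
               for _ in range(trials))
    return hits / trials

print(f"no cultivation:   {virtuous_rate(0.0):.1%}")   # ~16%
print(f"with cultivation: {virtuous_rate(0.3):.1%}")   # ~24%
```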

Alfano is entirely too strict about knowledge, though he rests comfortably in the philosophical landscape there. "Can we know on the basis of folk intuitions that we have traits?" isn't as interesting a question when seen in these terms. He does not address the question "Are our folk intuitions about traits strong Bayesian evidence for their existence?", which would be required to dismiss consideration of folk intuitions entirely, as he does. Thus, his claim "We need pay no heed to any attempt to defend virtue ethics that appeals only to intuitions about character traits" has not been proven satisfactorily.

Nonetheless, it's very nice for him that he's discovered that there are biases. Anyone who believes that virtue ethics is true should certainly be aware of the relevant ones.

I submit that the form of his argument could be used just as well against any knowledge claim using those definitions and picking some relevant biases.
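
To put that "strong Bayesian evidence" question in concrete terms, here is a minimal sketch (the probabilities are assumptions chosen to reflect Alfano's point that folk intuitions would attribute traits whether or not people have them):

```python
def posterior(prior, p_intuition_given_traits, p_intuition_given_no_traits):
    """Bayes' rule for P(robust traits | we intuit traits)."""
    joint_traits = prior * p_intuition_given_traits
    joint_no_traits = (1 - prior) * p_intuition_given_no_traits
    return joint_traits / (joint_traits + joint_no_traits)

prior = 0.5                          # agnostic prior on robust traits
print(posterior(prior, 0.95, 0.90))  # ~0.51: likelihood ratio near 1, weak evidence
print(posterior(prior, 0.95, 0.30))  # ~0.76: what stronger evidence would look like
```

If the biases Alfano catalogues really do make trait-intuitions nearly as likely in a trait-free world as in a trait-rich one, the intuitions are weak evidence, which is different from being no evidence at all.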

Some excerpts:

Why do we have so many trait terms and feel so comfortable navigating the language of traits if actual correlations between traits and individual actions (typically <0.30, as Mischel 1968 persuasively argues)1 are undetectable without the use of sophisticated statistical methodologies (Jennings et al. 1982)?

1 See also Mischel and Peake (1982). Epstein (1983), a personality psychologist, admits that predicting particular behaviors on the basis of trait variables is “usually hopeless.” Fleeson (2001, p. 1013), an interactionist, likewise endorses the 0.30 ceiling.

To answer this question, situationists invoke a veritable pantheon of gods of ignorance and error. Some, like the fundamental attribution error, the false consensus effect, and the power of construal, pertain directly to trait attributions. Others are more general cognitive heuristics and biases, whose relevance to trait attributions requires explanation. These more general heuristics and biases can be classed under the headings of input heuristics and biases and processing heuristics and biases. Input heuristics and biases include selection bias, availability bias, availability cascade, and anchoring. Processing heuristics and biases include disregard of base rates, disregard of regression to the mean, and confirmation bias.

According to Jones and Nisbett (1971, p. 93), the unique breakdown of the fundamental attribution error occurs when we explain what we ourselves have done: instead of underemphasizing the influence of environmental factors, we overemphasize them. Especially when the outcome is negative, we attribute our actions to external factors. This bias seems to tell against situationism, since it suggests that we can recognize the power of situations at least in some cases. However, the existence of such an actor-observer bias has recently come in for trenchant criticism from Malle (2006), whose meta-analysis of three decades worth of data fails to demonstrate a consistent actor-observer asymmetry.2 Malle’s meta-analysis only strengthens the case for the fundamental attribution error. Whereas Jones & Nisbett had argued that it admitted of certain exceptions at least in first-personal cases, Malle shows their exceptionalism to be ungrounded.

In one variation of the [Milgram] obedience experiment, a second experimenter played the role of the victim and begged to be released from the electrodes. Participants in this version of the study had to disagree with one of the experimenters, so a desire to avoid embarrassment and save face would give them no preference for obedience to one experimenter over the other. Nevertheless, in this condition 65% of the participants were maximally obedient to the experimenter in authority, shocking the other experimenter with what they took to be 450 volts three times in a row while he slumped over unconscious (Milgram, p. 95).

When people use the availability heuristic, they take the first few examples of a type that come to mind as emblematic of the whole population. This process can lead to surprisingly accurate conclusions (Gigerenzer 2007, p. 28), but it can also lead to preposterously inaccurate guesses (Tversky and Kahneman 1973, p. 241). We remember the one time Maria acted benevolently and forget all the times when she failed to show supererogatory kindness, leading us to infer that she must be a benevolent person. Since extremely virtuous and vicious actions are more memorable than ordinary actions, they will typically be the ones we remember when we consider whether someone possesses a trait, leading to over-attribution of both virtues and vices.

In her defense of virtue ethics, Kupperman (2001, p. 243) mentions word-of-mouth testimony that “the one student who, when the Milgram experiment was performed at Princeton, walked out at the start was also the person who in Viet Nam blew the whistle on the My Lai massacre.” Such tales are comforting: perhaps a few people really are compassionate in all kinds of circumstances, whether the battlefield or the lab. But while anecdotes about character may be soothing, it should be clear that anecdotal evidence is at best skewed and biased, as well as prone to misinterpretation. We should focus on the fact that most experimental subjects are easily swayed by normatively irrelevant factors, not the fact that one person might be virtuous.

The existence of these biases does not prove that no one has traits, nor does it demonstrate that no arguments could warrant the conclusion that people have traits. What it instead shows is that regardless of whether people have traits, folk intuitions would lead us to attribute traits to them.

It should be noted here that many psychologists, such as Fleeson (2001), do believe in traits, and not merely on the basis of folk intuitions. It is beyond the scope of this article to assess the success of their arguments and the extent to which those arguments apply to virtues (which are a distinctive subspecies of traits individuated not merely causally but by their characteristic reasons).
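
The sub-0.30 trait–behavior correlations cited above are worth making concrete. A quick simulation (my own sketch, not from the paper: it just generates normally distributed trait and behavior scores with a true correlation of 0.30) shows how little such a correlation buys when predicting a single act:

```python
import math
import random

# Trait score and one trait-relevant behavior, correlated at r = 0.30.
# If we predict "above-median behavior" whenever the trait is above its
# median, how often are we right?
random.seed(0)
r = 0.30
n = 100_000

correct = 0
for _ in range(n):
    trait = random.gauss(0, 1)
    behavior = r * trait + math.sqrt(1 - r**2) * random.gauss(0, 1)
    correct += (trait > 0) == (behavior > 0)

print(f"prediction accuracy at r = 0.30: {correct / n:.1%}")  # ~60%, vs. 50% by chance
print(f"behavioral variance explained:   {r**2:.0%}")         # 9%
```

A hit rate of roughly 60% against a 50% baseline is exactly the kind of signal that is easy to miss without aggregating over many observations, which is the point of the Jennings et al. citation.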

Jack:

It seems plausible that our capacity for moral judgment might mirror our capacity for belief formation in that it includes crude but efficient algorithms like what we call cognitive biases in belief formation. But I don't think it follows that we can make our moral judgments 'more accurate' by removing moral 'biases' in favor of some idealized moral formula. What our crude but efficient moral heuristics are approximating is evolutionarily advantageous strategies for our memes and genes. But I don't really care about replicating the things that programmed me-- I just care about what they programmed me to care about.

In belief formation there are likely biases that have evolutionary benefits too- it is easier to deceive others if you sincerely believe you will cooperate when you are in a position to defect without retaliation, for example. But we have an outside standard to check our beliefs against-- experience. We know after many iterations of prediction and experiment which reasons for beliefs are reliable and which are not. Obviously, a good epistemology is a lot trickier than I've made it sound but it seems like, in principle, we can make our beliefs more accurate by checking them against reality.

I can't see an analogous standard for moral judgments. This wouldn't be a big problem if our brains were cleanly divided into value-parts and belief-parts. We could then just fix the belief parts and keep the crude-but-hey-that's-how-evolution-made-us value parts. But it seems like our values and beliefs are all mixed up in our cognitive soup. We need a sieve.

But I don't really care about replicating the things that programmed me-- I just care about what they programmed me to care about.

Tangential public advisory: I suspect that it is a bad cached pattern to focus on the abstraction where it is memes and genes that created you rather than, say, your ecological-developmental history or your self two years ago or various plausibly ideal futures you would like to bring about &c. In the context of decision theory I'll sometimes talk about an agent inheriting the decision policy of its creator process which sometimes causes people to go "well I don't want what evolution wants, nyahhh" which invariably makes me facepalm repeatedly in despair.

Assuming the evidence favors the false consensus effect, we may explain its relevance to the dispute about virtues by pointing out that, since people tend to make such rash inferences, they are prone to over-attributing traits. They could reason as follows: “Well, I helped these strange fellows advertise for Joe’s Bar, so almost anyone would do the same. I guess most people are helpful!” Such an inference, however, is at best dubious.

I do not see how the false consensus effect advances the argument.

A LW post on an example used in the common, stronger argument against virtue ethics: that we have no character traits at all. Stronger in that it makes more ambitious claims, not because it is more likely to be true.