An attempt to 'explain away' virtue ethics
Recently I summarized Joshua Greene's attempt to 'explain away' deontological ethics by revealing the cognitive algorithms that generate deontological judgments and showing that the causes of our deontological judgments are inconsistent with normative principles we would endorse.
Mark Alfano has recently done the same thing with virtue ethics (which generally requires a fairly robust theory of character trait possession) in his March 2011 article on the topic:
I discuss the attribution errors, which are peculiar to our folk intuitions about traits. Next, I turn to the input heuristics and biases, which — though they apply more broadly than just to reasoning about traits — entail further errors in our judgments about trait-possession. After that, I discuss the processing heuristics and biases, which again apply more broadly than the attribution errors but are nevertheless relevant to intuitions about traits... I explain what the biases are, cite the relevant authorities, and draw inferences from them in order to show their relevance to the dialectic about virtue ethics. At the end of the article, I evaluate knowledge-claims about virtues in light of these attribution biases, input heuristics and biases, and processing heuristics and biases. Every widely accepted theory of knowledge must reject such knowledge-claims when they are based merely on folk intuitions.
An overview of the 'situationist' attack on character trait possession can be found in Doris' book Lack of Character.
Comments (24)
This seems obviously false to me. It may well be true that, in general, situational influences swamp dispositional ones. But that doesn't mean that it's pointless to try to cultivate virtue and teach yourself to behave virtuously. You might not always succeed, but as long as the effect of dispositional influences isn't entirely negligible, you will succeed more often than if you didn't cultivate virtue.
You could use the same reasoning to argue that consequentialism is in dire straits: Wanting to act in a consequentialist manner is a human disposition, but situational influences swamp dispositional ones. Thus, consequentialism cannot reasonably recommend that people act in a consequentialist manner, because that is not a possible property of "creatures like us".
Alfano says:
This sounds absurd on its face. If Alfano finds out that someone has a history of cheating and stealing, will he avoid having any business with this person, expecting similar behavior in the future, or will he "reject such knowledge-claims... based merely on folk intuitions"?
Are his claims really so silly, or am I missing something?
If a person has a history of cheating in business, it might be that the person habitually and easily lies whenever on the phone and he or she can't see who is on the other end. The person might be solidly in the middle of the bell curve for everything but predilection to dehumanization. (Scholarship FTW.)
Alternatively, the person might have a unique situation, such as being blind, isolated, and requiring a reader to speak out received emails in Stephen-Hawking voice, in which anyone would experience dehumanization sufficient to make them a cheater. (I'm not claiming this is the case, just that some similarly plausible set-up would cause such actions, just as the time since judges last ate affects their sentencing decisions.)
So virtue ethics breaks down either because people's uniqueness lies in their responses to biases, or because people are overwhelmingly, chaotically directed by features of their environments.
Either way, cheaters and thieves are likely to cheat or steal again.
If I can look someone in the face, I can usually detect lying. Voice only, I can often detect lying. Text only, I can sometimes detect lying.
Thus if a person is honest in proportion to the bandwidth, this requires no more psychological explanation than the fact that burglars are apt to burgle at night.
Is that the same way you can divine people's true natures?
But I suppose these results (and the failings of mechanical lie detectors) are just unscientific research, which pale next to the burning truth of your subjective conviction that you "can usually detect lying".
What was the self-assuredness of the 20,000? What was the self-assuredness of the 50?
What was the ability of the top 100, or 1,000, as against the top 50?
Does any of that really matter? This is the same person who thinks a passel of cognitive biases doesn't apply to him and that the whole field is nonsense trumped by unexamined common sense. (Talk about 'just give up already'.)
If the top 200 lie-detectors were among the 400 most confident people at the outset, I would think that relevant.
And how likely is that, really?
This is the sort of desperate dialectics verging on logical rudeness that I find really annoying: trying to rescue a baloney claim by appeal to any bare possibility. If you seriously think that, great - go read the papers and tell me, and I will be duly surprised if the human lie-detectors are the best calibrated people in that 20,000 group, such that the factoid might apply to the person we are discussing.
That seems like homework for the person making the claim; I'm just pointing out that the possibility exists.
Nit-pick: they could be the worst calibrated and what I said would still hold, provided the others estimated themselves as suitably bad at it.
Alfano is entirely too strict about knowledge, though he rests comfortably in the philosophical landscape there. "Can we know on the basis of folk intuitions that we have traits?" isn't as interesting a question when seen in these terms. He does not address the question "Are our folk intuitions about traits strong Bayesian evidence for their existence?", which he would need to answer in order to dismiss consideration of folk intuitions entirely, as he does. Thus, his claim "We need pay no heed to any attempt to defend virtue ethics that appeals only to intuitions about character traits" has not been proven satisfactorily.
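To make the Bayesian-evidence point concrete, here is a minimal sketch of how even a weakly reliable folk intuition should shift credence in a trait via Bayes' rule. All the probabilities are made up purely for illustration; nothing here comes from Alfano's article.

```python
def posterior(prior, p_intuition_given_trait, p_intuition_given_no_trait):
    """Credence in the trait after observing the intuition (Bayes' rule in odds form)."""
    odds = (prior / (1 - prior)) * (p_intuition_given_trait / p_intuition_given_no_trait)
    return odds / (1 + odds)

# Hypothetical numbers: suppose the intuition "this person is honest"
# fires 60% of the time for genuinely honest people, but also 40% of
# the time for dishonest ones -- a weak, bias-ridden signal.
p = posterior(prior=0.5,
              p_intuition_given_trait=0.6,
              p_intuition_given_no_trait=0.4)
print(round(p, 3))  # 0.6
```

Even on these deliberately pessimistic assumptions, the intuition moves the posterior from 0.5 to 0.6 - weak evidence, but not zero, which is why "doesn't rise to knowledge" and "can be paid no heed" come apart.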
Nonetheless, it's very nice for him that he's discovered that there are biases. Anyone who believes that virtue ethics is true should certainly be aware of the relevant ones.
I submit that the form of his argument could be used just as well against any knowledge claim using those definitions and picking some relevant biases.
It seems plausible that our capacity for moral judgment might mirror our capacity for belief formation in that it includes crude but efficient algorithms like what we call cognitive biases in belief formation. But I don't think it follows that we can make our moral judgments 'more accurate' by removing moral 'biases' in favor of some idealized moral formula. What our crude but efficient moral heuristics are approximating is evolutionarily advantageous strategies for our memes and genes. But I don't really care about replicating the things that programmed me-- I just care about what they programmed me to care about.
In belief formation there are likely biases that have evolutionary benefits too: it is easier to deceive others if you sincerely believe you will cooperate when you are in a position to defect without retaliation, for example. But we have an outside standard to check our beliefs against -- experience. We know after many iterations of prediction and experiment which reasons for beliefs are reliable and which are not. Obviously, a good epistemology is a lot trickier than I've made it sound, but it seems like, in principle, we can make our beliefs more accurate by checking them against reality.
I can't see an analogous standard for moral judgments. This wouldn't be a big problem if our brains were cleanly divided into value-parts and belief-parts. We could then just fix the belief parts and keep the crude-but-hey-that's-how-evolution-made-us value parts. But it seems like our values and beliefs are all mixed up in our cognitive soup. We need a sieve.
Tangential public advisory: I suspect that it is a bad cached pattern to focus on the abstraction where it is memes and genes that created you rather than, say, your ecological-developmental history or your self two years ago or various plausibly ideal futures you would like to bring about &c. In the context of decision theory I'll sometimes talk about an agent inheriting the decision policy of its creator process which sometimes causes people to go "well I don't want what evolution wants, nyahhh" which invariably makes me facepalm repeatedly in despair.
I do not see how the false consensus effect advances the argument.
An LW post on an example used in the common, stronger argument against virtue ethics: that we have no character traits at all. Stronger in that it makes more ambitious claims, not in that it is more likely true.