I was under the impression that the research into biases by people like Kahneman and Tversky generally found that eliminating them was incredibly hard, and that expertise, and even familiarity with the biases in question generally didn't help at all. So this is not a particularly surprising result; what would be more interesting is if they had found anything that actually does reduce the effect of the biases.
Overcoming these biases is very easy if you have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn't care about.
Mathematicians aren't biased by being told "I colored 200 of 600 balls black" vs. "I colored all but 400 of 600 balls black", because both descriptions pick out the same state, and any question about it has a definite answer in the model used. This is true even if the model is unique to the mathematician answering the question: what matters most is consistency.
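The equivalence of the two framings is easy to check mechanically; a trivial sketch (the variable names are mine, purely for illustration):

```python
total = 600

# Framing A: "I colored 200 of 600 balls black"
black_a = 200

# Framing B: "I colored all but 400 of 600 balls black"
black_b = total - 400

# Both framings denote the very same coloring, so a consistent
# model must give identical answers to questions about it.
assert black_a == black_b  # both are 200
```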
If a moral theory can't prove the correctness of an answer to a very simple problem - a choice between just two alternatives, trading off clearly morally significant stakes (lives), with no complications (e.g. the different people who may die have no distinguishing features) - then it probably doesn't give clear answers to most other problems either, so what use is it?
I find this amusing and slightly disturbing - but the Trolley Problem seems like a terrible example. A rational person might answer based on political considerations, which "order effects" might change in everyday conversations.
which found that professional moral philosophers are no less subject to the effects of framing and order of presentation
I think some people are missing the issue. It's not that they have a problem with the Trolley Problem, but that their answers vary according to irrelevant framing effects like order of presentation.
I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily
Where did that assumption come from?
Physics professors have no such problem. Philosophy professors, however, are a different story.
If you ask physics professors questions that run counter to human intuition, I wouldn't be too sure that they get them right either.
Where did that assumption come from?
This assumption comes from expecting an expert to know the basics of their field.
If you ask physics professors questions that run counter to human intuition, I wouldn't be too sure that they get them right either.
A trained physicist's intuition is rather different from "human intuition" on physics problems, so that's unlikely.
...Teleological thinking, the belief that certain phenomena are better explained by purpose than by cause, is not scientific. An example of teleological thinking in biology would be: “Plants emit oxygen so that animals can breathe.” However, a recent study (Deborah Kelemen and others, Journal of Experimental Psychology, 2013) showed that there is a strong, but suppressed, tendency towards teleological thinking among scientists...
Framing effect in math:
"The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?" — Jerry Bona
This is a joke: although the three are all mathematically equivalent, many mathematicians find the axiom of choice to be intuitive, the well-ordering principle to be counterintuitive, and Zorn's lemma to be too complex for any intuition.
This doesn't really bother me. Philosophers' expertise is not in making specific moral judgements, but in making arguments and counterarguments. I think that is a useful skill that collectively gets us closer to the truth.
I remember, long ago, somebody wanting to emulate a simple routine for adding big numbers, using a crowd of people as an arithmetic unit. The task was simple for everyone in the crowd: just add two given digits from 0 to 9 (plus any carry passed to you), report the integer part of the result divided by 10 to the next person in the crowd, and remember your own result modulo 10.
The crowd was assembled of mathematicians. Still, at every attempt someone made an error, while adding 5 and 7 or something.
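The routine described is essentially a ripple-carry adder with one "person" per digit pair. A minimal sketch of what each person was asked to do (`crowd_add` and its details are my own illustration, not the original setup):

```python
def crowd_add(a: str, b: str) -> str:
    """Add two big decimal numbers the way the crowd did:
    each 'person' adds one pair of digits plus the carry handed
    to them, remembers the result mod 10, and passes the integer
    part of (result / 10) on to the next person."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # pad the shorter number

    carry = 0
    remembered = []  # each person's digit, least significant first
    for da, db in zip(reversed(a), reversed(b)):  # one "person" per pair
        s = int(da) + int(db) + carry
        remembered.append(s % 10)  # what this person remembers
        carry = s // 10            # what they report to the next person
    if carry:
        remembered.append(carry)
    return "".join(str(d) for d in reversed(remembered))

# e.g. crowd_add("957", "48") gives "1005"
```

Each step is trivial in isolation, which is exactly why the anecdote is striking: the errors came from the humans, not the algorithm.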
I am confused... I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily... What is going on?
Why do you think it's possible to be an expert at a barely-coherent subject?
On the other hand, in the last 100-120 years very little interesting philosophy was produced by non-professors. My favorites (Thomas Nagel, Philippa Foot, etc.) are or were all professors. Being a professor seems like a necessary, but not sufficient, condition. Or maybe it's not so much a condition as universities being good at recognizing good philosophers and throwing jobs at them; though they seem to have too many jobs and not enough good candidates.
There are four elephant-in-the-room issues surrounding rationality.
1. Rationality is more than one thing;
2. Biases are almost impossible to overcome;
3. Confirmation bias is adaptive for group discussion;
4. If biases are so harmful, why don't they get selected out?
If biases are so harmful, why don't they get selected out?
We have good reason to believe that many biases result from cognitive shortcuts designed to speed up decision making, but not in all cases. Mercier and Sperber's Argumentative Theory of Rationality suggests that confirmation bias is an adaptation to arguing things out in groups: that's why people adopt a single point of view and stick to it in the face of almost all opposition. You don't get good-quality discussion from a bunch of people saying There Are Arguments on Both Sides.
"Mercier and Sperber argue that, when you look at research that studies people in the appropriate settings, we turn out to be in fact quite good at reasoning when we are in the process of arguing; specifically, we demonstrate skill at producing arguments and at evaluating others' arguments. M&S also plead for the "rehabilitation" of confirmation bias as playing an adaptive, useful role in the production of arguments in favor of an intuitively preferred view."
Societies have systems and structures in place for ameliorating and leveraging confirmation bias. For instance, replication and crosschecking in science ameliorate the tendency of research groups to succumb to bias. Adversarial legal processes and party politics leverage the tendency, in order to get good arguments made for both sides of a question. Values such as speaking one's mind (as opposed to agreeing with leaders) and offering and accepting criticism also support rationality.
Now, teaching rationality, in the sense of learning to personally overcome bias, has a problem in that it may not be fully possible, and a further problem in that it may not be a good idea. Teaching someone to overcome confirmation bias, to see two or more sides of a story, is, in a sense, teaching them to internalise the process of argument, to be a solo rationalist. And while society perhaps needs some people like that, it perhaps doesn't need many. Forms of solo rationality training have existed for a long time, e.g. philosophy, but they do not suit a lot of people's preferences, and not a lot of people can succeed at them, since they are cognitively difficult.
If you plug solo rationalists into systems designed for the standard human, you are likely to get an impedance mismatch, not improved rationality. If you wanted to increase overall rationality by increasing average rationality, assuming that is feasible in the first place, you would have to redesign systems. But you could probably increase overall rationality by improving systems anyway... we live in a world where medicine, of all things, isn't routinely based on good-quality evidence.
Some expansion of point 4: If biases are so harmful, why don't they get selected out?
"During the last 25 years, researchers studying human reasoning and judgment in what has become known as the "heuristics and biases" tradition have produced an impressive body of experimental work which many have seen as having "bleak implications" for the rationality of ordinary people (Nisbett and Borgida 1975). According to one proponent of this view, when we reason about probability we fall victim to "inevitable illusions" (Piattelli-P...
Imagine someone finding out that "Physics professors fail on basic physics problems". This, of course, would never happen: to become an academic physicist, one has to (among a million other things) demonstrate proficiency on far harder problems than that.
Philosophy professors, however, are a different story. Cosmologist Sean Carroll tweeted a link to a paper from the Harvard Moral Psychology Research Lab, which found that professional moral philosophers are no less subject to the effects of framing and order of presentation on the Trolley Problem than non-philosophers. This seems as basic an error as, say, confusing energy with momentum, or mixing up units on a physics test.
Abstract:
We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants of the scenario or different ways of describing the case”. Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating in the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.
Some quotes (emphasis mine):
When scenario pairs were presented in order AB, participants responded differently than when the same scenario pairs were presented in order BA, and the philosophers showed no less of a shift than did the comparison groups, across several types of scenario.
[...] we could find no level of philosophical expertise that reduced the size of the order effects or the framing effects on judgments of specific cases. Across the board, professional philosophers (94% with PhD’s) showed about the same size order and framing effects as similarly educated non-philosophers. Nor were order effects and framing effects reduced by assignment to a condition enforcing a delay before responding and encouraging participants to reflect on “different variants of the scenario or different ways of describing the case”. Nor were order effects any smaller for the majority of philosopher participants reporting antecedent familiarity with the issues. Nor were order effects any smaller for the minority of philosopher participants reporting expertise on the very issues under investigation. Nor were order effects any smaller for the minority of philosopher participants reporting that before participating in our experiment they had stable views about the issues under investigation.
I am confused... I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily... What is going on?