I was under the impression that the research into biases by people like Kahneman and Tversky generally found that eliminating them was incredibly hard, and that expertise, and even familiarity with the biases in question, generally didn't help at all. So this is not a particularly surprising result; what would be more interesting is if they had found anything that actually does reduce the effect of the biases.
Overcoming these biases is very easy if you have an explicit theory that you use for moral reasoning, one in which results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn't care about.
Mathematicians aren't biased by being told "I colored 200 of 600 balls black" vs. "I colored all but 400 of 600 balls black", because the two descriptions denote the same state, and the question "how do I color the most balls?" has a correct answer in the model used. This is true even if the model is unique to the mathematician answering the question: the most important thing is consistency.
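To make the consistency point concrete, here is a minimal sketch in Python (the framing strings and the `black_count`/`judge` helpers are illustrative names, not from any study): once both descriptions are normalized to the same underlying state, a rule defined on that state cannot answer differently.

```python
# Minimal sketch: two framings of the same coloring normalize to one state,
# so any decision rule defined on the state answers identically for both.
TOTAL = 600

def black_count(framing: str) -> int:
    """Normalize a framing to the underlying state: how many balls are black."""
    if framing == "I colored 200 of 600 balls black":
        return 200
    if framing == "I colored all but 400 of 600 balls black":
        return TOTAL - 400  # 600 - 400 = 200: same state, described differently
    raise ValueError(f"unknown framing: {framing!r}")

def judge(framing: str) -> str:
    """A decision rule that sees only the state, never the wording."""
    return "acceptable" if black_count(framing) >= 200 else "unacceptable"

a = "I colored 200 of 600 balls black"
b = "I colored all but 400 of 600 balls black"
assert black_count(a) == black_count(b) == 200
assert judge(a) == judge(b)  # the framing cannot move the answer
```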
If a moral theory can't prove the correctness of an answer to a very simple problem - a choice between just two alternatives, trading off clearly morally significant quantities (lives), without any complications (e.g. the different people who may die don't have any distinguishing features) - then it probably doesn't give clear answers to most other problems either, so what use is it?
I find this amusing and slightly disturbing - but the Trolley Problem seems like a terrible example. A rational person might answer based on political considerations, which could legitimately shift with the order of presentation in everyday conversation.
which found that professional moral philosophers are no less subject to the effects of framing and order of presentation
I think some people are missing the issue. It's not that they have a problem with the Trolley Problem, but that their answers vary according to irrelevant framing effects like order of presentation.
I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily
Where did that assumption come from?
Physics professors have no such problem. Philosophy professors, however, are a different story.
If you ask physics professors questions that run counter to human intuition, I wouldn't be too sure that they get them right either.
Where did that assumption come from?
This assumption comes from expecting an expert to know the basics of their field.
If you ask physics professors questions that run counter to human intuition, I wouldn't be too sure that they get them right either.
A trained physicist's intuition is rather different from "human intuition" on physics problems, so that's unlikely.
...Teleological thinking, the belief that certain phenomena are better explained by purpose than by cause, is not scientific. An example of teleological thinking in biology would be: “Plants emit oxygen so that animals can breathe.” However, a recent study (Deborah Kelemen and others, Journal of Experimental Psychology, 2013) showed that there is a strong, but suppressed, tendency towards teleological thinking among scientists...
Framing effect in math:
"The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?" — Jerry Bona
This is a joke: although the three are all mathematically equivalent, many mathematicians find the axiom of choice to be intuitive, the well-ordering principle to be counterintuitive, and Zorn's lemma to be too complex for any intuition.
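For reference, here are standard textbook statements of the three (this is one common phrasing, not the only one):

```latex
\begin{itemize}
  \item \textbf{Axiom of Choice.} For every family $(S_i)_{i \in I}$ of nonempty sets
        there is a choice function $f \colon I \to \bigcup_{i \in I} S_i$ with
        $f(i) \in S_i$ for every $i \in I$.
  \item \textbf{Well-ordering principle.} Every set admits a well-order: a total
        order in which every nonempty subset has a least element.
  \item \textbf{Zorn's lemma.} If every chain in a nonempty partially ordered set
        $P$ has an upper bound in $P$, then $P$ contains a maximal element.
\end{itemize}
```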
This doesn't really bother me. Philosophers' expertise is not in making specific moral judgements, but in making arguments and counterarguments. I think that is a useful skill that collectively gets us closer to the truth.
I remember, long ago, somebody wanted to emulate a small routine for adding big numbers, using a crowd of people as an arithmetic unit. The task was simple for everyone in the crowd: add two given digits from 0 to 9 (plus the carry passed to you), report the integer part of the sum divided by 10 to the next person in the crowd, and remember your result modulo 10.
The crowd was assembled from mathematicians. Still, on every attempt someone made an error while adding 5 and 7 or something.
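For what it's worth, the routine the crowd was emulating is just ripple-carry addition. A minimal sketch in Python, reconstructing the anecdote (the function name `crowd_add` is mine):

```python
def crowd_add(a: list[int], b: list[int]) -> list[int]:
    """Add two big numbers given as equal-length digit lists, least significant first.

    Each loop iteration plays one person in the crowd: add two digits plus
    the carry handed to you, pass the new carry on, keep the sum modulo 10.
    """
    digits, carry = [], 0
    for x, y in zip(a, b):
        total = x + y + carry      # one person's entire job
        carry = total // 10        # "report the integer part divided by 10"
        digits.append(total % 10)  # "remember your result modulo 10"
    if carry:
        digits.append(carry)
    return digits

# 579 + 846 = 1425, digits written least significant first
assert crowd_add([9, 7, 5], [6, 4, 8]) == [5, 2, 4, 1]
```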
I am confused... I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily... What is going on?
Why do you think it's possible to be an expert at a barely-coherent subject?
On the other hand, in the last 100-120 years very little interesting philosophy was produced by non-professors. My favorites - Thomas Nagel, Philippa Foot, etc. - are or were all profs. It seems like being a professor is a necessary, but not sufficient, condition. Or maybe not so much a condition as universities being good at recognizing good philosophers and throwing jobs at them, though they seem to have too many jobs and not enough good candidates.
Imagine someone finding out that "Physics professors fail on basic physics problems". This, of course, would never happen. To become a physicist in academia, one has to (among a million other things) demonstrate proficiency on far harder problems than that.
Philosophy professors, however, are a different story. Cosmologist Sean Carroll tweeted a link to a paper from the Harvard Moral Psychology Research Lab, which found that professional moral philosophers are no less subject to the effects of framing and order of presentation on the Trolley Problem than non-philosophers. This seems as basic an error as, say, confusing energy with momentum, or mixing up units on a physics test.
Abstract:
We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants of the scenario or different ways of describing the case”. Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating in the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.
Some quotes (emphasis mine):
When scenario pairs were presented in order AB, participants responded differently than when the same scenario pairs were presented in order BA, and the philosophers showed no less of a shift than did the comparison groups, across several types of scenario.
[...] we could find no level of philosophical expertise that reduced the size of the order effects or the framing effects on judgments of specific cases. Across the board, professional philosophers (94% with PhD’s) showed about the same size order and framing effects as similarly educated non-philosophers. Nor were order effects and framing effects reduced by assignment to a condition enforcing a delay before responding and encouraging participants to reflect on “different variants of the scenario or different ways of describing the case”. Nor were order effects any smaller for the majority of philosopher participants reporting antecedent familiarity with the issues. Nor were order effects any smaller for the minority of philosopher participants reporting expertise on the very issues under investigation. Nor were order effects any smaller for the minority of philosopher participants reporting that before participating in our experiment they had stable views about the issues under investigation.
I am confused... I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily... What is going on?