In "How to Make Cognitive Illusions Disappear: Beyond Heuristics and Biases", Gerd Gigerenzer attempts to show that the whole "Heuristics and Biases" approach to analysing human reasoning is fundamentally flawed and incorrect.
In that he fails. His case rests on the frequentist argument that probabilities cannot be assigned to single events or to situations of subjective uncertainty, thus removing the possibility that people could be "wrong" in the scenarios where the biases were tested. (It is interesting to note that he ends up constructing "Probabilistic Mental Models", which are frequentist ways of assigning subjective probabilities - just as long as you don't call them that!)
But that dodge isn't sufficient. Take the famous example of the conjunction fallacy, where most people are tricked into assigning a higher probability to "Linda is a bank teller AND is active in the feminist movement" than to "Linda is a bank teller". This error persists even when people take bets on the different outcomes. By betting more (or anything at all) on the first option, people are giving up free money. This is a failure of human reasoning, whatever one thinks about the morality of assigning probabilities to single events.
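To see why the bet is dominated, here's a minimal sketch of my own (not from the paper): whatever joint probabilities you assign to Linda's attributes, the worlds where "teller AND feminist" pays out are a subset of the worlds where "teller" pays out. The particular numbers below are made up for illustration.

```python
# Each world is a pair (is_teller, is_feminist); prob assigns each a probability.
def teller_vs_both(prob):
    p_teller = sum(p for (t, f), p in prob.items() if t)
    p_both = sum(p for (t, f), p in prob.items() if t and f)
    return p_teller, p_both

# An arbitrary assignment (made up) that makes "feminist" very likely on its own.
example = {(True, True): 0.05, (True, False): 0.01,
           (False, True): 0.90, (False, False): 0.04}

p_teller, p_both = teller_vs_both(example)
assert p_both <= p_teller        # holds for every possible assignment: the
print(f"{p_teller:.2f} {p_both:.2f}")  # "both" worlds are a subset -> 0.06 0.05
```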
However, though the article fails to prove its case, it presents a lot of powerful results that may change how we think about biases. It presents weak evidence that people may be instinctive frequentist statisticians, and much stronger evidence that many biases can go away when the problems are presented in frequentist ways.
Now, it's known that people are more comfortable with frequencies than with probabilities. The examples in the paper extend that intuition. For instance, when people are asked:
There are 100 persons who fit the description above (i.e., Linda's). How many of them are:
(a) bank tellers
(b) bank tellers and active in the feminist movement.
Then the conjunction fallacy essentially disappears (22% of people make the error, rather than 85%). That is a huge difference.
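One way to see why counting helps - a toy sketch of mine, with made-up attribute rates rather than Gigerenzer's data: once you tally actual people, group (b) is visibly a sub-group of group (a), so its count can never be higher.

```python
import random
random.seed(0)

# 100 simulated persons fitting the description; the 10% and 60% rates
# are invented purely for illustration.
people = [(random.random() < 0.10, random.random() < 0.60)  # (teller, feminist)
          for _ in range(100)]

a = sum(t for t, f in people)        # (a) bank tellers
b = sum(t and f for t, f in people)  # (b) bank tellers who are also feminists
print(a, b)  # b <= a always: you can't be counted in (b) without being in (a)
```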
Similarly, overconfidence. When people were given 50 general knowledge questions and asked to rate their confidence in their answer to each question, they were systematically, massively overconfident. But when they were asked afterwards "How many of these 50 questions do you think you got right?", they were... underconfident. But only very slightly: they were essentially correct in their self-assessments. This can be seen as a use of the outside view - a use that is, in this case, entirely justified. People know their overall accuracy much better than they know their specific accuracy.
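As a rough sketch of the two scoring methods - with numbers invented to mirror the reported pattern, not taken from the paper - item-level calibration compares average stated confidence against actual accuracy, while the global question compares the estimated total against the actual total:

```python
# Invented numbers mirroring the pattern described above (not the paper's data).
n_correct = 32               # actually got 32 of 50 right
per_item_confidence = 0.80   # average stated confidence per question
global_estimate = 30         # answer to "how many did you get right?"

accuracy = n_correct / 50                          # 0.64
item_overconfidence = per_item_confidence - accuracy
print(f"item-level: {item_overconfidence:+.2f}")         # +0.16 -> big overconfidence
print(f"global:     {global_estimate - n_correct:+d}")   # -2 -> nearly spot on
```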
A more intriguing example makes the base-rate fallacy disappear. Presenting the problem in a frequentist way makes the fallacy vanish when computing false positives for tests on rare diseases - that's compatible with the general theme. But it gets really interesting when people actively participate in the randomisation process. In the standard problem, students were given thumbnail descriptions of individuals and asked to guess whether each was more likely to be an engineer or a lawyer. Half the time the students were told the descriptions were drawn at random from a group of 30 lawyers and 70 engineers; the other half, the proportions were reversed. It turns out that students gave similar answers in both setups, showing they were neglecting the 30/70 or 70/30 base-rate information.
Gigerenzer modified the setups by telling the students the 30/70 or 70/30 proportions and then having the students themselves draw each description (blindly) out of an urn before assessing it. In that case, base-rate neglect disappears.
Now, I don't find that revelation quite as superlatively exciting as Gigerenzer does. Having the students draw the description out of the urn is pretty close to whacking them on the head with the base-rate: it really focuses their attention on this aspect, and once it's been brought to their attention, they're much more likely to make use of it. It's still very interesting, though, and suggests some practical ways of overcoming the base-rate problem that stop short of saying "hey, don't forget the base-rate".
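For concreteness, here is the arithmetic that base-rate neglect gets wrong in the disease-test version mentioned above, written out in the natural-frequency style the paper favours. The disease and test rates below are standard textbook values I've assumed, not Gigerenzer's:

```python
# Natural-frequency version of the rare-disease test (rates assumed, standard
# textbook values rather than the paper's).
population = 10_000
base_rate = 0.001        # 1 person in 1000 has the disease
sensitivity = 1.0        # simplification: the test catches every true case
false_positive = 0.05    # 5% of healthy people still test positive

sick = population * base_rate                      # 10 people
true_pos = sick * sensitivity                      # 10 positive and sick
false_pos = (population - sick) * false_positive   # ~500 positive but healthy
print(f"P(sick | positive) = {true_pos / (true_pos + false_pos):.1%}")  # ~2.0%

# The lawyer/engineer task is the same arithmetic: flipping the base rate
# from 30/70 to 70/30 should roughly flip the posterior odds, which is
# exactly what the unprompted subjects failed to do.
```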
There is a large literature out there critiquing the heuristics and biases tradition. Even if these critiques fail to prove their point, they're certainly useful for qualifying the heuristics and biases results, and, more interestingly, for suggesting practical ways of combating their effects.