Ever since Tversky and Kahneman began gathering evidence purporting to show that humans suffer from a large number of cognitive biases, other psychologists and philosophers have criticized these findings. For instance, the philosopher L. J. Cohen argued in the 1980s that there was something conceptually incoherent about the notion that most adults are irrational (with respect to a given problem). By some sort of Wittgensteinian logic, he held that the majority's way of reasoning is by definition right. (Not a high point in the history of analytic philosophy, in my view.) See chapter 8 of this book (where Gigerenzer, below, is also discussed).
Another attempt to resurrect human rationality is due to Gerd Gigerenzer and other psychologists. They have a) shown that if you tweak some of the heuristics-and-biases experiments (i.e. those of the research program led by Tversky and Kahneman) only a little - for instance by expressing probabilities in terms of frequencies - people make far fewer mistakes, and b) argued, on the back of this, that the heuristics we use are in many situations good (and fast and frugal) rules of thumb (which explains why they are evolutionarily adaptive). Regarding this, I don't think Tversky and Kahneman ever doubted that the heuristics we use are quite useful in many situations. Their point was rather that there are lots of naturally occurring set-ups which fool our fast and frugal heuristics. Gigerenzer's findings are not completely uninteresting - it seems to me he does nuance the thesis of massive irrationality a bit - but his claims to the effect that these heuristics are rational in a strong sense are wildly overblown, in my opinion. The Gigerenzer vs. Tversky/Kahneman debates are well discussed in this article (although I think the authors are too kind to Gigerenzer).
A strong argument against attempts to save human rationality is the argument from individual differences, championed by Keith Stanovich. He argues that the fact that some highly intelligent subjects consistently avoid falling prey to the Wason selection task, the conjunction fallacy, and other fallacies indicates that the answers psychologists have traditionally regarded as normatively correct are not misguided after all: the traditional norm stands, and most subjects really are making a mistake by its lights.
Hence I side with Tversky and Kahneman in this debate. Let me just mention one interesting and possibly successful method for disputing some supposed biases. The method is to argue that people have other kinds of evidence than the standard interpretation assumes, and that given this new interpretation of the evidence, the supposed bias in question is in fact not a bias. For instance, it has been suggested that the "false consensus effect" can be re-interpreted in this way:
The False Consensus Effect
Bias description: People tend to imagine that everyone responds the way they do. They tend to see their own behavior as typical. The tendency to exaggerate how common one’s opinions and behavior are is called the false consensus effect. For example, in one study, subjects were asked to walk around on campus for 30 minutes, wearing a sign board that said "Repent!". Those who agreed to wear the sign estimated that on average 63.5% of their fellow students would also agree, while those who disagreed estimated 23.3% on average.
Counterclaim (Dawes & Mulford, 1996): The correctness of reasoning is not estimated on the basis of whether or not one arrives at the correct result. Instead, we look at whether people reach reasonable conclusions given the data they have. Suppose we ask people to estimate whether an urn contains more blue balls or red balls, after allowing them to draw one ball. If one person first draws a red ball, and another person draws a blue ball, then we should expect them to give different estimates. In the absence of other data, you should treat your own preferences as evidence for the preferences of others. Although the actual mean for people willing to carry a sign saying "Repent!" probably lies somewhere in between the estimates given, these estimates are quite close to the one-third and two-thirds estimates that would arise from a Bayesian analysis with a uniform prior distribution of belief. A study by the authors suggested that people do actually give their own opinion roughly the right amount of weight.
(The quote is from an excellent Less Wrong article on this topic by Kaj Sotala. See also this post by him, this by Andy McKenzie, this by Stuart Armstrong and this by lukeprog. I'm sure there are more that I've missed.)
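To make the Bayesian point in the quote concrete, here is a minimal sketch of my own (an illustration under a uniform-prior assumption, not Dawes & Mulford's actual analysis). Treat the fraction of students who would agree to wear the sign as unknown with a uniform Beta(1, 1) prior, and update on the one data point you have, namely your own choice. Laplace's rule of succession then gives posterior means of 2/3 and 1/3, close to the 63.5% and 23.3% estimates above.

```python
# Minimal sketch (my illustration): posterior mean of the proportion p of
# students who would agree, given a uniform Beta(1, 1) prior and a single
# observation - one's own choice. By Laplace's rule of succession the
# posterior mean is (successes + 1) / (observations + 2).

def posterior_mean_agree(own_choice_agrees: bool) -> float:
    """Posterior mean of p after observing only one's own choice."""
    successes = 1 if own_choice_agrees else 0
    observations = 1
    return (successes + 1) / (observations + 2)

print(posterior_mean_agree(True))   # ~0.667 -- cf. the 63.5% estimate of those who agreed
print(posterior_mean_agree(False))  # ~0.333 -- cf. the 23.3% estimate of those who refused
```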
It strikes me that the notion that people are "massively flawed" is something of an intellectual cornerstone of the Less Wrong community (e.g. note the names "Less Wrong" and "Overcoming Bias"). In the light of this it would be interesting to hear what people have to say about the rationality wars. Do you all agree that people are massively flawed?
Let me make two final notes to keep in mind when discussing these issues. Firstly, even though the heuristics and biases program is sometimes seen as pessimistic, one could turn the tables: if its proponents are right, we should be able to improve massively (even though Kahneman himself seems to think that this is hard to do in practice). I take it that CFAR and lots of LessWrongers who attempt to "refine their rationality" assume that this is the case. On the other hand, if Gigerenzer or Cohen are right and we already are very rational, then it would seem hard to do much better. So in a sense the latter are more pessimistic (and conservative) than the former.
Secondly, note that parts of the rationality wars seem to be merely verbal, revolving around how "rationality" is to be defined (tabooing this word is very often a good idea). The real question is not whether the fast and frugal heuristics are in some sense rational, but whether there are other mental algorithms that are more reliable and effective, and whether it is plausible to think that we could learn to use them instead on a large scale.
Certain models of the Pentium processor had errors in their FPU. Some floating point calculations would give the wrong answers. The reason was that in a lookup table inside the FPU, a few values were wrong.
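To illustrate the shape of that failure, here is a toy sketch of my own (nothing like the actual SRT division algorithm the Pentium used): a divider that multiplies by a table of reciprocals, where one entry was corrupted at design time. The hardware "works exactly as manufactured" - it faithfully reads its table - and still fails the spec, which is ordinary division.

```python
# Toy sketch (not the real Pentium FDIV mechanism): division via a lookup
# table of reciprocals, with one corrupted entry. The function does exactly
# what its table says; it just doesn't do what the division spec says.

RECIPROCAL_TABLE = {d: 1.0 / d for d in range(1, 17)}  # the intended table
RECIPROCAL_TABLE[13] = 1.0 / 12.9                      # one wrong entry, baked in

def table_divide(x: float, d: int) -> float:
    """Divide x by d using the (possibly corrupted) lookup table."""
    return x * RECIPROCAL_TABLE[d]

print(table_divide(100.0, 7))   # fine: agrees with 100 / 7
print(table_divide(100.0, 13))  # "as designed", but wrong: 7.75..., not 7.69...
```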
Now, consider the following imaginary conversation:
Customer: "There's a bug in your latest Pentium." (Presents copious evidence ruling out all other possible causes of the errors.)
Intel: "Those aren't errors, the chip's working exactly as designed. Look, here's the complete schematic of the chip, here's the test results for the actual processor, you can see it's working exactly as manufactured."
Customer: "But the schematic is wrong. Look, these values that it lists for that lookup table are wrong, that's why the chip's giving wrong answers."
Intel: "Those values are exactly the ones the engineers put there. What does it mean to say that they're 'wrong'?"
Customer: "It means they're wrong, that's what it means. The chip was supposed to do floating point divisions according to this other spec here." (Gestures towards relevant standards document.) "It doesn't. Somewhere between there and the lookup table, someone must have made a mistake."
Intel: "The engineers designing it took the spec and made that lookup table. The table is exactly what it they made it to be. It makes no sense to call it 'wrong'."
Customer: "The processor says that 4195835/3145727 = 1.333820449136241002. The right answer is 1.333739068902037589. That's a huge error, compared with the precision it's supposed to give."
Intel: "It says 1.333820449136241002, so that's the answer it was designed to give. What does that other computation have to do with it? That's not the calculation it does. I still can't see the problem."
Customer: "It's supposed to be doing division. It's not doing division!"
Intel: "But there are lots of other examples it gets right. You're presenting it with the wrong problems."
Customer: "It's supposed to be right for all examples. It isn't."
Intel: "It does exactly what it does. If it doesn't do something else that you think it ought to be doing instead, that's your problem. And if you want division, it's still a pretty good approximation."
I think this parallels a lot of the discussion on "biases".
A version of Intel's argument is used by Objectivists to prove that there is no perceptual error.