Comment author: anonym 05 September 2011 05:30:24PM *  3 points [-]

I don't recall any discussion on LW -- and couldn't find any with a quick search -- about the "Great Rationality Debate", which Stanovich summarizes as:

An important research tradition in the cognitive psychology of reasoning--called the heuristics and biases approach--has firmly established that people’s responses often deviate from the performance considered normative on many reasoning tasks. For example, people assess probabilities incorrectly, they display confirmation bias, they test hypotheses inefficiently, they violate the axioms of utility theory, they do not properly calibrate degrees of belief, they overproject their own opinions onto others, they display illogical framing effects, they uneconomically honor sunk costs, they allow prior knowledge to become implicated in deductive reasoning, and they display numerous other information processing biases (for summaries of the large literature, see Baron, 1998, 2000; Dawes, 1998; Evans, 1989; Evans & Over, 1996; Kahneman & Tversky, 1972, 1984, 2000; Kahneman, Slovic, & Tversky, 1982; Nickerson, 1998; Shafir & Tversky, 1995; Stanovich, 1999; Tversky, 1996).

It has been common for these empirical demonstrations of a gap between descriptive and normative models of reasoning and decision making to be taken as indications that systematic irrationalities characterize human cognition. However, over the last decade, an alternative interpretation of these findings has been championed by various evolutionary psychologists, adaptationist modelers, and ecological theorists (Anderson, 1990, 1991; Chater & Oaksford, 2000; Cosmides & Tooby, 1992; 1994b, 1996; Gigerenzer, 1996a; Oaksford & Chater, 1998, 2001; Rode, Cosmides, Hell, & Tooby, 1999; Todd & Gigerenzer, 2000). They have reinterpreted the modal response in most of the classic heuristics and biases experiments as indicating an optimal information processing adaptation on the part of the subjects. It is argued by these investigators that the research in the heuristics and biases tradition has not demonstrated human irrationality at all and that a Panglossian position (see Stanovich & West, 2000) which assumes perfect human rationality is the proper default position to take.

Stanovich, K. E., & West, R. F. (2003). Evolutionary versus instrumental goals: How evolutionary psychology misconceives human rationality. In D. E. Over (Ed.), Evolution and the psychology of thinking: The debate. Hove, UK: Psychology Press. [Series on Current Issues in Thinking and Reasoning]

The lack of discussion seems like a curious gap, given the strong support enjoyed by both schools of thought (the one that Cosmides/Tooby et al. represent and the one that Kahneman/Tversky et al. represent), and given that the two are in radical opposition on the question of the nature of human rationality and purported deviations from it, both of which are central subjects of this site.

I don't expect to find much support here for the Tooby/Cosmides position, but I'm surprised that there doesn't seem to have been any discussion of the issue. Maybe I've missed relevant discussions or posts, though.

Comment author: rehoot 06 September 2011 03:39:42AM 3 points [-]

I don't understand the basis for the Cosmides and Tooby claim. In their first study, Cosmides and Tooby (1996) solved the difficult part of a Bayesian problem for the subjects, so that the solution could be found by a "cut-and-paste" approach. The second study was about the same, with some unnecessary percentages deleted (they were not needed for the cut-and-paste solution, yet the authors were surprised when performance improved). Study 3 was essentially identical to Study 2. Study 4 had the respondents literally fill in the blanks of a diagram based on the numbers written in the question; 92% of the students answered that one correctly. Studies 5 and 6 reintroduced the percentages, and the students again made many errors.

Instead of showing innate, perfect reasoning, the study tells me that students at Yale have trouble with Bayesian reasoning when the question is framed in terms of percentages. The easy versions do not seem to demonstrate the kind of complex reasoning that is needed to see a problem and frame it yourself, without somebody framing it for you. Perhaps Cosmides and Tooby are correct that there is some evidence that people use a "calculus of probability," but their study showed that people cannot frame the problems without overwhelming amounts of help from somebody who already knows the correct answer.
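For concreteness, here is a minimal sketch of the kind of item at stake. The numbers are the ones from the classic Casscells et al. medical-diagnosis problem that Cosmides and Tooby's materials were modeled on (prevalence 1 in 1000, 5% false-positive rate, and, as a simplifying assumption, a test that never misses a true case); the point is that the percentage framing and the frequency framing are mathematically the same question, so any performance difference is about framing, not about the underlying math:

```python
from fractions import Fraction

# Illustrative numbers (Casscells et al.-style problem):
prevalence = Fraction(1, 1000)        # 1 in 1000 people has the disease
false_positive_rate = Fraction(5, 100)  # 5% of healthy people test positive
sensitivity = Fraction(1)             # assume the test never misses a true case

# Percentage framing: apply Bayes' theorem directly.
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

# Frequency framing: count outcomes in a population of 1000 people.
# 1 person is sick (and tests positive); about 50 of the 999 healthy
# people also test positive, so roughly 1 positive in 51 is a true case.
population = 1000
sick = population * prevalence
healthy_positives = (population - sick) * false_positive_rate
freq_answer = sick / (sick + healthy_positives)

print(float(p_disease_given_positive))  # about 0.02, i.e. roughly 2%
print(float(freq_answer))               # identical to the Bayes answer
```

Both framings yield the same posterior of about 2%, far below the "95%" answer that subjects typically give when the problem is stated in percentages.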

Reference

Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58, 1–73. DOI: 10.1016/0010-0277(95)00664-8

Comment author: rehoot 04 September 2011 03:21:53AM 2 points [-]

Yvain said:

You can end up utilitarian either because you're a psychopath and don't have the special moral module - in which case you default to general purpose reasoning - or because you're very philosophical and have a specific preference for determining moral questions by the same logic with which you determine everything else, thus deliberately overruling the special moral module.

I participate in a utilitarian forum, and from that experience I would add to the quote above that some people encountered emotional arguments about "painism" or "speciesism" (e.g., arguments from Singer, Ryder, and the like) and followed those arguments to utilitarianism. I would expect that there are few people in this category as a percentage of the total population, in part because few people seriously study ethics of any kind, and fewer still find their way to that end.

Comment author: rehoot 18 August 2011 08:26:55PM 0 points [-]

I looked at the IB web page, and it appears to teach "critical thinking" as opposed to giving direct instruction in logic or other more practical reasoning skills. The first problem is that there is a lack of agreement about what critical thinking is (Lloyd and Bahr, 2010). Another problem is whether critical thinking skills generalize. What I know of critical-thinking assessment is that there is an emphasis on a high-level approach to problems and a near or complete absence of formal logic, math, statistics, or other specific skills that might help people solve complex problems. There is no way that a class in critical thinking can substitute for skills in logic, math, or statistics, but if it introduces topics that help kids advance to the next level, then it might be worth it. If teachers assert that people who complete the course "have good reasoning skills," then the class could be doing more harm than good.

Reference

Lloyd, M., & Bahr, N. (2010). Thinking critically about critical thinking in higher education. International Journal for the Scholarship of Teaching and Learning, 4(2).
http://academics.georgiasouthern.edu/ijsotl/v4n2/articles/PDFs/_LloydBahr.pdf