It is generally assumed around here that people can learn to be more rational. That's the purpose of The Sequences, after all. And veteran Less Wrongers do seem (to me) vastly more rational than the average person.

But maybe it's a selection effect: maybe Less Wrong doesn't make people more rational; it's just that the people who are already relatively rational are the ones most likely to be attracted to Less Wrong.

Daniel Willingham (2008) thinks it's pretty hard to teach rationality / critical thinking,1 but what evidence do we have on the matter? Is rationality teachable?

 

Statistics and logic training

Statistics training appears to help. Schoemaker (1979) found that students who had taken a statistics course gave more consistent answers to questions about gambles than those who hadn't taken a statistics course. Students who had not taken the course were also more likely to bid more money than they could possibly win.

Fong et al. (1986) found that statistical training sometimes transfers to real-world decision making. Half the men in a statistics course were interviewed at the beginning of the course, and the other half were interviewed at its end. The interview was ostensibly about sports, but was intended to test for skills in applying statistics to everyday life. The men interviewed at the end of the course did significantly better at giving statistics-informed answers than those interviewed at the beginning of the course.

For example, when asked why a Rookie of the Year in baseball usually does less well in his second year than in his first year, those interviewed at the beginning of the course tended to give answers like "he's resting on his laurels; he's not trying so hard his second year," while those interviewed at the end of the course tended to give answers which appreciated regression toward the mean.
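Regression toward the mean falls out of a few lines of simulation. The following sketch (my own illustration, not from Fong et al.) models each season's stat as true skill plus luck: whoever leads in year one was probably lucky as well as good, so his year-two number usually falls back toward his skill level even though nothing about him changed.

```python
import random

random.seed(0)

def season(skill):
    # Observed stat = true skill + one season's worth of luck.
    return skill + random.gauss(0, 1)

def rookie_regresses():
    """Simulate 100 rookies; return True if the year-one leader does worse in year two."""
    skills = [random.gauss(0, 1) for _ in range(100)]
    year1 = [season(s) for s in skills]
    year2 = [season(s) for s in skills]
    best = max(range(100), key=lambda i: year1[i])  # "Rookie of the Year"
    return year2[best] < year1[best]

drop_rate = sum(rookie_regresses() for _ in range(1000)) / 1000
print(drop_rate)  # typically around 0.9: the year-one leader usually falls back
```

No laurels or slacking are modeled here; the sophomore slump emerges purely from selecting on an extreme, noisy observation.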

What about logic? Nisbett et al. (1987) conducted several tests on the effects of different kinds of training on logical skills. Perhaps surprisingly, they found that logical skills (as measured by their tests) did not improve during courses on formal logic, informal fallacies, or two years of graduate courses in chemistry, but did improve as a result of two years of graduate courses in law, medicine, or psychology. Additionally, Lipman (1983) found that a Philosophy for Children course increased the logical thinking skills of children in grades four to eight.

 

Debiasing

To examine the conflict that can arise between fast, intuitive reasoning and slow, deliberate reasoning, consider the following problems:

Imagine that in order to win a prize you have to pick a red marble from one of two urns (Urn A and B). Urn A contains 20 red and 80 blue marbles, and Urn B contains 1 red and 9 blue marbles. When you respond to the task you can compare the ratio of winning marbles in each urn (20% vs 10%), which requires some time, mental effort, and computations, or you can simply rely on the feeling/intuition that it is preferable to pick from the urn with more 'favourable events'. In this example both processes cue the normatively correct answer (that is, Urn A). On the other hand, it is possible to set up a task where [intuitive] and [deliberative] reasoning cue different responses. For example, if you can choose between picking a marble from an urn containing 10 red and 90 blue marbles, or from an urn containing 2 red and 8 blue marbles, the feeling/intuition that it is preferable to pick from the urn with more 'favourable events' results in a normatively incorrect choice.2
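The arithmetic behind both urn setups can be checked directly (a small sketch; the urn contents are taken from the quoted passage):

```python
from fractions import Fraction

def p_red(red, blue):
    """Probability of drawing a red marble from an urn."""
    return Fraction(red, red + blue)

# First setup: intuition ("more favourable events") and the ratios agree.
assert p_red(20, 80) > p_red(1, 9)   # 20% > 10%, so Urn A is correct either way

# Second setup: the urn with more red marbles is the *worse* choice.
assert p_red(10, 90) < p_red(2, 8)   # 10% < 20%, so intuition misleads here
```

The second setup is exactly the conflict case: the fast system counts favourable events, while the slow system computes the ratio.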

When faced with such conflicts, most people tend to give intuitive answers rather than normatively correct answers.3 However, some studies have found that those with greater cognitive capacity (more working memory, etc.) are less likely to use intuitions inappropriately.4 Moreover, biases increase when working memory is burdened with cognitive load.5 On the other hand, many biases are just as prevalent among people of high intelligence and greater cognitive capacity as among others.6

So: does debiasing work? Can resistance to cognitive biases be taught?

Preliminary evidence suggests that debiasing can be done:

  • A simple instruction to "think about alternatives" can promote resistance to overconfidence and confirmation bias. In one study, subjects asked to generate their own hypotheses were more sensitive to their accuracy than subjects asked to choose from among pre-picked hypotheses.7 Another study required subjects to list reasons for and against each of the possible answers to each question on a quiz before choosing an answer and assessing the probability of its being correct. This process resulted in more accurate confidence judgments relative to a control group.8
  • Training in microeconomics can help subjects avoid the sunk cost fallacy.9 
  • Because people avoid the base rate fallacy more often when they encounter problems phrased in terms of frequencies instead of probabilities,10 teaching people to translate probabilistic reasoning tasks into frequency formats improves their performance.11 
  • Warning people about biases can decrease their prevalence. So far, this has been demonstrated to work with regard to framing effects,12 hindsight bias,13 and the outcome effect,14 though attempts to mitigate anchoring effects by warning people about them have produced weak results so far.15
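The frequency-format point can be made concrete with a classic base-rate problem. The numbers below are illustrative, not taken from the cited studies: a disease with 1% prevalence, a test with 80% sensitivity and a 10% false-positive rate. The probability format requires applying Bayes' theorem; the frequency format just counts imagined people, yet yields the identical answer.

```python
from fractions import Fraction

# Illustrative numbers (not from the cited studies).
prevalence = Fraction(1, 100)   # 1% of people have the disease
sensitivity = Fraction(8, 10)   # 80% of the sick test positive
false_pos_rate = Fraction(1, 10)  # 10% of the healthy also test positive

# Probability format: apply Bayes' theorem directly.
p_positive = prevalence * sensitivity + (1 - prevalence) * false_pos_rate
bayes = prevalence * sensitivity / p_positive

# Frequency format: imagine 1000 people and just count.
sick = 1000 * prevalence                       # 10 people have the disease
true_pos = sick * sensitivity                  # 8 of them test positive
false_pos = (1000 - sick) * false_pos_rate     # 99 healthy people also test positive
freq = true_pos / (true_pos + false_pos)

assert bayes == freq   # same answer; the counting version is easier to follow
print(float(bayes))    # ≈ 0.0748: even after a positive test, illness is unlikely
```

The counting version makes the base rate hard to ignore, which is plausibly why frequency formats reduce the base rate fallacy.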

 

Conclusion

To be sure, many attempts to teach rationality have failed. But with results like those cited above, when I consider the prospects for teaching rationality I am hopeful, and left with a sense that more is possible. I believe we can indeed raise the sanity waterline.

 

 

Notes

1 'Critical thinking' usually refers to a subset of some of the most basic skills sometimes called 'rationality skills' by Less Wrongers, and usually the more qualitative forms of those skills, and is thus more aligned with Traditional Rationality than with Technical Rationality. As Willingham puts it:

In layperson’s terms, critical thinking consists of seeing both sides of an issue, being open to new evidence that disconfirms your ideas, reasoning dispassionately, demanding that claims be backed by evidence, deducing and inferring conclusions from available facts, solving problems, and so forth.

2 Chiesi et al. (2011).

3 See, e.g. Klaczynski (2001).

4 Stanovich & West (2000, 2008a).

5 De Neys (2006a, 2006b).

6 Stanovich & West (2008b, 2008c).

7 Koehler (1994).

8 Koriat et al. (1980). Also see Soll & Klayman (2004); Mussweiler et al. (2000).

9 Larrick et al. (1990).

10 Gigerenzer & Hoffrage (1995).

11 Sedlmeier (1999).

12 Cheng & Wu (2010).

13 Hasher et al. (1981); Reimers & Butler (1992).

14 Clarkson et al. (2002).

15 Block & Harper (1991); George et al. (2000).

 

References

Block & Harper (1991). Overconfidence in estimation: testing the anchoring-and-adjustment hypothesis. Organizational Behavior and Human Decision Processes, 49: 188–207.

Cheng & Wu (2010). Debiasing the framing effect: The effect of warning and involvement. Decision Support Systems, 49: 328-334.

Chiesi, Primi, & Morsanyi (2011). Developmental changes in probabilistic reasoning: The role of cognitive capacity, instructions, thinking styles, and relevant knowledge. Thinking and Reasoning, 17: 315–350.

Clarkson, Emby, & Watt (2002). Debiasing the effect of outcome knowledge: the role of instructions in an audit litigation setting. Auditing: A Journal of Practice and Theory, 21: 1–14.

De Neys (2006a). Dual processing in reasoning: Two systems but one reasoner. Psychological Science, 17: 428–433.

De Neys (2006b). Automatic-heuristic and executive-analytic processing in reasoning: Chronometric and dual task considerations. Quarterly Journal of Experimental Psychology, 59: 1070–1100.

Fong, Krantz, & Nisbett (1986). The effects of statistical training on thinking about everyday problems. Cognitive Psychology, 18: 253-292.

George, Duffy, & Ahuja (2000). Countering the anchoring and adjustment bias with decision support systems. Decision Support Systems, 29: 195–206.

Gigerenzer & Hoffrage (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102: 684–704.

Hasher, Attig, & Alba (1981). I knew it all along: or did I? Journal of Verbal Learning and Verbal Behavior, 20: 86-96.

Klaczynski (2001). Framing effects on adolescent task representations, analytic and heuristic processing, and decision making: Implications for the normative/descriptive gap. Journal of Applied Developmental Psychology, 22: 289-309.

Koehler (1994). Hypothesis generation and confidence in judgment. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20: 461-469.

Koriat, Lichtenstein, & Fischhoff (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6: 107-118.

Larrick, Morgan, & Nisbett (1990). Teaching the use of cost-benefit reasoning in everyday life. Psychological Science, 1: 362-370.

Lipman (1983). Thinking Skills Fostered by Philosophy for Children. Institute for the Advancement of Philosophy for Children.

Mussweiler, Strack, & Pfeiffer (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26: 1142–50.

Nisbett, Fong, Lehman, & Cheng (1987). Teaching reasoning. Science, 238: 625-631.

Reimers & Butler (1992). The effect of outcome knowledge on auditor's judgmental evaluations. Accounting, Organizations and Society, 17: 185–194.

Sedlmeier (1999). Improving Statistical Reasoning: Theoretical Models and Practical Implications. Erlbaum.

Schoemaker (1979). The role of statistical knowledge in gambling decisions: Moment vs. risk dimension approaches. Organizational Behavior and Human Performance, 24: 1-17.

Soll & Klayman (2004). Overconfidence in interval estimates. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30: 299–314.

Stanovich & West (2000). Individual differences in reasoning: Implications for the rationality debate. Behavioral and Brain Sciences, 23: 645–726.

Stanovich & West (2008a). Evolutionary versus instrumental goals: How evolutionary psychology misconceives human rationality. In Over (ed.), Evolution and the psychology of thinking (pp. 171–230). Psychology Press.

Stanovich & West (2008b). On the failure of cognitive ability to predict myside and one-sided thinking biases. Thinking and Reasoning, 14: 129-167.

Stanovich & West (2008c). On the relative independence of thinking biases and cognitive ability. Journal of Personality and Social Psychology, 94: 672-695.

Willingham (2008). Critical thinking: Why is it so hard to teach? Arts Education Policy Review, 109: 21-32.

Comments

Please paraphrase the conclusion in the introduction. This should be something more like an abstract, so I can get an answer with minimal digging.

The opposite end of this spectrum has network news teasers. "Will your children's hyperbolic discounting affect your retirement? Find out at 11"

I have data from an unpublished experiment on factors affecting calibration. People with higher education levels were better calibrated, and people from more "rational" occupations like "scientist" or "computer programmer" were better calibrated than people from less "rational" occupations like sales or design. People whose job involved working with risk and probabilities directly (e.g. investment banker, weatherman) were just below scientists.

There was an attempt to investigate whether gambling helped, but it got contradictory results: frequent gamblers were worse calibrated on questions like "What percentage chance France is bigger than Germany?" but better calibrated at predicting future events. I don't understand why this happened.

This doesn't distinguish between education and science education making people more rational, and more rational people getting more education and going into science, but it's at least a little positive.

With the gamblers, there could be two factors fighting it out:

1.) Maybe you're more likely to gamble lots in the first place if you're badly calibrated because you think you're more likely to win than you actually are.

2.) But once you gamble regularly (and if you keep doing so) you might need to develop the ability to predict the future states of games.

No 2 seems to explain the calibration with regards to future events. Maybe gamblers are more likely to display overconfidence with regards to other calibration tasks though as this overconfidence helps explain why they would choose to gamble.

Just a guess though, no solid reason to believe it.

It's possible that the camel has two humps (pdf). We don't have much evidence regarding the shapes of the subskill distributions. The connection between "rationality" and debiasing is questionable (correlation versus causality selection effects tag alongs bla bla bla). Beware motivated cognition on this subject.

gwern:

Speaking of the camel has two humps, replication has turned out to be difficult and the idea doesn't look very good anymore: see first quote in http://www.gwern.net/Notes#the-camel-has-two-humps

Sweet, thank you.

Your notion being that being able to avoid bias might not give many practical benefits?

No, I think he's simply saying some subjects have more aptitude for the topic than others, and this distribution may be bimodal - explaining why some results are good and some are bad. None of these studies address whether the epistemic debiasing leads to improved or worsened instrumental results. (That particular idea, that improved epistemic rationality can lead to decreased instrumental rationality, would be the 'Valley of Bad Rationality'.)

Interesting post, I'm glad to see that there is evidence that learning about biases can at least diminish their effect. Most of the things I've read regarding biases seem to imply that they are conquerable ("try really hard to remember that you must"...) without actually presenting evidence that subjects aware of their biases tended to overcome them more effectively. It's as though the burden of evidence only applies to proving the existence of the bias, and hand-waving is sufficient for strategies to conquer it.

I'm mostly curious about the (slightly touched on here) question of whether rationality training of any kind carries over into everyday life and big-picture issues. I don't know that answering baseball questions after taking a statistics course is a good test: Everyone who knows baseball thinks of statistics as clearly applicable to it because you hear them mentioned together constantly. There's a pattern already present for "baseball question"->"statistics answer".

I'd love to see some very general happiness studies done: Would a group of people given a bunch of rationality training beat a control group in happiness/life-satisfaction/goal-achievement after a few years? Would they beat a group of people who were specifically trained in happiness/life-satisfaction/goal-achievement? Would they be happier about the choices they'd made, or just better at arguing why they were the right choices?

Responding just to the title of the post, doesn't the simple question, "Is rationality teachable?" really boil down to:

"Is rationality a characteristic that is to a significant extent accessible to modification through behavioral means?"

And if you try on the answer, "Nope, it's pretty much all hardwired in," it seems pretty obviously ridiculous, so...

Is that a stupid answer?

(I mean, I guess you're not actually asking if rationality is teachable in strict theoretical principle, but: "Are we at lesswrong really doing anything to cause a significantly greater increase in the rationality of our members from where they were when they came to us than would have obtained anyway if they hadn't found us?")

I remember a paper on teaching programming that seems relevant to the question.

http://www.codinghorror.com/blog/2006/07/separating-programming-sheep-from-non-programming-goats.html

The interesting thing here is that either people formed a coherent mental model of what the computer might be doing, or they didn't. Those that did, even if they formed the wrong mental model at first, went on to do well at programming. Those that didn't programmed very poorly, and got even worse as the subject matter grew more complex.

I wouldn't be surprised if the same divide operated here - those who reason rationally about the world in some way can be taught to do better. Those who essentially don't form that kind of mental model of the world at all essentially can't easily be taught to do better.

Already discussed here: http://lesswrong.com/lw/76x/is_rationality_teachable/4qq9 (see especially my comment on replication)

Poker is an excellent teaching vehicle. It really motivates to learn about probabilities, because it makes you win. It teaches you the emotional strength to accept sunk costs, because it reduces your unavoidable losses.

I find poker to be a fantastic teaching tool. You make the best decisions and then are confronted with results that vary wildly in the short-term. Over time though, the correct decisions and behavior pay off. This is a perfect model (in a simplified form) for how things like patience and morality function in the real world. They don't guarantee immediate payoffs and short-term success, they work most often and most reliably over long time frames.

Interesting. It was also suggested on this website that "magic" is a good vehicle for teaching rationality. Do you think there are any other things that can be classified as the same?

Perhaps a list of characteristics that can be used to judge whether an activity is good for rationality development?

[anonymous]:

In many ways learning comes down to motivation, especially with regards to something like rationality. For instance, if a person possesses beliefs which are 'sacred cows', rationality will be very threatening to them. They will not possess the level of motivation required to overcome biases. Trying to teach an individual critical thinking in these circumstances will be necessarily limited until they become motivated to confront those dogmatic and unsupported beliefs. If your goal is to raise the sanity waterline, then eliminating the fears that sustain irrationality is most likely a very crucial part of such a project.

Limited, certainly. Does that negate all the benefit of trying?

Just getting the information out there is important. Compartmentalizations can be broken down in time, and that can be helped along greatly by already having the tools at hand when one DOES get around to questioning. Motivation need not even factor into it--the educational waterline is still higher overall whether or not high school students themselves are particularly motivated to learn the material required. Statistically, at least A FEW of the students are going to wind up learning and remembering it.

You may find it worthwhile to consider that raising the sanity waterline is not about eliminating individual islands of irrationality at all. Those fights are already being fought. But even as we win some of those battles, the war rages on--merely with new and different players. Until you start to dealing with the roots of irrationality rather than the effects, it'll just keep popping up in the latest conspiracy theory/religion/diet fad/etc. Raising the sanity waterline is the instrument by which we're eliminating sacred cows, not the other way around.

EDIT: Of course, this is all assuming Rationality is teachable. Which I'm fairly certain it is, just not very effectively over an internet medium. I see rationality as the sort of thing that is best learned in small teams with a LOT of direct practice exercises. (The latter really necessitates the former. It would be quite ridiculous to NOT capitalize on our social natures to motivate and reinforce principles. The dark side certainly doesn't fail to.) A lot of rationality skills are, in effect, trying to retrain or refine natural impulses. That kind of change doesn't happen overnight, sleeping on what you read from a blog post.

[anonymous]:

Good points. I hadn't really considered the issue of compartmentalization. It makes perfect sense that an individual can learn critical thinking to be applied in numerous areas of life while neglecting it in areas where fear or dogma prohibits its use. Upon considering it, this was certainly the case in my life: I began to question authority and think for myself with regards to government first; then later, when my religious structure began to disappoint, I started to apply, for instance, textual criticism and rejection of authority. And I do agree that rationality can be taught.

Raising the sanity waterline is the instrument by which we're eliminating sacred cows, not the other way around.

You are onto a really good point there. Thanks for the redirection.

Your mention of statistics training helping reminded me of this: http://www.xkcd.com/552/ I particularly like the alt-text. I thought to submit it as a rationality quote, but Google says someone else got there a couple years back :(

The word "bias" is often associated with the word "prejudice," which has become loaded with rather negative associations (people don't like others to think of them as "prejudiced"). As I am not a native English speaker, until two days ago (I have been reading LW for a month) I didn't make a distinction between bias and prejudice, since in my language the two words have more or less the same translation. How can the general public be made to associate "bias" at least partially with "cognitive bias: a pattern of poor judgment," which every human brain has and there is nothing to be ashamed of?

[This comment is no longer endorsed by its author.]

Charles Twardy wrote a paper (that I presented here). The impression I got was a bit more optimistic about the prospect of teaching critical thinking than from this survey.

Ah! I was planning to write a post on just that paper but I see you've done it already. Perhaps I'll write more of an argument mapping tutorial or something, now.

I would also be interested in an argument mapping tutorial. It would be an interesting meetup activity.

I would find an argument mapping tutorial useful.

To detect errors in others' reasoning is natural, to supplement people's lists of reasons to disregard others' conclusions is teachable, to write one's reasoning explicitly is habit.

[-][anonymous]13y00

I have to wonder if there may be two (or more) independent, parallel "thinking systems" in our minds. Analogous to the parallel Words and Rules systems in language that Pinker wrote about. It sometimes seems extremely difficult to think consciously about some subjects and beliefs. Not just to think about some either, but even to think about thinking about them.

[This comment is no longer endorsed by its author.]