
What Intelligence Tests Miss: The psychology of rational thought

36 Kaj_Sotala 11 July 2010 11:01PM

This is the fourth and final part in a mini-sequence presenting Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought.

If you want to give someone a single book to introduce them to the themes and ideas discussed on Less Wrong, What Intelligence Tests Miss is probably the best currently existing book for doing so. It does take a somewhat different view of the study of bias than we do on LW: while Eliezer concentrated on the idea of the map and the territory and on aspiring to the ideal of a perfect decision-maker, Stanovich's perspective treats bias more as a thing that prevents people from taking full advantage of their intelligence. Regardless, for someone less easily persuaded by LW's somewhat abstract ideals, reading Stanovich's concrete examples first and then proceeding to the Sequences is likely to make the content presented in the Sequences much more interesting. Even some of our terminology, such as "carving reality at the joints" and the instrumental/epistemic rationality distinction, will be more familiar to somebody who has first read What Intelligence Tests Miss.

Below is a chapter-by-chapter summary of the book.

Inside George W. Bush's Mind: Hints at What IQ Tests Miss is a brief introductory chapter. It starts with the example of President George W. Bush, mentioning that the president's opponents frequently argued against his intelligence, while even his supporters implicitly conceded the point by arguing that even though he didn't have "school smarts", he did have "street smarts". Both groups were purportedly surprised when it was revealed that the president's IQ was around 120, roughly the same as that of his opponent in the 2004 presidential election, John Kerry. Stanovich then goes on to say that this should not be surprising, for IQ tests do not tap the tendency to actually think in an analytical manner, and that IQ has been overvalued as a concept. For instance, university admissions frequently depend on tests such as the SAT, which are pretty much pure IQ tests. The chapter ends with a disclaimer: the book is not an attempt to say that IQ tests measure nothing important, or that there are many different kinds of intelligence. IQ does measure something real and important, but that doesn't change the fact that people overvalue it and are generally confused about what it actually measures.

Dysrationalia: Separating Rationality and Intelligence talks about the phenomenon informally described as being "smart but acting stupid". Stanovich notes that if we used a broad definition of intelligence, where intelligence simply meant acting in an optimal manner, this expression wouldn't make any sense. Rather, it's a sign that people are intuitively aware of IQ and rationality as two separate qualities. Stanovich then brings up the concept of dyslexia, which the DSM-IV defines as "reading achievement that falls substantially below that expected given the individual's chronological age, measured intelligence, and age-appropriate education". Similarly, the diagnostic criterion for mathematics disorder (dyscalculia) is "mathematical ability that falls substantially below that expected for the individual's chronological age, measured intelligence, and age-appropriate education". He argues that since we have a precedent for creating new disability categories when someone's ability in an important skill domain falls below what would be expected for their intelligence, it would make sense to also have a category for "dysrationalia":

Dysrationalia is the inability to think and behave rationally despite adequate intelligence. It is a general term that refers to a heterogenous group of disorders manifested by significant difficulties in belief formation, in the assessment of belief consistency, and/or in the determination of action to achieve one's goals. Although dysrationalia may occur concomitantly with other handicapping conditions (e.g. sensory impairment), dysrationalia is not the result of those conditions. The key diagnostic criterion for dysrationalia is a level of rationality, as demonstrated in thinking and behavior, that is significantly below the level of the individual's intellectual capacity (as determined by an individually administered IQ test).


A Taxonomy of Bias: Mindware Problems

15 Kaj_Sotala 07 July 2010 09:53PM

This is the third part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.

Noting that there are many different kinds of bias, Keith Stanovich proposes a classification scheme for bias that has two primary categories: the Cognitive Miser, and Mindware Problems. Last time, I discussed the Cognitive Miser category. Today, I will discuss Mindware Problems, which has the subcategories of Mindware Gaps and Contaminated Mindware.

Mindware Problems

Stanovich defines "mindware" as "a generic label for the rules, knowledge, procedures, and strategies that a person can retrieve from memory in order to aid decision making and problem solving".

Mindware Gaps

Previously, I mentioned two tragic cases. In one, a pediatrician incorrectly testified that the odds of two children in the same family dying of sudden infant death syndrome were 73 million to one. In the other, people bought into a story of "facilitated communication" helping previously non-verbal children to communicate, without examining it critically. Stanovich uses these two as examples of a mindware gap. The people involved were lacking critical mindware: in one case, that of probabilistic reasoning; in the other, that of scientific thinking. One of the reasons why so many intelligent people can act in an irrational manner is that they're simply missing the mindware necessary for rational decision-making.
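
To see the probabilistic error concretely: a figure like 73 million to one comes from squaring the probability of a single such death, which implicitly assumes the two deaths are independent. If some shared genetic or environmental factor is at work, a second death is far more likely given a first one. Here is a minimal sketch of the arithmetic (the per-family rate and the correlation multiplier are illustrative assumptions, not figures from the book):

```python
# Illustrative only: shows why squaring a single-event probability
# understates the odds of two events when they are not independent.

p_single = 1 / 8543          # assumed rate of one such death in a family

# Naive calculation: treat the two deaths as independent events.
p_naive = p_single ** 2
print(f"Naive 'independent' estimate: 1 in {1 / p_naive:,.0f}")
# -> roughly 1 in 73 million, the kind of figure quoted in court

# If a shared genetic/environmental factor makes a second death, say,
# 100x more likely once a first has occurred (a made-up multiplier),
# the conditional probability changes the answer dramatically.
p_second_given_first = p_single * 100
p_correlated = p_single * p_second_given_first
print(f"With correlated risk: 1 in {1 / p_correlated:,.0f}")
```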

Much of the useful mindware is a matter of knowledge: knowledge of Bayes' theorem, of the need to consider alternative hypotheses and falsifiability, of the conjunction fallacy, and so on. Stanovich also mentions something he calls strategic mindware, which refers to the disposition towards engaging the reflective mind in problem solving. Such dispositions were previously discussed as thinking dispositions, and some of them can be measured by performance-based tasks. For instance, in the Matching Familiar Figures Test (MFFT), participants are presented with a picture of an object and told to find the correct match from an array of six other similar pictures. Reflective people have long response times and few errors, while impulsive people have short response times and numerous errors. These types of mindware are closer to strategies, tendencies, procedures, and dispositions than to knowledge structures.
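
As a concrete example of the kind of probabilistic mindware being discussed, here is Bayes' theorem applied to a classic base-rate problem (a sketch using standard textbook numbers, not an example from the book):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Classic base-rate example (illustrative numbers): a disease affects
# 1 in 1000 people, and a test detects it 99% of the time with a
# 5% false-positive rate. How likely is disease given a positive test?

p_disease = 0.001                # prior: base rate of the disease
p_pos_given_disease = 0.99       # test sensitivity
p_pos_given_healthy = 0.05       # false-positive rate

# Total probability of a positive test result:
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")  # ~0.019
```

Despite the impressive-sounding 99% detection rate, a positive result leaves less than a 2% chance of disease, because the low base rate dominates. Ignoring that base rate is exactly the kind of error this mindware guards against.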

Stanovich identifies mindware gaps as being involved in at least conjunction errors and the neglect of base rates (missing probabilistic knowledge), as well as in the Wason selection task and confirmation bias (failing to consider alternative hypotheses). Incorrect lay psychological theories are identified as a combination of a mindware gap and contaminated mindware (see below). For instance, people are often blind to their own biases, because they incorrectly think that biased thinking on their part would be detectable by conscious introspection. In addition to the bias blind spot, lay psychological theory is likely to be involved in errors of affective forecasting (the forecasting of one's future emotional states).
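
The conjunction errors mentioned above all violate a single rule: a conjunction can never be more probable than either of its conjuncts. A short sketch (the probabilities are arbitrary placeholders):

```python
# Conjunction rule: P(A and B) <= min(P(A), P(B)) for any events A, B.
# The conjunction fallacy is judging P(A and B) > P(A), as in the
# famous "Linda the feminist bank teller" problem.

p_bank_teller = 0.05             # arbitrary placeholder probability
p_feminist_given_teller = 0.30   # arbitrary placeholder probability

p_both = p_bank_teller * p_feminist_given_teller
assert p_both <= p_bank_teller   # always holds, whatever the numbers

print(f"P(bank teller)              = {p_bank_teller:.3f}")
print(f"P(bank teller AND feminist) = {p_both:.3f}")
```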


A Taxonomy of Bias: The Cognitive Miser

52 Kaj_Sotala 02 July 2010 06:38PM

This is the second part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.

Noting that there are many different kinds of bias, Keith Stanovich proposes a classification scheme for bias that has two primary categories: the Cognitive Miser, and Mindware Problems. Today, I will discuss the Cognitive Miser category, which has the subcategories of Default to the Autonomous Mind, Serial Associative Cognition with a Focal Bias, and Override Failure.

The Cognitive Miser

Cognitive science suggests that our brains use two different kinds of systems for reasoning: Type 1 and Type 2. Type 1 is quick, dirty and parallel, and requires little energy. Type 2 is energy-consuming, slow and serial. Because Type 2 processing is expensive and can only work on one or at most a couple of things at a time, humans have evolved to default to Type 1 processing whenever possible. We are "cognitive misers" - we avoid unnecessarily spending Type 2 cognitive resources and prefer to use Type 1 heuristics, even though this might be harmful in a modern-day environment.
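
For readers who think in code, the arrangement resembles a fast-path/slow-path design, where a cheap heuristic answers queries by default and an expensive exact computation runs only on demand. This is merely an analogy to the dual-process picture, not something from the book:

```python
# Analogy only: a cheap "Type 1" heuristic versus a costly "Type 2"
# exact computation, with the system defaulting to the cheap path.

def type1_estimate(items):
    """Fast heuristic: guess the total from a small sample."""
    sample = items[:10]
    return sum(sample) / len(sample) * len(items) if sample else 0

def type2_exact(items):
    """Slow, serial, exact computation over everything."""
    return sum(items)

def answer(items, effortful=False):
    # The "cognitive miser" default: engage Type 2 only when forced to.
    return type2_exact(items) if effortful else type1_estimate(items)

data = list(range(1000))
print(answer(data))                  # quick, but possibly badly off
print(answer(data, effortful=True))  # accurate, but costs more work
```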

Stanovich further subdivides Type 2 processing into what he calls the algorithmic mind and the reflective mind. He argues that the reason why high-IQ people can fall prey to bias almost as easily as low-IQ people is that intelligence tests measure the effectiveness of the algorithmic mind, whereas many reasons for bias can be found in the reflective mind. An important function of the algorithmic mind is to carry out cognitive decoupling - to create copies of our mental representations about things, so that the copies can be used in simulations without affecting the original representations. For instance, a person wondering how to get a fruit down from a high tree will imagine various ways of getting to the fruit, and by doing so he operates on a mental concept that has been copied and decoupled from the concept of the actual fruit. Even when he imagines the things he might do to the fruit, he never confuses the fruit he has imagined in his mind with the fruit that's still hanging in the tree (the two concepts are decoupled). If he did, he might end up believing that he could get the fruit down by simply imagining himself taking it down. High performance on IQ tests indicates an advanced ability for cognitive decoupling.
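
Cognitive decoupling has a close analogue in software: simulate on a copy of the state, so that the original is never mutated. A minimal sketch of the pattern (purely illustrative; the world model and plan are made up):

```python
import copy

# Analogy for cognitive decoupling: run simulations on a *copy* of a
# representation so that imagined actions never alter the original.

world = {"fruit": {"location": "tree", "height_m": 4.0}}

def simulate(plan, state):
    """Try a plan on a decoupled copy of the world state."""
    hypothetical = copy.deepcopy(state)   # the "decoupling" step
    plan(hypothetical)
    return hypothetical

def climb_and_grab(state):
    state["fruit"]["location"] = "hand"

outcome = simulate(climb_and_grab, world)
print(outcome["fruit"]["location"])   # 'hand' - in the simulation
print(world["fruit"]["location"])     # 'tree' - reality unchanged
```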

In contrast, the reflective mind embodies various higher-level goals as well as thinking dispositions. Psychological tests of thinking dispositions measure things such as the tendency to collect information before making up one's mind, the tendency to seek various points of view before coming to a conclusion, the disposition to think extensively about a problem before responding, the tendency to calibrate the strength of one's opinions to the degree of evidence available, the tendency to think about future consequences before taking action, the tendency to explicitly weigh the pluses and minuses of a situation before making a decision, and the tendency to seek nuance and avoid absolutism. All else being equal, a high-IQ person would have a better chance of avoiding bias if they stopped to think things through, but higher algorithmic efficiency doesn't help them if it's not in their nature to ever bother doing so. In tests of rational thinking where the subjects are explicitly instructed to consider the issue in a detached and objective manner, there's a correlation of .3 to .4 between IQ and test performance. But if such instructions are not given, and people are free to reason in a biased or unbiased way as they wish (as in real life), the correlation between IQ and rationality falls to nearly zero!


What Cost for Irrationality?

60 Kaj_Sotala 01 July 2010 06:25PM

This is the first part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.

People who care a lot about rationality may frequently be asked why they do so. There are various answers, but I think that many of the ones discussed here won't be very persuasive to people who don't already have an interest in the issue. But in real life, most people don't try to stay healthy because of various far-mode arguments for the virtue of health: instead, they try to stay healthy in order to avoid various concrete forms of illness. In the same spirit, I present you with a list of real-world events that have been caused by failures of rationality, so that you might better persuade others that this is important.

What happens if you, or the people around you, are not rational? Well, in order from least serious to worst, you may...

Have a worse quality of life. Status quo bias is a general human tendency to prefer the default state, regardless of whether the default is actually good or not. In the 1980s, Pacific Gas and Electric conducted a survey of their customers. Because the company was serving a lot of people in a variety of regions, some of their customers suffered from more outages than others. Pacific Gas asked customers with unreliable service whether they'd be willing to pay extra for more reliable service, and customers with reliable service whether they'd be willing to accept less reliable service in exchange for a discount. The customers were presented with rate changes of various percentages and asked which ones they'd be willing to accept. The percentages were the same for both groups, with one group offered increases and the other decreases. Even though both groups had the same income, customers of both groups overwhelmingly wanted to stay with their status quo. Yet the service difference between the groups was large: the unreliable-service group suffered 15 outages per year of 4 hours' average duration, while the reliable-service group suffered 3 outages per year of 2 hours' average duration! (Though note caveats.)
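
The size of that service gap is easy to quantify from the figures above: the unreliable group endured ten times as many outage hours per year.

```python
# Expected annual outage hours for each group, from the figures above.
unreliable_hours = 15 * 4   # 15 outages/year, 4 hours average each
reliable_hours = 3 * 2      # 3 outages/year, 2 hours average each

print(f"{unreliable_hours} vs {reliable_hours} hours per year "
      f"({unreliable_hours // reliable_hours}x difference)")
```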

A study by Philips Electronics found that half of their returned products had nothing wrong with them; the consumers simply couldn't figure out how to use the devices. This can be partially explained by egocentric bias on the part of the engineers. Cognitive scientist Chip Heath notes that he has "a DVD remote control with 52 buttons on it, and every one of them is there because some engineer along the line knew how to use that button and believed I would want to use it, too. People who design products are experts... and they can't imagine what it's like to be as ignorant as the rest of us."

Suffer financial harm. John Allen Paulos is a professor of mathematics at Temple University. Yet he fell prey to serious irrationality, which began when he purchased WorldCom stock at $47 per share in early 2000. As bad news about the industry began mounting, WorldCom's stock price started falling - and as it did so, Paulos kept buying, regardless of the accumulating evidence that he should be selling. Later on, he admitted that his "purchases were not completely rational" and that "I bought shares even though I knew better". He was still buying - partially with borrowed money - when the stock price was $5. When it momentarily rose to $7, he finally decided to sell. Unfortunately, he didn't get off work until after the market had closed, and by the next market day the stock had lost a third of its value. Paulos finally sold everything, at a huge loss.
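
The arithmetic of averaging down helps show why this pattern is so punishing. A sketch with hypothetical purchase points (the buy sizes and intermediate prices are invented; only the $47 start, the $5 low, and the missed $7 sale come from the account above):

```python
# Hypothetical illustration of averaging down on a falling stock.
# Buy sizes and intermediate prices are made up; only the $47 start,
# the $5 low, and the missed $7 sale come from the account above.

buys = [(47, 100), (30, 100), (15, 100), (5, 100)]  # (price, shares)

shares = sum(n for _, n in buys)
cost = sum(p * n for p, n in buys)

sale_price = 7 * (2 / 3)        # the $7 target, after the overnight
proceeds = sale_price * shares  # one-third drop described above

print(f"Average cost per share: ${cost / shares:.2f}")
print(f"Loss: ${cost - proceeds:,.0f} ({1 - proceeds / cost:.0%})")
```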
