A Taxonomy of Bias: Mindware Problems

14 Kaj_Sotala 07 July 2010 09:53PM

This is the third part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.

Noting that there are many different kinds of bias, Keith Stanovich proposes a classification scheme for bias that has two primary categories: the Cognitive Miser, and Mindware Problems. Last time, I discussed the Cognitive Miser category. Today, I will discuss Mindware Problems, which has the subcategories of Mindware Gaps and Contaminated Mindware.

Mindware Problems

Stanovich defines "mindware" as "a generic label for the rules, knowledge, procedures, and strategies that a person can retrieve from memory in order to aid decision making and problem solving".

Mindware Gaps

Previously, I mentioned two tragic cases. In one, a pediatrician incorrectly testified that the odds of two children in the same family both dying of sudden infant death syndrome were 73 million to 1. In the other, people bought into a story of "facilitated communication" helping previously non-verbal children to communicate, without examining it critically. Stanovich uses these two as examples of a mindware gap. The people involved were lacking critical mindware: in one case, that of probabilistic reasoning; in the other, that of scientific thinking. One of the reasons why so many intelligent people can act in an irrational manner is that they're simply missing the mindware necessary for rational decision-making.

Much of the useful mindware is a matter of knowledge: knowledge of Bayes' theorem, taking into account alternative hypotheses and falsifiability, awareness of the conjunction fallacy, and so on. Stanovich also mentions something he calls strategic mindware, which refers to the disposition towards engaging the reflective mind in problem solving. These were previously mentioned as thinking dispositions, and some of them can be measured by performance-based tasks. For instance, in the Matching Familiar Figures Test (MFFT), participants are presented with a picture of an object, and told to find the correct match from an array of six other similar pictures. Reflective people have long response times and few errors, while impulsive people have short response times and numerous errors. These types of mindware are closer to strategies, tendencies, procedures, and dispositions than to knowledge structures.

Stanovich identifies mindware gaps as being involved in at least conjunction errors and the ignoring of base rates (missing probability knowledge), as well as the Wason selection task and confirmation bias (not considering alternative hypotheses). Incorrect lay psychological theories are identified as a combination of a mindware gap and contaminated mindware (see below). For instance, people are often blind to their own biases, because they incorrectly think that biased thinking on their part would be detectable by conscious introspection. In addition to the bias blind spot, lay psychological theory is likely to be involved in errors of affective forecasting (the forecasting of one's future emotional state).
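The probability mindware at issue here can be made concrete with a quick base-rate calculation. A sketch of the classic diagnostic-test problem (the prevalence and test accuracies below are illustrative assumptions, not numbers from the book):

```python
# Base-rate neglect: P(disease | positive test) is far lower than the
# test's accuracy suggests, because the disease itself is rare.
# All numbers below are illustrative assumptions.

prevalence = 0.01        # P(disease)
sensitivity = 0.90       # P(positive | disease)
false_positive = 0.09    # P(positive | no disease)

# Bayes' theorem: P(disease | positive)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {posterior:.3f}")  # about 0.092
```

Someone missing this mindware will intuitively answer "about 90%", anchoring on the test's sensitivity and ignoring the 1% base rate entirely.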


Assuming Nails

6 Psychohistorian 05 July 2010 10:26PM

Tangential followup to Defeating Ugh Fields in Practice.
Somewhat related to Privileging the Hypothesis.

Edited to add:
I'm surprised by negative/neutral reviews. This means that either I'm simply wrong about what counts as interesting, or I haven't expressed my point very well. Based on commenter response, I think the problem is the latter. In the next week or so, expect a much more concise version of this post that expresses my point about epistemology without the detour through a criticism of economics.

At the beginning of my last post, I was rather uncharitable to neoclassical economics:

If I had to choose a single piece of evidence off of which to argue that the rationality assumption of neoclassical economics is totally, irretrievably incorrect, it's this article about financial incentives and medication compliance.... [to maintain that this theory is correct] is to crush reality into a theory that cannot hold it.   

Some mistook this to mean that I believe neoclassical economists honestly, explicitly believe that all people are always totally rational. But, to quote Rick Moranis, "It's not what you think. It's far, far worse." The problem is that they often take the complex framework of neoclassical economics and believe that a valid deduction within this framework is a valid deduction about the real world. However, deductions within any given framework are entirely uninformative unless the framework corresponds to reality. Yet because such deductions are internally valid, we often give them far more weight than they are due. Testing the fit of a theoretical framework to reality is hard, but a valid deduction within a framework feels so very satisfying. Even if you have a fantastically engineered hammer, you cannot go around assuming that everything you want to use it on is a nail. It is all too common for experts to assume that their framework applies cleanly to the real world simply because it works so well in its own world.

If this concept doesn't make perfect sense, that's what the rest of this post is about: spelling out exactly how we go wrong when we misuse the essentially circular models of many sciences, and how this matters. We will begin with the one discipline that appears immune to this type of problem: mathematics, the paragon of "pure" academic disciplines. This is principally because mathematics appears to have perfect conformity with reality, with no research or experimentation needed to ensure said conformity. The entire system of mathematics exists, in a sense, in its own world. You could sit in a windowless room (perhaps one with a supercomputer) and, theoretically, derive every major theorem of mathematics, given the proper axioms. The answers to the most difficult unsolved problems in mathematics were determined the moment the terms and operators within them were defined - once you say a "circle" is "the set of all points in a plane equidistant from a center," you have already determined every single digit of pi. The problem is finding out exactly how this model works - making calculations and deductions within this model. In the case of mathematics, for whatever reason, the model conforms perfectly to the real world, so any valid mathematical deduction is a valid deduction in the real world.

This is not the case in any true science, which by necessity must rely on experiment and observation. Every science operates off of some simplified model of the world, at least with our current state of knowledge. This creates two avenues of progress: discoveries within the model, which allow one to make predictions about the world, and refinements of the model, which make such predictions more accurate. If we have an internally consistent framework, theoretical manipulation within our model will never show us our error, because our model is circular and functions outside the real world. It would be like trying to predict a stock market crash by analyzing the rules of Monopoly, except that it doesn't feel absurd. There's nothing wrong with the model qua the model, the problem is with the model qua reality, and we have to look at both of them to figure that out.

Economics is one of the fields that most suffers from this problem. Our mathematician in his windowless room could generate models of international exchange rates without ever having seen currency, once we gave him the appropriate definitions and assumptions. However, when we try using these models to forecast the future, life gets complicated. No amount of experimenting within our original model will fix this without looking at the real world. At best, we come up with some equations that appear to conform to what we observe, but we run the risk that the correspondence is incidental or that there were some (temporarily) constant variables we left out that will suddenly cease to be constant and break the whole model. It is all too easy to forget that the tremendous rigor and certainty we feel when we solve the equations of our model does not translate into the real world.  Getting the "right" answer within the model is not the same thing as getting the real answer.
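The "temporarily constant variable" failure mode is easy to demonstrate in miniature. In this toy sketch (all numbers invented for illustration), a model fitted while a hidden factor happens to be constant predicts perfectly in-sample, then breaks the moment that factor moves:

```python
# A toy process: output depends on x and on a hidden factor h.
def true_output(x, h):
    return 2.0 * x + 5.0 * h

# "Historical" data: h happened to equal 1 the whole time.
xs = [1, 2, 3, 4]
ys = [true_output(x, h=1) for x in xs]

# Fit y = a*x + b from two points (enough for this exact toy data).
a = (ys[1] - ys[0]) / (xs[1] - xs[0])   # slope
b = ys[0] - a * xs[0]                    # intercept silently absorbs 5*h

in_sample_error = max(abs(a * x + b - y) for x, y in zip(xs, ys))

# The hidden factor shifts: the fitted model is suddenly wrong by 5 units.
x_new, h_new = 5, 2
prediction = a * x_new + b
actual = true_output(x_new, h_new)
print(in_sample_error, actual - prediction)   # 0.0 5.0
```

The fit was flawless within the model's own world; nothing inside the model could have warned us about h.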

As an obvious practical example, an individual with a serious excess of free time could develop a model of economics which assumes that agents are rational paper-clip maximizers - that agents are rational and their ultimate concern is maximizing the number of existing paper-clips. Given even more free time and a certain amount of genius, you could even model the behaviour of irrational paper-clip maximizers, so long as you had a definition of irrational. But however refined these models are, they will remain entirely useless unless you actually have some paper-clip maximizers whose behaviour you want to predict. And even then, you would need to evaluate your predictions after they succeed or fail. Developing a great hammer is relatively useless if the thing you need to make must be put together with screws.

There is an obvious difference in the magnitude of this problem between the sciences, and it seems to be based on the difficulty of experimenting within them. In harder sciences where experiments are fairly straightforward, like physics and chemistry, it is not terribly difficult to make models that conform well with reality. The bleeding edge of, say, physics tends to lie in areas that are either extremely hard to observe, like the subatomic, or extremely computation-intensive. In softer sciences, experiments are very difficult, and our models rely much more on powerful assumptions, social values, and armchair reasoning.

As humans, we are both bound and compelled to use the tools we have at our disposal. The problem here is one of uncertainty. We know that most of our assumptions in economics are empirically off, but we don't know how wrong they are or how much that matters when we make predictions. The framework nevertheless seeps into the very core of our model of reality itself. We cannot feel this disconnect when we try to make predictions; a well-designed model feels so complete that there is no feeling of error when we try to apply it. This is likely because we are applying the model correctly - it is the model itself that fails to apply to reality. This leads people to have high degrees of certainty and yet frequently be wrong. It would not surprise me if the failure of many experts to appreciate the model-reality gap is responsible for a large proportion of incorrect predictions.

This, unfortunately, is not the end of the problem. It gets much worse when you add a normative element into your model - when you get to call some things "efficient," "healthful," "normal," or "insane." There is also a serious question as to whether this false certainty is preferable to the vague unfalsifiability of even softer social sciences. But I shall save these subjects for future posts.

 

A Taxonomy of Bias: The Cognitive Miser

52 Kaj_Sotala 02 July 2010 06:38PM

This is the second part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.

Noting that there are many different kinds of bias, Keith Stanovich proposes a classification scheme for bias that has two primary categories: the Cognitive Miser, and Mindware Problems. Today, I will discuss the Cognitive Miser category, which has the subcategories of Default to the Autonomous Mind, Serial Associative Cognition with a Focal Bias, and Override Failure.

The Cognitive Miser

Cognitive science suggests that our brains use two different kinds of systems for reasoning: Type 1 and Type 2. Type 1 is quick, dirty and parallel, and requires little energy. Type 2 is energy-consuming, slow and serial. Because Type 2 processing is expensive and can only work on one or at most a couple of things at a time, humans have evolved to default to Type 1 processing whenever possible. We are "cognitive misers" - we avoid unnecessarily spending Type 2 cognitive resources and prefer to use Type 1 heuristics, even though this might be harmful in a modern-day environment.

Stanovich further subdivides Type 2 processing into what he calls the algorithmic mind and the reflective mind. He argues that the reason why high-IQ people can fall prey to bias almost as easily as low-IQ people is that intelligence tests measure the effectiveness of the algorithmic mind, whereas many reasons for bias can be found in the reflective mind. An important function of the algorithmic mind is to carry out cognitive decoupling - to create copies of our mental representations about things, so that the copies can be used in simulations without affecting the original representations. For instance, a person wondering how to get a fruit down from a high tree will imagine various ways of getting to the fruit, and by doing so he operates on a mental concept that has been copied and decoupled from the concept of the actual fruit. Even when he imagines the things he might do to the fruit, he never confuses the fruit he has imagined in his mind with the fruit that's still hanging in the tree (the two concepts are decoupled). If he did, he might end up believing that he could get the fruit down by simply imagining himself taking it down. High performance on IQ tests indicates an advanced ability for cognitive decoupling.

In contrast, the reflective mind embodies various higher-level goals as well as thinking dispositions. Various psychological tests of thinking dispositions measure things such as the tendency to collect information before making up one's mind, the tendency to seek various points of view before coming to a conclusion, the disposition to think extensively about a problem before responding, the tendency to calibrate the degree of strength of one's opinion to the degree of evidence available, the tendency to think about future consequences before taking action, the tendency to explicitly weigh pluses and minuses of situations before making a decision, and the tendency to seek nuance and avoid absolutism. All things being equal, a high-IQ person would have a better chance of avoiding bias if they stopped to think things through, but a higher algorithmic efficiency doesn't help them if it's not in their nature to ever bother doing so. In tests of rational thinking where the subjects are explicitly instructed to consider the issue in a detached and objective manner, there's a correlation of .3 - .4 between IQ and test performance. But if such instructions are not given, and people are free to reason in a biased or unbiased way as they wish (like in real life), the correlation between IQ and rationality falls to nearly zero!


What Cost for Irrationality?

59 Kaj_Sotala 01 July 2010 06:25PM

This is the first part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.

People who care a lot about rationality may frequently be asked why they do so. There are various answers, but I think that many of the ones discussed here won't be very persuasive to people who don't already have an interest in the issue. In real life, most people don't try to stay healthy because of various far-mode arguments for the virtue of health: instead, they try to stay healthy in order to avoid various forms of illness. In the same spirit, I present you with a list of real-world events that have been caused by failures of rationality, so that you might better persuade others of the importance of the subject.

What happens if you, or the people around you, are not rational? Well, in order from least serious to worst, you may...

Have a worse quality of living. Status quo bias is a general human tendency to prefer the default state, regardless of whether the default is actually good or not. In the 1980s, Pacific Gas and Electric conducted a survey of their customers. Because the company was serving a lot of people in a variety of regions, some of their customers suffered from more outages than others. Pacific Gas asked customers with unreliable service whether they'd be willing to pay extra for more reliable service, and customers with reliable service whether they'd be willing to accept a less reliable service in exchange for a discount. The customers were presented with increases and decreases of various percentages, and asked which ones they'd be willing to accept. The percentages were the same for both groups, except that one group was offered increases and the other decreases. Even though both groups had the same income, customers of both groups overwhelmingly wanted to stay with their status quo. Yet the service difference between the groups was large: the unreliable-service group suffered 15 outages per year of 4 hours' average duration, while the reliable-service group suffered 3 outages per year of 2 hours' average duration! (Though note caveats.)

A study by Philips Electronics found that one half of their returned products had nothing wrong with them; the consumers simply couldn't figure out how to use the devices. This can be partially explained by egocentric bias on behalf of the engineers. Cognitive scientist Chip Heath notes that he has "a DVD remote control with 52 buttons on it, and every one of them is there because some engineer along the line knew how to use that button and believed I would want to use it, too. People who design products are experts... and they can't imagine what it's like to be as ignorant as the rest of us."

Suffer financial harm. John Allen Paulos is a professor of mathematics at Temple University. Yet he fell prey to serious irrationality which began when he purchased WorldCom stock at $47 per share in early 2000. As bad news about the industry began mounting, WorldCom's stock price started falling - and as it did so, Paulos kept buying, regardless of accumulating evidence that he should be selling. Later on, he admitted that his "purchases were not completely rational" and that "I bought shares even though I knew better". He was still buying - partially on borrowed money - when the stock price was $5. When it momentarily rose to $7, he finally decided to sell. Unfortunately, he didn't get off work until the market closed, and by the next market day the stock had lost a third of its value. Paulos finally sold everything, at a huge loss.


Are these cognitive biases, biases?

35 Kaj_Sotala 23 December 2009 05:27PM

Continuing my special report on people who don't think human reasoning is all that bad, I'll now briefly present some studies arguing that phenomena which other researchers have considered signs of faulty reasoning are in fact nothing of the sort. I found these in Gigerenzer (2004), which I in turn found when I went looking for further work done on the Take the Best algorithm.
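For readers unfamiliar with it, Take the Best is a one-reason decision heuristic: to compare two alternatives, check cues in descending order of validity and decide on the first cue that discriminates. A minimal sketch (the cue values and their ordering below are invented for illustration, not taken from Gigerenzer's data):

```python
# Take the Best: walk through cues from most to least valid;
# the first cue that discriminates between the objects decides.
# Cues return True/False/None (None = unknown). Illustrative data only.

def take_the_best(a, b, cues):
    """cues: list of functions ordered by descending validity."""
    for cue in cues:
        ca, cb = cue(a), cue(b)
        if ca is not None and cb is not None and ca != cb:
            return a if ca else b   # pick the object the cue favors
    return None  # no cue discriminates: guess

# Which German city is larger? (toy cue values)
cities = {
    "Hamburg": {"capital": False, "team": True,  "exposition": True},
    "Leipzig": {"capital": False, "team": False, "exposition": True},
}
cues = [
    lambda c: cities[c]["capital"],     # assumed highest validity
    lambda c: cities[c]["exposition"],
    lambda c: cities[c]["team"],        # first cue that discriminates here
]
print(take_the_best("Hamburg", "Leipzig", cues))  # → Hamburg
```

The heuristic ignores all remaining cues once one discriminates, which is exactly what makes it "fast and frugal" compared to integrating every piece of evidence.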

Before we get to the list - what is Gigerenzer's exact claim when he lists these previous studies? Well, he's saying that minds aren't actually biased, but may make judgments that seem biased in certain environments.

Table 4.1 Twelve examples of phenomena that were first interpreted as "cognitive illusions" but later revalued as reasonable judgments given the environmental structure. [...]

The general argument is that an unbiased mind plus environmental structure (such as unsystematic error, unequal sample sizes, skewed distributions) is sufficient to produce the phenomenon. Note that other factors can also contribute to some of the phenomena. The moral is not that people would never err, but that in order to understand good and bad judgments, one needs to analyze the structure of the problem or of the natural environment.

On to the actual examples. Of the twelve examples referenced, I've included three for now.


Fundamentally Flawed, or Fast and Frugal?

41 Kaj_Sotala 20 December 2009 03:10PM

Whenever biases are discussed around here, it tends to happen under the following framing: human cognition is a dirty, jury-rigged hack, only barely managing to approximate the laws of probability even in a rough manner. We have plenty of biases, many of them a result of adaptations that evolved to work well in the Pleistocene, but are hopelessly broken in a modern-day environment.

That's one interpretation. But there's also a different interpretation: that a perfect Bayesian reasoner is computationally intractable, and our mental algorithms make an excellent, possibly close to optimal, use of the limited computational resources we happen to have available. It's not that the programming is bad; it's simply that you can't do much better without upgrading the hardware. In the interest of fairness, I will be presenting this view by summarizing a classic 1996 Psychological Review article, "Reasoning the Fast and Frugal Way: Models of Bounded Rationality" by Gerd Gigerenzer and Daniel G. Goldstein. It begins by discussing two contrasting views: the Enlightenment ideal of the human mind as the perfect reasoner, versus the heuristics and biases program that considers human cognition as a set of quick-and-dirty heuristics.

Many experiments have been conducted to test the validity of these two views, identifying a host of conditions under which the human mind appears more rational or irrational. But most of this work has dealt with simple situations, such as Bayesian inference with binary hypotheses, one single piece of binary data, and all the necessary information conveniently laid out for the participant (Gigerenzer & Hoffrage, 1995). In many real-world situations, however, there are multiple pieces of information, which are not independent, but redundant. Here, Bayes’ theorem and other “rational” algorithms quickly become mathematically complex and computationally intractable, at least for ordinary human minds. These situations make neither of the two views look promising. If one would apply the classical view to such complex real-world environments, this would suggest that the mind is a supercalculator like a Laplacean Demon (Wimsatt, 1976)— carrying around the collected works of Kolmogoroff, Fisher, or Neyman—and simply needs a memory jog, like the slave in Plato’s Meno. On the other hand, the heuristics-and-biases view of human irrationality would lead us to believe that humans are hopelessly lost in the face of real-world complexity, given their supposed inability to reason according to the canon of classical rationality, even in simple laboratory experiments.

There is a third way to look at inference, focusing on the psychological and ecological rather than on logic and probability theory. This view questions classical rationality as a universal norm and thereby questions the very definition of “good” reasoning on which both the Enlightenment and the heuristics-and-biases views were built. Herbert Simon, possibly the best-known proponent of this third view, proposed looking for models of bounded rationality instead of classical rationality. Simon (1956, 1982) argued that information-processing systems typically need to satisfice rather than optimize. Satisficing, a blend of sufficing and satisfying, is a word of Scottish origin, which Simon uses to characterize algorithms that successfully deal with conditions of limited time, knowledge, or computational capacities. His concept of satisficing postulates, for instance, that an organism would choose the first object (a mate, perhaps) that satisfies its aspiration level—instead of the intractable sequence of taking the time to survey all possible alternatives, estimating probabilities and utilities for the possible outcomes associated with each alternative, calculating expected utilities, and choosing the alternative that scores highest.
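The difference between satisficing and optimizing is easy to show in code. In this sketch (the aspiration level and the random utilities are arbitrary assumptions), the satisficer stops at the first option good enough, while the optimizer must inspect every option to be sure of the maximum:

```python
import random

# Satisficing vs. optimizing over a stream of options.
# Utilities are just random numbers; the aspiration level is an assumption.

random.seed(0)
options = [random.random() for _ in range(1000)]

# Satisficer: take the first option that meets the aspiration level.
ASPIRATION = 0.9
satisficed = next(x for x in options if x >= ASPIRATION)
inspected = options.index(satisficed) + 1

# Optimizer: must examine every option to guarantee the maximum.
best = max(options)

print(f"satisficer looked at {inspected} options, got {satisficed:.2f}")
print(f"optimizer looked at {len(options)} options, got {best:.2f}")
```

The satisficer gives up a little utility in exchange for a drastic cut in search cost - which is the whole point of Simon's bounded rationality.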


The persuasive power of false confessions

10 matt 11 December 2009 01:54AM

First paragraph from a Mind Hacks post:

The APS Observer magazine has a fantastic article on the power of false confessions to warp our perception of other evidence in a criminal case to the point where expert witnesses will change their judgements of unrelated evidence to make it fit the false admission of guilt.

The post and linked article are worth reading… and I don't have much to add.

You Be the Jury: Survey on a Current Event

31 komponisto 09 December 2009 04:25AM

As many of you probably know, in an Italian court early last weekend, two young students, Amanda Knox and Raffaele Sollecito, were convicted of killing another young student, Meredith Kercher, in a horrific way in November of 2007. (A third person, Rudy Guede, was convicted earlier.)

If you aren't familiar with the case, don't go reading about it just yet. Hang on for just a moment.

If you are familiar, that's fine too. This post is addressed to readers of all levels of acquaintance with the story.

What everyone should know right away is that the verdict has been extremely controversial. Strong feelings have emerged, even involving national tensions (Knox is American, Sollecito Italian, and Kercher British, and the crime and trial took place in Italy). The circumstances of the crime involve sex. In short, the potential for serious rationality failures in coming to an opinion on a case like this is enormous.  

Now, as it happens, I myself have an opinion. A rather strong one, in fact. Strong enough that I caught myself thinking that this case -- given all the controversy surrounding it -- might serve as a decent litmus test in judging the rationality skills of other people. Like religion, or evolution -- except less clichéd (and cached) and more down-and-dirty.

Of course, thoughts like that can be dangerous, as I quickly recognized. The danger of in-group affective spirals looms large. So before writing up that Less Wrong post adding my-opinion-on-the-guilt-or-innocence-of-Amanda-Knox-and-Raffaele-Sollecito to the List of Things Every Rational Person Must Believe, I decided it might be useful to find out what conclusion(s) other aspiring rationalists would (or have) come to (without knowing my opinion).

So that's what this post is: a survey/experiment, with fairly specific yet flexible instructions (which differ slightly depending on how much you know about the case already).


The Danger of Stories

9 Matt_Simpson 08 November 2009 02:53AM

Tyler Cowen argues in a TED talk (~15 min) that stories pervade our mental lives.  He thinks they are a major source of cognitive biases and, on the margin, we should be more suspicious of them - especially simple stories.  Here's an interesting quote about the meta-level:

What story do you take away from Tyler Cowen?  ...Another possibility is you might tell a story of rebirth.  You might say, "I used to think too much in terms of stories, but then I heard Tyler Cowen, and now I think less in terms of stories". ...You could also tell a story of deep tragedy.  "This guy Tyler Cowen came and he told us not to think in terms of stories, but all he could do was tell us stories about how other people think too much in terms of stories."

Mathematical simplicity bias and exponential functions

12 taw 26 August 2009 06:34PM

One of the biases that is extremely prevalent in science, but rarely talked about anywhere, is the bias towards models that are mathematically simple and easy to operate on. Nature doesn't care all that much for mathematical simplicity. In particular, I'd say that as a good first approximation, if you think something fits an exponential function of either growth or decay, you're wrong. We have become so used to exponential functions and how convenient they are to work with that we completely forget that nature doesn't work that way.

But what about nuclear decay, you might be asking now... That's as close to real exponential decay as you can get... and it's still nowhere close enough. Well, here's a log-log graph of the Chernobyl release versus a theoretical exponential function.

Well, that doesn't look all that exponential... The thing is that even if you have a perfect exponential decay process, as with the decay of a single nuclide, the exponential character is lost when you start mixing a heterogeneous group of such processes. Early on, the faster-decaying cases dominate; then, gradually, the slower-decaying ones take over. Somewhere along the way you might have to deal with the products of decay (pure depleted uranium gets more radioactive with time at first, not less, as it decays into shorter-lived nuclides), and perhaps even with some processes you didn't have to consider (like the creation of fresh radioactive nuclides via cosmic radiation).
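The point about mixtures is easy to verify numerically: sum a few exponential decays with different rate constants and the total no longer decays exponentially (the rate constants below are arbitrary, chosen only to span several orders of magnitude):

```python
import math

# Sum of exponential decays with very different rate constants.
# Each component is exponential; the mixture is not.
rates = [10.0, 1.0, 0.1, 0.01]          # arbitrary decay constants

def mixture(t):
    return sum(math.exp(-k * t) for k in rates)

# For a single exponential, log(activity) falls linearly in t, so the
# slope -d(log A)/dt is a constant. For the mixture it drifts over time:
def log_slope(t, dt=1e-3):
    return (math.log(mixture(t + dt)) - math.log(mixture(t))) / dt

early, late = -log_slope(0.1), -log_slope(50.0)
print(early, late)   # fast apparent decay early, much slower decay late
```

The apparent decay constant drops by two orders of magnitude between early and late times, which is exactly why extrapolating an exponential fitted to early data fails so badly.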

And that's the ideal case of counting how much radiation a sample produces, where the underlying process is exponential by the basic laws of physics - and it still gets us orders of magnitude wrong. When you're measuring something much more vague, with much more complicated underlying mechanisms - like changes in population, the economy, or processing power - things only get worse.

According to the IMF, the world economy in 2008 was worth 69 trillion dollars PPP. Assuming 2% annual growth and naive growth models, the entire world economy produced 12 cents PPP worth of value in the entire first century. And assuming a fairly stable population, an average person in 3150 will produce more than the entire world does now. With enough time, the dollar value of one hydrogen atom will be higher than the current dollar value of everything on Earth. And of course, with proper exponential time discounting of utility, the life of one person now is worth more than those of half of humanity a millennium into the future - exponential growth and exponential decay are both equally wrong.
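These figures follow from simply running 2% growth backwards and forwards from 2008; a rough check (taking the $69 trillion 2008 figure as given and assuming a stable population of about 7 billion):

```python
# Naive extrapolation of world GDP at 2% annual growth from 2008.
GDP_2008 = 69e12
GROWTH = 1.02

def gdp(year):
    return GDP_2008 * GROWTH ** (year - 2008)

# Total output of the first century AD (years 1-100), in dollars:
first_century = sum(gdp(y) for y in range(1, 101))

# Per-capita output in 3150, assuming a stable ~7 billion people:
per_capita_3150 = gdp(3150) / 7e9

print(f"first century total: ${first_century:.2f}")     # about $0.12
print(f"per capita in 3150:  ${per_capita_3150:.1e}")   # tens of trillions
```

Both absurdities pop straight out of the model: a century of human civilization worth a dime, and a single person out-producing today's entire planet.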

To me they all look like clear artifacts of our growth models, but there are people who are so used to them that they treat predictions like that seriously.

In case you're wondering, here are some estimates of past world GDP.
