(This post is written for beginners. Send the link to your friends! Regular Less Wrong readers may want to jump to the Stanovich material.)

The last 40 years of cognitive science have taught us a great deal about how our brains produce errors in thinking and decision making, and about how we can overcome those errors. These methods can help us form more accurate beliefs and make better decisions.

 

Long before the first Concorde supersonic jet was completed, the British and French governments developing it realized it would lose money. But they continued to develop the jet when they should have cut their losses, because they felt they had "invested too much to quit"1 (sunk cost fallacy2).

John tested positive for an extremely rare but fatal disease, using a test that is accurate 80% of the time. John didn't have health insurance, and the only available treatment — which his doctor recommended — was very expensive. John agreed to the treatment, his retirement fund was drained to nothing, and during the treatment it was discovered that John did not have the rare disease after all. Later, a statistician explained to John that because the disease is so rare, the chance that he had had the disease even given the positive test was less than one in a million. But neither John's brain nor his doctor's brain had computed this correctly (base rate neglect).
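
To see how strongly the base rate dominates, here is a minimal sketch (in Python) of the kind of Bayes' theorem calculation the statistician had in mind. The 80% accuracy figure comes from the story; the one-in-ten-million prevalence is an assumed number standing in for "extremely rare", so the exact output is illustrative only.

```python
# A minimal sketch of the Bayes' theorem calculation John's doctor skipped.
# The 80% accuracy figure is from the story; the 1-in-10,000,000 prevalence
# is an assumed number chosen only to illustrate "extremely rare".

def posterior_probability(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

p = posterior_probability(
    prevalence=1 / 10_000_000,   # assumed base rate of the disease
    sensitivity=0.8,             # test detects 80% of true cases
    false_positive_rate=0.2,     # and wrongly flags 20% of healthy people
)
print(f"P(disease | positive test) = {p:.8f}")  # about 0.0000004, i.e. four in ten million
```

With a prevalence of one in ten million, even a positive result leaves the probability of disease at roughly four in ten million.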

Mary gave money to a charity to save lives in the developing world. Unfortunately, she gave to a charity that saves lives at a cost of $100,000 per life instead of one that saves lives at 1/10th that cost, because the less efficient charity used a vivid picture of a starving child on its advertising, and our brains respond more to single, identifiable victims than to large numbers of victims (identifiability effect3 and scope insensitivity4).

During the last four decades, cognitive scientists have discovered a long list of common thinking errors like these. These errors lead us to false beliefs and poor decisions.

How are these errors produced, and how can we overcome them? Vague advice like "be skeptical" and "think critically" may not help much. Luckily, cognitive scientists know a great deal about the mathematics of correct thinking, how thinking errors are produced, and how we can overcome these errors in order to live more fulfilling lives.

 

Rationality

First, what is rationality? It is not the same thing as intelligence, because even those with high intelligence fall prey to some thinking errors as often as everyone else.5 But then, what is rationality?

Cognitive scientists recognize two kinds of rationality:

  • Epistemic rationality is about forming true beliefs, about getting the map in your head to accurately reflect the territory of the world. We can measure epistemic rationality by comparing the rules of logic and probability theory to the way that a person actually updates their beliefs.
  • Instrumental rationality is about making decisions that are well-aimed at bringing about what you want. Due to habit and bias, many of our decisions don't actually align with our goals. We can measure instrumental rationality with a variety of techniques developed in economics, for example testing whether a person obeys the 'axioms of choice'.6

In short, rationality improves our choices concerning what to believe and what to do.

Unfortunately, human irrationality is quite common, as shown in popular books like Predictably Irrational: The Hidden Forces that Shape Our Decisions and Kluge: The Haphazard Evolution of the Human Mind.

Ever since Aristotle spoke of humans as the "rational animal," we've had a picture of ourselves as rational beings that are hampered by shortcomings like anger and fear and confirmation bias.

Cognitive science says just the opposite. Cognitive science shows us that humans just are a collection of messy little modules like anger and fear and the modules that produce confirmation bias. We have a few modules for processing logic and probability and rational goal-pursuit, but they are slow and energy-expensive and rarely used.

As we'll see, our brains avoid using these expensive modules whenever possible. Pete Richerson and Robert Boyd explain:

...all animals are under stringent selection pressure to be as stupid as they can get away with.7

Or, as philosopher David Hull put it:

The rule that human beings seem to follow is to engage [rational thought] only when all else fails — and usually not even then.8

 

Human reasoning

So how does human reasoning work, and why does it so often produce mistaken judgments and decisions?

Today, cognitive scientists talk about two kinds of processes, what Daniel Kahneman (2011) calls "fast and slow" processes:

  • Type 1 processes are fast, do not require conscious attention, do not need input from conscious processes, and can operate in parallel.
  • Type 2 processes are slow, require conscious effort, and generally only work one at a time.

Type 1 processes provide judgments quickly, but these judgments are often wrong, and can be overridden by corrective Type 2 processes.

Type 2 processes are computationally expensive, and thus humans are 'cognitive misers'. This means that we (1) default to Type 1 processes whenever possible, and (2) when we must use Type 2 processes, we use the least expensive kinds of Type 2 processes, those with a 'focal bias' — a disposition to reason from the simplest model available instead of considering all the relevant factors. Hence, we are subject to confirmation bias (our cognition is focused on what we already believe) and other biases.

So, cognitive miserliness can cause three types of thinking errors:

  1. We default to Type 1 processes when Type 2 processes are needed.
  2. We fail to override Type 1 processes with Type 2 processes.
  3. Even when we override with Type 2 processes, we use Type 2 processes with focal bias.

But the problem gets worse. If someone is going to override Type 1 processes with Type 2 processes, then she also needs the right content available with which to do the overriding. For example, she may need to override a biased intuitive judgment with a correct application of probability theory, or a correct application of deductive logic. Such tools are called 'mindware'.9

Thus, thinking can also go wrong if there is a 'mindware gap' — that is, if an agent lacks crucial mindware like probability theory.

Finally, thinking can go wrong due to 'contaminated mindware' — mindware that exists but is wrong. For example, an agent may have the naive belief that they know their own minds quite well, which is false. Such mistaken mindware can lead to mistaken judgments.

 

Types of errors

Given this understanding, a taxonomy of thinking errors could begin like this:10

The circles on the left capture the three normal sources of thinking errors. The three rectangles to the right of 'Cognitive Miserliness' capture the three categories of error that can be caused by cognitive miserliness. The rounded rectangles to the right of 'Mindware Gap' and 'Contaminated Mindware' propose some examples of (1) mindware that, if missing, can cause a mindware gap, and (2) common contaminated mindware.

The process for solving a reasoning task, then, may look something like this:11

First, do I have mindware available to solve the reasoning problem before me with slow, deliberate, Type 2 processes? If not, my brain must use fast but inaccurate Type 1 processes to solve the problem. If I do have mindware available to solve this problem, do I notice the need to engage it? If not, my brain defaults to the cheaper Type 1 processes. If I do notice the need to engage Type 2 processes and have the necessary mindware, is sustained (as opposed to momentary) 'Type 2 override' required to solve the problem? If not, then I use that mindware to solve the problem. If sustained override is required to solve the reasoning problem and I don't have the cognitive capacity (e.g. working memory) needed to complete the override, then my brain will default back to Type 1 processes. Otherwise, I'll use my cognitive capacities to sustain Type 2 override well enough to complete the reasoning task with my Type 2 processes (mindware).
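
Purely as a restatement of the paragraph above, the same flow can be sketched in code. The function and flag names below are illustrative, not Stanovich's.

```python
# A schematic restatement of the reasoning-task flow described above.
# All names here are illustrative, not part of any published model.

def solve_reasoning_task(has_mindware, notices_need_for_override,
                         needs_sustained_override, has_cognitive_capacity):
    """Return which kind of processing ends up solving the task."""
    if not has_mindware:
        return "Type 1 (no relevant mindware available)"
    if not notices_need_for_override:
        return "Type 1 (need for override never detected)"
    if not needs_sustained_override:
        return "Type 2 (momentary override suffices)"
    if not has_cognitive_capacity:
        return "Type 1 (sustained override could not be maintained)"
    return "Type 2 (sustained override completed with mindware)"

# Example: knowing probability theory is useless if the need to use it goes unnoticed.
print(solve_reasoning_task(True, False, True, True))
```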

That may be something like how our brains determine how to solve a reasoning task.

It's this model that Stanovich and colleagues (2010) use to explain why, among other things, IQ is correlated with performance on some reasoning tasks but not others. For example, IQ correlates with performance on tests of outcome bias and hindsight bias, but not with performance on tests of anchoring effects and omission bias. To overcome these latter biases, subjects seem to need not just high cognitive capacity (fluid intelligence, working memory, etc.), but also specific rationality training.

If this is right, then we may talk of three different 'minds' at work in solving reasoning problems:

  • The autonomous mind, made of unconscious Type 1 processes. There are few individual differences in its operation.
  • The algorithmic mind, made of conscious Type 2 processes. There are significant individual differences in fluid intelligence in particular and cognitive capacity in general — that is, differences in perceptual speed, discrimination accuracy, working memory capacity, and the efficiency of the retrieval of information stored in long-term memory.12
  • The reflective mind, which shows individual differences in the disposition to use rationality mindware — the disposition to generate alternative hypotheses, to use fully disjunctive reasoning, to engage in actively open-minded thinking, etc.13

 

Rationality Skills

But it is not enough to understand how the human brain produces thinking errors. We also must find ways to meliorate the problem if we want to have more accurate beliefs and more efficiently achieve our goals. As Milkman et al. (2010) say:

...the time has come to move the study of biases in judgment and decision making beyond description and toward the development of improvement strategies.

Stanovich (2009) sums up our project:

To jointly achieve epistemic and instrumental rationality, a person must display judicious decision making, adequate behavioral regulation, wise goal prioritization, sufficient thoughtfulness, and proper evidence calibration. For example, epistemic rationality — beliefs that are properly matched to the world — requires probabilistic reasoning and the ability to calibrate theories to evidence. Instrumental rationality — maximizing goal fulfillment — requires adherence to all of the axioms of rational choice. People fail to fulfill the many different strictures of rational thought because they are cognitive misers, because they lack critical mindware, and because they have acquired contaminated mindware. These errors can be prevented by acquiring the mindware of rational thought and the thinking dispositions that prevent the overuse of the strategies of the cognitive miser.

This is the project of 'debiasing' ourselves14 with 'ameliorative psychology'.15

What we want is a Rationality Toolkit: a set of skills and techniques that can be used to overcome and correct the errors of our primate brains so we can form more accurate beliefs and make better decisions.

Our goal is not unlike Carl Sagan's 'Baloney Detection Kit', but the tools in our Rationality Toolkit will be more specific and better grounded in the cognitive science of rationality.

I mentioned some examples of debiasing interventions that have been tested by experimental psychologists in my post Is Rationality Teachable? I'll start with those, then add a few techniques for ameliorating the planning fallacy, and we've got the beginnings of our Rationality Toolkit:

  1. A simple instruction to "think about alternatives" can promote resistance to overconfidence and confirmation bias. In one study, subjects asked to generate their own hypotheses were more sensitive to how accurate those hypotheses actually were than subjects asked to choose from among pre-picked hypotheses.16 Another study required subjects to list reasons for and against each of the possible answers to each question on a quiz before choosing an answer and assessing the probability of its being correct. This process resulted in more accurate confidence judgments relative to a control group.17 
  2. Training in microeconomics can help subjects avoid the sunk cost fallacy.18 
  3. Because people avoid the base rate fallacy more often when they encounter problems phrased in terms of frequencies instead of probabilities,19 teaching people to translate probabilistic reasoning tasks into frequency formats improves their performance.20 
  4. Warning people about biases can decrease their prevalence. So far, this has been demonstrated to work with regard to framing effects,21 hindsight bias,22 and the outcome effect,23 though attempts to mitigate anchoring effects by warning people about them have produced weak results so far.24
  5. Research on the planning fallacy suggests that taking an 'outside view' when predicting the time and resources required to complete a task will lead to better predictions. A specific instance of this strategy is 'reference class forecasting',25 in which planners project time and resource costs for a project by basing their projections on the outcomes of a distribution of comparable projects. (A small worked sketch of this appears just after this list.)
  6. Unpacking the components involved in a large task or project helps people to see more clearly how much time and how many resources will be required to complete it, thereby partially meliorating the planning fallacy.26 
  7. One reason we fall prey to the planning fallacy is that we do not remain as focused on the task at hand throughout its execution as when we are planning its execution. The planning fallacy can be partially meliorated, then, not only by improving the planning but by improving the execution. For example, in one study27 students were taught to imagine themselves performing each of the steps needed to complete a project. Participants rehearsed these simulations each day. 41% of these students completed their tasks on time, compared to 14% in a control group.
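
As a toy illustration of reference class forecasting (item 5 above), here is a sketch that bases a schedule forecast on the distribution of outcomes from comparable past projects rather than on the optimistic inside-view plan. The durations and the chosen percentile are invented for illustration.

```python
# A toy sketch of reference class forecasting: forecast from the outcome
# distribution of comparable past projects, not from an inside-view plan.
# The data and the chosen percentile below are made up for illustration.

import statistics

past_project_durations_weeks = [9, 11, 12, 14, 15, 18, 22, 30]  # hypothetical reference class
inside_view_estimate_weeks = 8                                   # the optimistic bottom-up plan

# A conservative outside-view forecast: e.g. take a high percentile of past outcomes.
quantiles = statistics.quantiles(past_project_durations_weeks, n=10)
outside_view_forecast = quantiles[6]   # roughly the 70th percentile

print(f"Inside view:  {inside_view_estimate_weeks} weeks")
print(f"Outside view: about {outside_view_forecast:.0f} weeks")
```

In practice the reference class and the percentile have to be chosen judiciously; the point is only that the forecast comes from how similar projects actually turned out.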

But this is only the start. We need more rationality skills, and we need step-by-step instructions for how to teach them and how to implement them at the 5-second level.

 

Notes

1 Teger (1980).

2 A sunk cost is a cost from the past that cannot be recovered. Because decision makers should consider only the future costs and benefits of the choices before them, sunk costs should be irrelevant to human decisions. Alas, sunk costs regularly do affect human decisions: Knox & Inkster (1968); Arkes & Blumer (1985); Arkes & Ayton (1999); Arkes & Hutzel (2000); Staw (1976); Whyte (1986).

3 People are more generous (say, in giving charity) toward a single identifiable victim than toward unidentifiable or statistical victims (Kogut & Ritov 2005a, 2010; Jenni & Loewenstein 1997; Small & Loewenstein 2003; Small et al. 2007; Slovic 2007), even though they say they prefer to give to a group of people (Kogut & Ritov 2005b).

4 Yudkowsky summarizes scope insensitivity:

Once upon a time, three groups of subjects were asked how much they would pay to save 2000 / 20000 / 200000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88 [Desvousges et al. 1992]. This is scope insensitivity or scope neglect: the number of birds saved — the scope of the altruistic action — had little effect on willingness to pay.

See also: Kahneman (1986); McFadden & Leonard (1995); Carson & Mitchell (1995); Fetherstonhaugh et al. (1997); Slovic et al. (2011).

5 Stanovich & West (2008); Ross et al. (1977); Krueger (2000).

6 Stanovich et al. (2008) write:

Cognitive scientists recognize two types of rationality: instrumental and epistemic... [We] could characterize instrumental rationality as the optimization of the individual’s goal fulfillment. Economists and cognitive scientists have refined the notion of optimization of goal fulfillment into the technical notion of expected utility. The model of rational judgment used by decision scientists is one in which a person chooses options based on which option has the largest expected utility...

The other aspect of rationality studied by cognitive scientists is termed epistemic rationality. This aspect of rationality concerns how well beliefs map onto the actual structure of the world. Instrumental and epistemic rationality are related. The aspect of beliefs that enter into instrumental calculations (i.e., tacit calculations) are the probabilities of states of affairs in the world.

Also see the discussion in Stanovich et al. (2011). On instrumental rationality as the maximization of expected utility, see Dawes (1998); Hastie & Dawes (2009); Wu et al. (2004). On epistemic rationality, see Foley (1987); Harman (1995); Manktelow (2004); Over (2004).

How can we measure an individual's divergence from expected utility maximization if we can't yet measure utility directly? One of the triumphs of decision science is the demonstration that agents whose behavior respects the so-called 'axioms of choice' will behave as if they are maximizing expected utility. It can be difficult to measure utility, but it is easier to measure whether one of the axioms of choice is being violated, and thus whether an agent is behaving instrumentally irrationally.
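
For instance, one of those axioms is transitivity of preference. Here is a minimal sketch, using invented pairwise choices, of how a violation can be detected directly from elicited choices without measuring utility at all.

```python
# A minimal sketch: detect a violation of one choice axiom (transitivity)
# directly from pairwise choices, with no need to measure utility.
# The elicited preferences below are invented for illustration.

from itertools import permutations

# ("A", "B") means the agent chose A over B when offered the pair.
elicited_preferences = {("A", "B"), ("B", "C"), ("C", "A")}  # a preference cycle

def violates_transitivity(prefs):
    """True if some triple X > Y, Y > Z, Z > X appears in the elicited choices."""
    options = {x for pair in prefs for x in pair}
    return any((x, y) in prefs and (y, z) in prefs and (z, x) in prefs
               for x, y, z in permutations(options, 3))

print(violates_transitivity(elicited_preferences))  # True: this agent can be money-pumped
```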

Violations of both instrumental and epistemic rationality have been catalogued at length by cognitive psychologists in the 'heuristics and biases' literature: Baron (2007); Evans (1989, 2007); Gilovich et al. (2002); Kahneman & Tversky (2000); Shafir & LeBoeuf (2002); Stanovich (1999). For the argument against comparing human reasoning practice with normative reasoning models, see Elqayam & Evans (2011).

7 Richerson & Boyd (2005), p. 135.

8 Hull (2000), p. 37.

9 Perkins (1995).

10 Adapted from Stanovich et al. (2008).

11 Adapted from Stanovich et al. (2010).

12 Ackerman et al. (1999); Deary (2000, 2001); Hunt (1987, 1999); Kane & Engle (2002); Lohman (2000); Sternberg (1985, 1997, 2003); Unsworth & Engle (2005).

13 See table 17.1 in Stanovich et al. (2010). The image is from Stanovich (2010).

14 Larrick (2004).

15 Bishop & Trout (2004).

16 Koehler (1994).

17 Koriat et al. (1980). Also see Soll & Klayman (2004); Mussweiler et al. (2000).

18 Larrick et al. (1990).

19 Gigerenzer & Hoffrage (1995).

20 Sedlmeier (1999).

21 Cheng & Wu (2010).

22 Hasher et al. (1981); Reimers & Butler (1992).

23 Clarkson et al. (2002).

24 Block & Harper (1991); George et al. (2000).

25 Lovallo & Kahneman (2003); Buehler et al. (2010); Flyvbjerg (2008); Flyvbjerg et al. (2009).

26 Connolly & Dean (1997); Forsyth & Burt (2008); Kruger & Evans (2004).

27 Taylor et al. (1998). See also Koole & Vant Spijker (2000).

 

References

Ackerman, Kyllonen & Richards, eds. (1999). Learning and individual differences: Process, trait, and content determinants. American Psychological Association.

Arkes & Blumer (1985). The psychology of sunk cost. Organizational Behavior and Human Decision Processes, 35: 124-140.

Arkes & Ayton (1999). The sunk cost and Concorde effects: Are humans less rational than lower animals? Psychological Bulletin, 125: 591-600.

Arkes & Hutzel (2000). The role of probability of success estimates in the sunk cost effect. Journal of Behavioral Decision Making, 13: 295-306.

Baron (2007). Thinking and Deciding, 4th edition. Cambridge University Press.

Bishop & Trout (2004). Epistemology and the Psychology of Human Judgment. Oxford University Press.

Block & Harper (1991). Overconfidence in estimation: testing the anchoring-and-adjustment hypothesis. Organizational Behavior and Human Decision Processes, 49: 188–207.

Buehler, Griffin, & Ross (1994). Exploring the 'planning fallacy': Why people underestimate their task completion times. Journal of Personality and Social Psychology, 67: 366-381.

Buehler, Griffin, & Ross (1995). It's about time: Optimistic predictions in work and love. European Review of Social Psychology, 6: 1-32.

Buehler, Griffin, & Ross (2002). Inside the planning fallacy: The causes and consequences of optimistic time predictions. In Gilovich, Griffin, & Kahneman (eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 250-270). Cambridge University Press.

Buehler, Griffin, & Peetz (2010). The planning fallacy: cognitive, motivational, and social origins. Advances in Experimental Social Psychology, 43: 1-62.

Carson & Mitchell (1995). Sequencing and Nesting in Contingent Valuation Surveys. Journal of Environmental Economics and Management, 28: 155-73.

Cheng & Wu (2010). Debiasing the framing effect: The effect of warning and involvement. Decision Support Systems, 49: 328-334.

Clarkson, Emby, & Watt (2002). Debiasing the effect of outcome knowledge: the role of instructions in an audit litigation setting. Auditing: A Journal of Practice and Theory, 21: 1–14.

Connolly & Dean (1997). Decomposed versus holistic estimates of effort required for software writing tasks. Management Science, 43: 1029–1045.

Dawes (1998). Behavioral decision making and judgment. In Gilbert, Fiske, & Lindzey (eds.), The handbook of social psychology (Vol. 1, pp. 497–548). McGraw-Hill.

Deary (2000). Looking down on human intelligence: From psychometrics to the brain. Oxford University Press.

Deary (2001). Intelligence: A very short introduction. Oxford University Press.

Desvousges, Johnson, Dunford, Boyle, Hudson, & Wilson (1992). Measuring non-use damages using contingent valuation: an experimental evaluation of accuracy. Research Triangle Institute Monograph 92-1.

Elqayam & Evans (2011). Subtracting 'ought' from 'is': Descriptivism versus normativism in the study of human thinking. Behavioral and Brain Sciences.

Evans (1989). Bias in Human Reasoning: Causes and Consequences. Lawrence Erlbaum Associates.

Evans (2007). Hypothetical Thinking: Dual Processes in Reasoning and Judgment. Psychology Press.

Fetherstonhaugh, Slovic, Johnson, & Friedrich (1997). Insensitivity to the value of human life: A study of psychophysical numbing. Journal of Risk and Uncertainty, 14: 238-300.

Flyvbjerg (2008). Curbing optimism bias and strategic misrepresentation in planning: Reference class forecasting in practice. European Planning Studies, 16: 3–21.

Flyvbjerg, Garbuio, & Lovallo (2009). Delusion and deception in large infrastructure projects: Two models for explaining and preventing executive disaster. California Management Review, 51: 170–193.

Foley (1987). The Theory of Epistemic Rationality. Harvard University Press.

Forsyth & Burt (2008). Allocating time to future tasks: The effect of task segmentation on planning fallacy bias. Memory and Cognition, 36: 791–798.

George, Duffy, & Ahuja (2000). Countering the anchoring and adjustment bias with decision support systems. Decision Support Systems, 29: 195–206.

Gigerenzer & Hoffrage (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102: 684–704.

Gilovich, Griffin, & Kahneman (eds.) (2002). Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press.

Harman (1995). Rationality. In Smith & Osherson (eds.), Thinking (Vol. 3, pp. 175–211). MIT Press.

Hasher, Attig, & Alba (1981). I knew it all along: or did I? Journal of Verbal Learning and Verbal Behavior, 20: 86-96.

Hastie & Dawes (2009). Rational Choice in an Uncertain World, 2nd edition. Sage.

Hull (2000). Science and selection: Essays on biological evolution and the philosophy of science. Cambridge University Press.

Hunt (1987). The next word on verbal ability. In Vernon (ed.), Speed of information-processing and intelligence (pp. 347–392). Ablex.

Hunt (1999). Intelligence and human resources: Past, present, and future. In Ackerman & Kyllonen (eds.), The future of learning and individual differences research: Processes, traits, and content (pp. 3-30). American Psychological Association.

Jenni & Loewenstein (1997). Explaining the 'identifiable victim effect.' Journal of Risk and Uncertainty, 14: 235–257.

Kahneman (1986). Comments on the contingent valuation method. In Cummings, Brookshire, & Schulze (eds.), Valuing environmental goods: a state of the arts assessment of the contingent valuation method. Rowman & Allanheld.

Kahneman & Tversky (2000). Choices, Values, and Frames. Cambridge University Press.

Kahneman (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

Kane & Engle (2002). The role of prefrontal cortex in working-memory capacity, executive attention, and general fluid intelligence: An individual differences perspective. Psychonomic Bulletin and Review, 9: 637–671.

Knox & Inkster (1968). Postdecision dissonance at post time. Journal of Personality and Social Psychology, 8: 319-323.

Koehler (1994). Hypothesis generation and confidence in judgment. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20: 461-469.

Kogut & Ritov (2005a). The 'identified victim effect': An identified group, or just a single individual? Journal of Behavioral Decision Making, 18: 157–167.

Kogut & Ritov (2005b). The singularity effect of identified victims in separate and joint evaluations. Organizational Behavior and Human Decision Processes, 97: 106–116.

Kogut & Ritov (2010). The identifiable victim effect: Causes and boundary conditions. In Oppenheimer & Olivola (eds.), The Science of Giving: Experimental Approaches to the Study of Charity (pp. 133-146). Psychology Press.

Koriat, Lichtenstein, & Fischhoff (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6: 107-118.

Koole & Vant Spijker (2000). Overcoming the planning fallacy through willpower: Effects of implementation intentions on actual and predicted task-completion times. European Journal of Social Psychology, 30: 873–888.

Krueger (2000). Individual differences and Pearson's r: Rationality revealed? Behavioral and Brain Sciences, 23: 684–685.

Kruger & Evans (2004). If you don’t want to be late, enumerate: Unpacking reduces the planning fallacy. Journal of Experimental Social Psychology, 40: 586–598.

Larrick (2004). Debiasing. In Koehler & Harvey (eds.), Blackwell Handbook of Judgment and Decision Making (pp. 316-337). Wiley-Blackwell.

Larrick, Morgan, & Nisbett (1990). Teaching the use of cost-benefit reasoning in everyday life. Psychological Science, 1: 362-370.

Lohman (2000). Complex information processing and intelligence. In Sternberg (ed.), Handbook of intelligence (pp. 285–340). Cambridge University Press.

Lovallo & Kahneman (2003). Delusions of success: How optimism undermines executives' decisions. Harvard Business Review, July 2003: 56-63.

Manktelow (2004). Reasoning and rationality: The pure and the practical. In Manktelow & Chung (eds.), Psychology of reasoning: Theoretical and historical perspectives (pp. 157–177). Psychology Press.

McFadden & Leonard (1995). Issues in the contingent valuation of environmental goods: methodologies for data collection and analysis. In Hausman (ed.), Contingent valuation: a critical assessment. North Holland.

Milkman, Chugh, & Bazerman (2010). How can decision making be improved? Perspectives on Psychological Science, 4: 379-383.

Mussweiler, Strack, & Pfeiffer (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26: 1142–50.

Oreg & Bayazit (2009). Prone to Bias: Development of a Bias Taxonomy From an Individual Differences Perspective. Review of General Psychology, 13: 175-193.

Over (2004). Rationality and the normative/descriptive distinction. In Koehler & Harvey (eds.), Blackwell handbook of judgment and decision making (pp. 3–18). Blackwell Publishing.

Peetz, Buehler & Wilson (2010). Planning for the near and distant future: How does temporal distance affect task completion predictions? Journal of Experimental Social Psychology, 46: 709-720.

Perkins (1995). Outsmarting IQ: The emerging science of learnable intelligence. Free Press.

Pezzo, Litman, & Pezzo (2006). On the distinction between yuppies and hippies: Individual differences in prediction biases for planning future tasks. Personality and Individual Differences, 41: 1359-1371.

Reimers & Butler (1992). The effect of outcome knowledge on auditor's judgmental evaluations. Accounting, Organizations and Society, 17: 185–194.

Richerson & Boyd (2005). Not By Genes Alone: How Culture Transformed Human Evolution. University of Chicago Press.

Ross, Greene, & House (1977). The false consensus phenomenon: An attributional bias in self-perception and social perception processes. Journal of Experimental Social Psychology, 13: 279–301.

Roy, Christenfeld, & McKenzie (2005). Underestimating the duration of future events: Memory incorrectly used or memory bias? Psychological Bulletin, 131: 738-756.

Sedlmeier (1999). Improving Statistical Reasoning: Theoretical Models and Practical Implications. Erlbaum.

Shafir & LeBoeuf (2002). Rationality. Annual Review of Psychology, 53: 491–517.

Slovic (2007). If I look at the mass I will never act: Psychic numbing and genocide. Judgment and Decision Making, 2: 1–17.

Slovic, Zionts, Woods, Goodman, & Jinks (2011). Psychic numbing and mass atrocity. In E. Shafir (ed.), The behavioral foundations of policy. Sage and Princeton University Press.

Small & Loewenstein (2003). Helping a victim or helping the victim: Altruism and identifiability. Journal of Risk and Uncertainty, 26: 5–16.

Small, Loewenstein, & Slovic (2007). Sympathy and callousness: The impact of deliberative thought on donations to identifiable and statistical victims. Organizational Behavior and Human Decision Processes, 102: 143–153.

Soll & Klayman (2004). Overconfidence in interval estimates. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30: 299–314.

Stanovich (1999). Who is rational? Studies of individual differences in reasoning. Erlbaum.

Stanovich (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. Yale University Press.

Stanovich & West (2008). On the failure of cognitive ability to predict myside bias and one-sided thinking biases. Thinking and Reasoning, 14: 129–167.

Stanovich, Toplak, & West (2008). The development of rational thought: A taxonomy of heuristics and biases. Advances in Child Development and Behavior, 36: 251-285.

Stanovich, West, & Toplak (2010). Individual differences as essential components of heuristics and biases research. In Manktelow, Over, & Elqayam (eds.), The Science of Reason: A Festschrift for Jonathan St B.T. Evans (pp. 355-396). Psychology Press.

Stanovich, West, & Toplak (2011). Intelligence and rationality. In Sternberg & Kaufman (eds.), Cambridge Handbook of Intelligence, 3rd edition (pp. 784-826). Cambridge University Press.

Staw (1976). Knee-deep in the big muddy: a study of escalating commitment to a chosen course of action. Organizational Behavior and Human Performance, 16: 27-44.

Sternberg (1985). Beyond IQ: A triarchic theory of human intelligence. Cambridge University Press.

Sternberg (1997). Thinking Styles. Cambridge University Press.

Sternberg (2003). Wisdom, intelligence, and creativity synthesized. Cambridge University Press.

Taylor, Pham, Rivkin & Armor (1998). Harnessing the imagination: Mental simulation, self-regulation, and coping. American Psychologist, 53: 429–439.

Teger (1980). Too Much Invested to Quit. Pergamon Press.

Tversky & Kahneman (1979). Intuitive prediction: Biases and corrective procedures. TIMS Studies in Management Science, 12: 313-327.

Tversky & Kahneman (1981). The framing of decisions and the psychology of choice. Science, 211: 453–458. 

Unsworth & Engle (2005). Working memory capacity and fluid abilities: Examining the correlation between Operation Span and Raven. Intelligence, 33: 67–81.

Whyte (1986). Escalating Commitment to a Course of Action: A Reinterpretation. The Academy of Management Review, 11: 311-321.

Wu, Zhang, & Gonzalez (2004). Decision under risk. In Koehler & Harvey (eds.), Blackwell handbook of judgment and decision making (pp. 399–423). Blackwell Publishing.

Comments

At first glance I thought this would be an awesome post to introduce normal people to rationality. However, it quickly becomes theoretical and general, ending pretty much with "to make actual use of all this you need to invest a lot of work."

So... why isn't there some kind of short article along the lines of "xyz is a cognitive bias which does this and that, here's an easy way to overcome said bias, this is your investment cost, and these are your expected returns" or something? Could be as short as half a page, maybe with a few links to short posts covering other biases, and most importantly without any math. You know, something that you could link a manager or CEO to while saying "this might interest you, it allows you to increase the quality of your economic and otherwise decisions."

Or is there?

lukeprog:
Yeah, this is an overview. I am planning to write something like you propose for the planning fallacy. I'd also like to see lots of those kinds of articles out there.
Sly:
I just want to second that I very much want this. I did not feel that an upvote was enough support.

The beginning of this post (the list of concrete, powerful, real/realistic, and avoidable cases of irrationality in action) is probably the best introduction to x-rationality I've read yet. I can easily imagine it hooking lots of potential readers that our previous attempts at introduction (our home page, the "welcome to LW" posts, etc.) wouldn't.

In fact, I'd nominate some version of that text as our new home page text, perhaps just changing out the last couple sentences to something that encompasses more of LW in general (rather than cogsci specifically). I mean this as a serious actionable suggestion.

For the sake of constructive feedback though, I thought that much of the rest of the article was probably too intense (as measured in density of unfamiliar terms and detailed concepts) for newcomers. It sort of changes from "Introduction for rationality beginners" to "Brief but somewhat technical summary for experienced LWers". So it could be more effective if targeted more narrowly.

lukeprog:
Yeah, this post is still far more technical than would be appropriate for, say, a magazine or webzine. But it eases into technicality a bit gradually... :)
jsalvatier:
Suggested on the homepage talk page.
loup-vaillant:
I think the intensity of the second half of this article may be compensated by the sheer amount of footnotes and references. I mean, that much bibliographic work amounts to a quite compelling argument from authority¹. Someone who dozes out and scrolls fast will notice this, and may think that so much work is worth the effort to try and read. 1: Which isn't a fallacy until you know more about the subject. And I trust lukeprog with the relevance and accuracy of his bibliographic work.

Meta: I would recommend distinguishing between citation-notes and content-notes. Scrolling a long way down to find a citation is annoying and distracting, but so is the feeling that I might be missing some content if I don't scroll down to look.

Apologies if this has been brought up before.

lukeprog:
That would be useful. I guess the way I've seen this done sometimes is to use symbols like * † ‡ for content footnotes and numbers for citation endnotes? But that usually works on paper, where pages are short and you can see the content of the footnote at a glance. I'm not sure I've seen a solution for this that works on the web. It'd be nice to have an integrated Less Wrong footnote system so that we could test different ways of displaying the content. Maybe a hover-over-the-footnote-to-read-its-contents feature?
beoShaffer:
I've seen this work well elsewhere.
gwern:
I've been very pleased with it on gwern.net; might be a little tricky on LW because it relies on the footnotes all having a particular name which the Javascript can then blindly load a related footnote in the popup, or whatever, and LWers seem to use various tools to generate footnoted-HTML (when they do at all).
TheDave:
Until that sort of feature is implemented, what about footnote links to the content while having text (no link) to the references? Also helpful would be a "return to where this number is in the text" function. I anticipate this solution taking less time while being less robust. Here's an example. The body text footnote numbers link to the bottom, and a return arrow links you back to the citation. Major problem on the linked website is that the page seems to have to reload. I don't know of any way to make citations such as these without the process being time-intensive unless you write your own citation manager or contact the linked-to blogger.
lukeprog:
Yes, if a volunteer would like to do that for finished drafts of my posts as I complete them, that would be great.
Sniffnoy:
I'm not getting that. It seems to just be using anchors; why would that happen?
TheDave:
It might just be a browser/connection/processor speed problem on my end. Thanks for checking!
torekp:
You could write footnote 1, where the number 1 is a link pointing only to this comment. Hovering over the 1 shows some text. I couldn't seem to cancel the link formatting, so that might not be too useful unless you can somehow arrange that the footnotes are the first comment in your own thread. I played in the sandbox and noticed that some things work differently there than here.
arundelo:
This works (if entered with the HTML editor): <span title="hover text">blah</span> Unfortunately, if I remember correctly, there are gaps in browser support for it. Also IIRC, using a link works in more browsers, but the text will show up styled like a link unless some CSS tweaking is done.
gwern:
That's just a title tooltip isn't it? You can set those in Markdown easily enough (eg. [display](http://hyperlink "hover text")), and you're not allowed any sort of markup inside the tooltip, and they have severe length limitations too. So it'd be a major compromise. (I have, painfully, added them to the frontpage of gwern.net, but no one has ever commented on them or given any sign that they are useful, so I've never bothered with putting them elsewhere.)
fupklz:
Grantland's sidenotes are the best I've seen - http://www.grantland.com/story/_/id/6963024/video-games-killed-video-game-star

I recently found an interesting study that bears on the doctor example. Christensen and Bushyhead (1981) find that when asked to make clinical judgments, doctors usually take base rates into account quite accurately, even though when they are asked to explicitly do statistical problems involving base rates they usually get them wrong.

  1. Unpacking the components involved in a large task or project helps people to see more clearly how much time and how many resources will be required to complete it, thereby partially meliorating the planning fallacy.

The planning fallacy article seems to contradict this...

But experiment has shown that the more detailed subjects' visualization, the more optimistic (and less accurate) they become. (In saying this, EY cites the work of Buehler, 2002. [1])

Is there something from your citation (#26) that overrides the conclusions of Buehler? [2] In fact, #5 was the conclusion proposed in "Planning Fallacy," which I thought was made specifically because examining all the details was so unreliable. In other words, #5 seems to say: forget about all the details; just find similar projects that actually happened and base your timeline on them.


[1] Buehler, R., Griffin, D. and Ross, M. 2002. Inside the planning fallacy: The causes and consequences of optimistic time predictions. Pp. 250-270 in Gilovich, T., Griffin, D. and Kahneman, D. (eds.) Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge, U.K.: Cambridge University Press.

[2] For #6, you cited: Connolly & Dean (1997); Forsyth & Burt (2008); Kruger & Evans (2004).

I've been reading OB and LW for years. I'm sure I've read at least 3/4 of what was posted on OB and 1/2 of highly voted posts on LW. I wouldn't call myself a beginner, and I still really appreciated this post.

Every once in awhile it's nice to be able to step back and be reminded what it's all about, so thanks for writing this!

I point people who are new to this to the Sequences but that always feels like I'm dumping too much on them. Hopefully, this post will make a better introduction to rationality and why we need it.

Long try:
Excuse my ignorance, what is OB?
Kaj_Sotala:
Overcoming Bias; most of Eliezer's LW writing was originally posted on OB, until LW was created as a community where it would be easier for other people to write about these topics as well, and Eliezer's writing got moved here.

The autonomous mind, made of unconscious Type 1 processes. There are few individual differences in its operation.

Why do you believe this? (And what state of things is this statement asserting, in more detail?) Something that should become clear after reading Stanovich (2010)?

lukeprog:
Right. Here's the relevant quote from Stanovich (2010):
Vladimir_Nesov:
As stated, still hearsay. I understand that there are probably references somewhere, but it's still unclear even what standards of performance are considered ("syntactic processing"?). And of course, the Type I processes relevant to this post are not like those listed, which are way too specialized to serve the role of the general default decision-making.

Well, it's all hearsay. I didn't do any of the experiments. :)

But I assume you're asking me to go one step deeper. Stanovich has a more detailed discussion of this in his book Rationality and the Reflective Mind. In particular, footnote 6 on page 37 cites the following sources on individual differences (and the general lack thereof) in the autonomous mind:

Anderson (2005). Marrying intelligence and cognition: A developmental view. In Sternberg & Pretz (eds.), Cognition and intelligence (pp. 268-287). Cambridge University Press.
Kanazawa (2004). General intelligence as a domain-specific adaptation. Psychological Review, 111: 512-523.
Saffran, Aslin, & Newport (1996). Statistical learning by 8-month-old infants. Science, 274: 1926-1928.
Reber (1992). An evolutionary context for the cognitive unconscious. Philosophical Psychology, 5: 33-51.
Reber (1993). Implicit learning and tacit knowledge. Oxford University Press.
Vinter & Detable (2003). Implicit learning in children and adolescents with mental retardation. American Journal of Mental Retardation, 108: 94-107.
Zacks, Hasher, & Sanft (1982). Automatic encoding of event frequency: Further readings. Journal of Experimental Psy...

The comments on Reddit are worth reading:

Cognitive science is an oxymoron and who ever said the humanity is rational?

Also:

you know, not everything has to be reduced to effieciency and end results. humans and human society is still special even if some shut in bean counter thinks otherwise.

Jack:
Karma to whoever finds the best visual analog for the inferential distances implied by these comments.
MaoShan:
http://our-universe.ru/img/6c/6c7/Astronomers_claim_star_is_universe_s_oldest.jpg

The human brain uses something like a fifth of the body's oxygen. The selective pressure against general intelligence would be formidable indeed.

Fun to speculate about a different biology where cognition is not so metabolically expensive, or another where it's even dearer.

gwern:
Is there any comprehensive discussion of the selection pressures against intelligence? I've compiled a couple in http://www.gwern.net/Drug%20heuristics#modafinil but that's only what I've stumbled across and recognized as relevant.
Circusfacialdisc:
I've seen some rather detailed discussion of the specific case of the enlarged human cranium being a rather tight squeeze through the pelvis, but I don't recall any more general discussion of selective pressures acting against intelligence.

(Fixed the missing spaces problem in the notes by replacing one HTML tag with another. This note is mostly to further inform other authors about a workaround/reason for this annoying bug.)

the odds that he had had the disease even given the positive test were a million to one

Should be "one to a million".

Paul Crowley:
Common usage puts the other one first: "The chances of anything coming from Mars are a million to one"
komponisto:
This is an unfortunate shortening of "a million to one against", which would be correct.
MixedNuts:
I thought that "a million to one" always meant "a million to one against", and you had to specify "a million to one on" when necessary.
Paul Crowley:
My survey of one agrees with you - I would definitely have thought of this the other way around though.
macronencer:
Perhaps what people have in mind when they say that are betting odds. If you bet money on an unlikely event then the odds are quoted with the high number (your reward if the event occurs) first, which seems sensible from an advertising perspective.
WhetherMan:
I think "one IN a million" is the more common usage in American English.
[anonymous]:
Technically, "one in a million" and "one to a million" differ. The latter is 1/1,000,001,000,000 smaller.
mwengler:
Biased response: as a native American English speaker I can assure you that "a million to one" is idiomatically correct. I suspect that the more complete non-idiomatic version would be "a million to one against" and that the "against" is implicit because the idiom is highly established as expressing a very low probability.
lukeprog:
Thanks!

If John's physician prescribed a burdensome treatment because of a test whose false-positive rate is 99.9999%, John needs a lawyer rather than a statistician. :)

lukeprog:
True, that! :)

Interesting post, but I think there is a typo : « Type 1 processes are computationally expensive » Shouldn't it be type 2 ?

Also, for the Concorde story, what I always heard (being a French citizen) is that « Yes, we know, Concorde was losing money, but it is great for the image, it's just a form of advertising. It gives a good image of France (and UK) aerospatial industry, and therefore makes it easier for Airbus to sell normal planes, or for companies like Air France to sell tickets on normal planes. » Now, how much of it is about a posterior rationalizat...

mwengler:
I grew up in Long Island 20 miles from JFK airport. We could see the Concorde once in a while at JFK airport and if we were very lucky we would see it landing or taking off. The amount of mindspace in the world occupied by that beautiful plane was gigantic compared to that occupied by most other planes. Whether the Concorde was still a net deficit to the UK and France would require, I think, a calculation similar to figuring the deficit or surplus to the U.S. of putting people on the moon.
velisar:
You might be right - as I never saw one - but the project didn't start with a plan to build a spectacular flying sculpture. So they fell first to the planning fallacy (which may not be so much a psychological cognitive bias but the very structure of possible outcomes of everything - the top of the frequency distribution is to the right of the "arrival" time), then to sunk costs which later were half acknowledged, thus making them highly suspicious of trying to resolve a cognitive dissonance (rationalization). One has to take into account the original prediction to make a probabilistic interpretation...

The word "Bias" is often associated with the word "prejudice" which has become loaded with rather negative associations. (People don't like others to think of them as "prejudiced") Especially as I am not a native english speaker until a week ago (I read LW since a month) I didn't make a distinction between bias and prejudice as in my language the 2 words translate more or less the same. Maybe the process of "debiasing" should include to associate the word "bias" with "cognitive bias : a pattern of poor judgment" which every human brain has and there is nothing to be ashamed of.

fubarobfusco:
Introducing "bias" in terms of estimation might be easier: Bias is a systematic error in estimation. In the case of cognitive biases (as opposed to, for instance, statistically biased samples) we're talking about cases where people reliably make certain errors in estimation or prediction, for instance in estimating how long a project will take or whether an investment of effort or money is worthwhile.

This is a good introductory "big picture" post that describes motivation for developing our craft, some mechanisms that underlie its development, and glimpses into possible future directions. It also locates its topic in the literature.

(Not sure why it's being so weakly upvoted. One reason could be that it poses itself as presenting no new material, and so people skip it, and skipping it, refrain from voting.)

With my current research together with John Vervaeke and Johannes Jaeger, I'm continuing the work on the cognitive science of rationality under uncertainty, bringing together the axiomatic approach (on which Stanovich et al. build) and the ecological approach. 

Here I talk about Rationality and Cognitive Science on the ClearerThinking Podcast. Here is a YouTube conversation between me and John, explaining our work and the "paradigm shift in rationality". Here is the preprint of the same argumentation as "Rationality and Relevance Realization". John als...

Type 1 processes provide judgments quickly, but these judgments are often wrong, and can be overridden by corrective Type 2 processes.

This might be the picking-on-small-details-and-feeling-important me, but I really think this is terribly oversimplified. It implies that Type 1 is basically your enemy, or that is what it feels like to me. Truth to be told, Type 1 is extremely handy to you as it prevents combinatorial explosion in how you experience reality. I think Type 1 is actually mostly great, because I am really happy that I just pick up a cup when I a...

When I read point 7 for your proposed tools, my immediate question was "How much more likely am I to complete a task if I conduct the simulation?" My initial answer was "something like three times more likely!" But that was type 1 thinking; I was failing to recognize that I still had a 14% chance of completing the task without the simulation. The increase is actually 193%.

I thought that was a nice little example of what you had been discussing in the first half of the article.

Note that having type 1 processes is not an error. AIs will also need to have type 1 processes. Being perfectly rational is never optimal if time and resources are an issue. Being perfectly rational would make sense only given infinite time and resources and no competition.

Louie:
I heard you bring this up in person a few times last weekend too. I wanted to follow up with you because I think I'm starting to understand the reason for your disagreements with others on this matter. It is not the case that you're fighting the good fight and we are all just off in crazy land on this issue. I think instead, you're conflating "rationality" with "deliberation". Naive "perfect rationality" -- being infinitely deliberate -- is of course a mistake. But that's not what Yudkowsky, Omohundro and other careful thinkers are advocating when they discuss wanting to build an AI that is rational. They mean things like having a deliberate rational process that won't knowably violate the axioms of choice or other basic sanity checks that our kludgey, broken minds don't even try to do on their own. Also, it's possible to design algorithms that are ideally rational within time and resource constraints (ie, AIXItl, Godel Machines). There isn't a false dichotomy between "quick and dirty" heuristic kludges and infinite rationality. You can use "quick and clean" methods which converge towards rationality as they compute, rather than "type 1 processes" which are unrelated to rational deliberation. Basically, I completely disagree with your first two sentences and would agree with your second two if you replaced "perfectly rational" with words closer to what you actually mean, like "infinitely deliberate".

Is the reflective, algorithmic, autonomous hierarchy specifically for "reasoning problems" as in tools for solving non-personal puzzle questions? If yes it seems dangerous to draw too many conclusions from that for every day rationality. If not, what's the evidence that there are "few continuous individual differences" concerning the autonomous mind? For example, people seem to differ a lot in how they are inclined to respond when pressured to something, some seem to need conscious effort to be able to say no, some seem to need conscious effort to avoid responding with visible anger.

Type 1 processes are computationally expensive, and thus humans are 'cognitive misers'. This means that we (1) default to Type 1 processes whenever possible

Should this be "Type 2 processes are computationally expensive?"

lukeprog:
D'oh! Yes. Thanks.
[anonymous]:

I guess this site would be a great source of the "mindware" discussed in this post-- but is it the only one? One would think that people on this site would propagate these ideas more thoroughly, and thus other sites with similar topics would be born. But I haven't heard of any such sites, or any other media of the like either. Weird.

[anonymous]:
dude why u delete my cmment
[anonymous]:
Also, I'd just like to mention that I'm probably smarter than all of you here. You guys are pretentious twats pretending to be smart, and that's it.
[anonymous]:

Very useful list. I wonder if there are additions since 2011?

I would add The Reversal Test for eliminating the status quo bias.

Nice article! I was wondering though whether there were any theories on why our brain works the way it does. And did we make the same mistakes, say 1000 years ago? I am new here, so I don't know what the main thoughts on this are, but I did read HPMOR which seems to suggest that this was always the case. However, it seems to me that people are becoming less and less critical, perhaps because they are too busy or because the level of education has steadily decreased over the last decades. How could we actually prove that these fallacies aren't caused by some external factors?

TheOtherDave:
What would you expect to see if people have become steadily less "critical" over the last thousand years? What would you expect to see if people have become steadily more "critical" over the last thousand years? What would you expect to see if people have remained equally "critical" over the last thousand years? What would you expect to see if people's "critical"ness has varied in non-steady ways over the last thousand years?
binbashjip:
I don't believe that people have become steadily less or more critical, but it seems plausible that this has varied in non-steady ways, increasing or decreasing depending on the circumstances. I would expect that in this case the degree to which people make common thinking errors also varies. In fact, I suspect that the 2 4 6 test yields different results depending on the subjects' education, regardless of whether they know of positive bias.
TheOtherDave:
Wait, now you've confused me. Earlier, you said: ....which I assumed you meant in the context of the 1000-year period you'd previously cited. I don't know how to reconcile that with: Can you clarify that?
binbashjip:
I see how that might have been confusing. The 1000 years ago was simply an example to question whether people have always made the same thinking errors. The criticalness is based only on recent history, mostly on the people around me and is an attempt to argue in favor of possible external factors.
TheOtherDave:
Well, OK, but still: over whichever period you have in mind, does it seem to you that people are becoming less and less critical, or that they haven't become steadily less critical? Regardless, I would agree that some common thinking errors are cultural.
binbashjip:
Over the last 1000 years: varying criticalness. Over the last 20-30 years or so: less and less critical. Has there been some research on culturalness of thinking errors? I thought the claim was that these thinking errors are hardwired in the brain, hence timeless and uncultural.
CCC:
That would be a part of the whole Nature vs. Nurture debate, wouldn't it? I think it would be very hard to prove that any given thinking error is biologically hardwired (as opposed to culturally); and even if a bias is biologically hard-wired, an opposing cultural bias might be able to counter that. Many people are largely exposed to only a single culture; widespread, pervasive errors in that culture would, I expect, be indistinguishable from hardwired biases.

The Whyte (1986) reference links to Arkes-Blumer-The-psychology-of-sunk-cost.pdf.

I've always wanted to know how students/"experts" of cognitive science feel when they realize their limits with respect to perceptual speed, discrimination accuracy, working memory capacity, etc.

2TheOtherDave11y
As a student of the field in the late 80s, the most pervasive effect was constantly being forced to realize that "because X is actually Y" is not actually an answer to "why does X seem Y to me?" That is, not just that it's sometimes false, but that whether it's true or not has nothing whatsoever to do with the question I asked.

Minor quibble - the link to the anchor for the Stanovich Stuff doesn't work if you click on it from the front page - you could change it so that it links directly to http://lesswrong.com/lw/7e5/the_cognitive_science_of_rationality/#HumanReasoning instead of being relative to the current page, but I'm not sure if that would break something later on.

It's a nice introduction to rationality that someone could present to their friends/family, though I do still think that someone who has done no prior reading on it would find it a bit daunting. Chances are, before introducing it to someone, a person might want to make them a little more familiar with the terms used.

I wish I had had this back when I was teaching gen-ed science courses in college. I tried to do something similar, but at a much smaller scale. Some random observations that would help flesh the content out:

  1. A big reason "Type 1" reasoning is so often wrong is that these decision-making modules evolved under very different conditions from the ones we currently live in.

  2. I always liked Pinker's description (from "How the Mind Works") of the nature of the conscious mind by reverse-engineering it: it is a simulation of a serial process running on parallel ...

0efpresron13y
Make that 'Colbert' vs. 'Spock' :)

Definite Upvote for filling the depressingly barren niche that is Introductory Postings! On a blog as big and interconnected as this one, it's hard to know where to start introducing the idea to other people. The new front page was a good start at drawing people in, and I admire your spirit in continuing that pursuit.

I love this post, personally. It starts off very well, with a few juicy bits that prompt serious thinking in the right direction right off the bat. Only problem is, my internal model of people who do not have an explicit interest in rationality ...

Seems like "Importance of alternative hypothesis" should be under "miserliness"

What is the main evidence that deliberate reasoning is computationally expensive, and where would I go to read about it (books, keywords, etc.)? This seems to be a well-accepted theory, but I am not familiar with the science.

2lukeprog13y
There are all kinds of studies showing that when working memory is occupied, System 2 but not System 1 processes are interrupted, and so on. The Frankish intro should at least point you to the sources you may be looking for.
1jsalvatier13y
Thanks!

Type 2 processes are computationally expensive, and thus humans are 'cognitive misers'. This means that we (1) default to Type 1 processes whenever possible, and (2) when we must use Type 2 processes, we use the least expensive kinds of Type 2 processes, those with a 'focal bias' — a disposition to reason from the simplest model available instead of considering all the relevant factors. Hence, we are subject to confirmation bias (our cognition is focused on what we already believe) and other biases.

I don't follow how this results in confirmation bias. Perhaps you could make it more explicit?

(Also, great article. This looks like a good way to introduce people to LW and LW-themed content.)
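
To make the mechanism concrete, here is a toy model of my own (not from the post or from Stanovich), built around the 2-4-6 task mentioned upthread: a reasoner with a 'focal bias' only probes triples its current hypothesis already generates, so every test comes back as confirmation and the hypothesis is never exposed, while a reasoner that also tries disconfirming probes finds a counterexample immediately. A minimal sketch in Haskell, with the hidden rule and probe sets chosen purely for illustration:

    -- Toy model of the 2-4-6 task (rule, hypothesis and probe sets are my own choices).
    -- Hidden rule: any strictly ascending triple.  Pet hypothesis: "goes up by 2".
    hiddenRule :: (Int, Int, Int) -> Bool
    hiddenRule (a, b, c) = a < b && b < c

    petHypothesis :: (Int, Int, Int) -> Bool
    petHypothesis (a, b, c) = b == a + 2 && c == b + 2

    -- A focal reasoner only tests triples generated from its own hypothesis.
    confirmingProbes :: [(Int, Int, Int)]
    confirmingProbes = [(n, n + 2, n + 4) | n <- [1 .. 5]]

    -- A less miserly reasoner also tests triples its hypothesis rules out.
    disconfirmingProbes :: [(Int, Int, Int)]
    disconfirmingProbes = [(1, 2, 3), (1, 5, 100), (3, 2, 1), (2, 2, 2)]

    -- A probe exposes the pet hypothesis whenever it disagrees with the hidden rule.
    exposes :: (Int, Int, Int) -> Bool
    exposes t = petHypothesis t /= hiddenRule t

    main :: IO ()
    main = do
      putStrLn ("focal strategy finds a counterexample:   " ++ show (any exposes confirmingProbes))    -- prints False
      putStrLn ("broader strategy finds a counterexample: " ++ show (any exposes disconfirmingProbes)) -- prints True

The focal strategy's probes all satisfy both the pet hypothesis and the hidden rule, so no mismatch ever surfaces; the broader strategy hits (1, 2, 3), which fits the hidden rule while violating the pet hypothesis, and the error is exposed at once.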

Epistemic rationality is about forming true beliefs, about getting the map in your head to accurately reflect the world out there.

Since the map describes itself as well, not just the parts of the world other than the map, and being able to reason about the relationship of the map and the world is crucial in the context of epistemic rationality, I object to including the "out there" part in the quoted sentence. The map in your head should accurately reflect the world, not just the part of the world that's "out there".

0lukeprog13y
I suppose. Fixed.
0Jack13y
I don't think "out there" is meant to exclude the map itself; it's metaphorical language.
0Vladimir_Nesov13y
But it can be taken as meaning to exclude the mind. I'm clearly not arguing with Luke's intended meaning, so the intended meaning is irrelevant to this issue; only possible interpretations of the text as written are relevant.
2Jack13y
(Nods and shrugs) Is there a way to make the point both accurately and simply? The whole thing is a mess of recursive reference.
-1XiXiDu13y
A bin trying to contain itself? I generally agree with your comment, but there are limits: no system can understand itself completely, because the very understanding would evade itself forever.
7nshepperd13y
Ahh, don't say "understanding" when you mean "containing a simulation"! It's true that a computer capable of storing n bits can't contain within it a complete description of an arbitrary n-bit computer. But that's not fundamentally different from being unable to store a description of the 3^^^3 × n-bit world out there (the territory will generally be bigger than the map); and of course you don't have to have a miniature bit-for-bit copy of the territory in your head to have a useful understanding of it, and the same goes for self-knowledge. (Of course, regardless of all that, we have quines anyway, but they've already been mentioned.)
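
To spell out the counting behind the first sentence (my own gloss of the standard pigeonhole argument, not part of the comment above): an arbitrary n-bit computer can be in any of 2^n distinct states, so a description that pins down which state it is in must itself carry at least log2(2^n) = n bits -- which already exhausts the describing machine's entire n bits of storage, leaving nothing for its own program or workspace. The same arithmetic, applied to a 3^^^3 × n-bit world, is what makes the map-smaller-than-territory point in the rest of the paragraph.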
0XiXiDu13y
Could you elaborate on the difference between "understanding" and "simulating"? And how are you going to get around logical uncertainty?
4nshepperd13y
The naive concept of understanding includes everything we've already learned from cognitive psychology and the other sciences of the brain. Knowing, for example, that the brain runs on neurons with certain activation functions is useful even if you don't know the specific activation states of all the neurons in your brain, as is a high-level algorithmic description of how our thought processes work. This counts as part of the map that reflects the world "inside our heads", and it is certainly worth refining.

In the context of a computer program or AI, such "understanding" would include the AI inspecting its own hardware and its own source code, whether by reading it from the disk or by esoteric quining tricks. An intelligent AI could make useful inferences from the content of the code itself -- without having to actually run it, which is what would constitute "simulation" and run into all the paradoxes of not having enough memory to contain a running version of itself.

"Understanding" is then usually partial, but still very useful. "Simulating" is precise and essentially complete, but usually computationally intractable (and occasionally impossible), so we rarely try to do that. You can't get around logical uncertainty, but that just means you'll sometimes have to live with incomplete knowledge, and it's not as if we weren't resigned to that anyway.
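
To illustrate the understanding/simulating distinction with something runnable (a throwaway sketch of my own, not anything from the comment above): a program can read another program's source text and draw crude inferences from its content without ever executing it. The file name and the line-based checks are hypothetical stand-ins for real static analysis:

    import Data.List (isPrefixOf)
    import System.Environment (getArgs)

    -- Inspect a source file and infer crude properties from its text alone,
    -- without running it.  Hypothetical usage: runghc Inspect.hs SomeProgram.hs
    main :: IO ()
    main = do
      [path] <- getArgs
      src <- readFile path
      let importLines = filter ("import " `isPrefixOf`) (lines src)
          mentionsIO  = any (elem "IO" . words) (lines src)
      putStrLn ("imports found:            " ++ show (length importLines))
      putStrLn ("mentions IO (may do I/O): " ++ show mentionsIO)

Real "understanding" in the sense above would of course go far beyond counting import lines, but the point stands: none of it requires running the inspected code.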
0mwengler12y
The "map" is NEVER comlete. So our map of the map is an incomplete map of the map. In engineering terms, the remarkable feature of the human mind's gigantically oversimplified map of the world both in it and around it is that it is as effective as it is in changing the world. On the other hand, since we are not exposed to anything with a better map than ours, it is difficult to know what we are missing. Must be a cognitive bias or three in there as well.
-2Vladimir_Nesov13y
More accurately, the map should worry about mapping its future states, to plan the ways of setting them up to reflect the world, and have them mapped when they arrive, so that knowledge of them can be used to update the map (of the world further in the future, including the map itself, and more generally of relevant abstract facts). (More trivially, there are quines, programs that know their own full source code, as you likely know.)
9Kaj_Sotala13y
I don't think being able to quine yourself really has anything to do with fully understanding yourself. I could get a complete printout of the exact state of every neuron in my brain; that wouldn't give me full understanding of myself. To do something useful with the data, I'd need to perform an analysis of it at a higher level of abstraction. A quine provides the raw source code that can be analyzed, but it does no analysis by itself.

"The effects of untried mutations on fifteen million interacting shapers could rarely be predicted in advance; in most cases, the only reliable method would have been to perform every computation that the altered seed itself would have performed... which was no different from going ahead and growing the seed, creating the mind, predicting nothing." (Greg Egan)

In any case, if we are talking about the brains/minds of baseline unmodified humans, as we should be in an introductory article aimed at folks outside the LW community, then XiXiDu's point is definitely valid. Ordinary humans can't quine themselves, even if a quine could be construed as "understanding".
-3Vladimir_Nesov13y
His point is not valid, because it doesn't distinguish the difficulty of self-understanding from that of understanding the world-out-there, as nshepperd points out. (There was also this unrelated issue where he didn't seem to understand what quines can do.) A human self-model would talk about abstract beliefs first, not necessarily connecting them to the state of the brain in any way.
0XiXiDu13y
I don't? Can you elaborate on what exactly I don't understand? Also, "self" is a really vague term.
1XiXiDu13y
Something has to interpret the source code (e.g. "print"). The map never equals the territory completely; at some point you'll get stuck, because any self-replication depends on outside factors, e.g. the low entropy at the beginning of the universe.

    main = putStrLn $ (\x -> x ++ show x)"main = putStrLn $ (\\x -> x ++ show x)"

The quine does not include the code for the function "putStrLn".
0Vladimir_Nesov13y
It could. You can pack a whole Linux distribution in there; it won't cause any problems in principle (if we ignore the technical issue of having to deal with a multi-gigabyte source file).
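
For concreteness, here is a sketch of that point (my own construction, not anything from the thread): the quoted quine pattern extends to carry an arbitrary inert payload and still print its complete source. The payload string below is just a stand-in for the hypothetical multi-gigabyte distribution.

    -- a quine carrying an inert payload
    payload = "stand-in for a whole Linux distribution"
    main = putStrLn (s ++ show s) where s = "-- a quine carrying an inert payload\npayload = \"stand-in for a whole Linux distribution\"\nmain = putStrLn (s ++ show s) where s = "

Note that the comment and the payload have to be repeated inside the string so that the program reproduces them; that bookkeeping is what becomes tedious, but stays possible in principle, as the payload grows.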
0XiXiDu13y
I don't get your point; I know that you can do that.
2Vladimir_Nesov13y
Okay. Unpack the reason for saying that.
2XiXiDu13y
This is what I meant. I probably shouldn't talk about concepts of which I know almost nothing, especially if I don't even know the agreed-upon terminology to refer to them.

The reason I replied to your initial comment about the "outside world" was that I felt reasonably sure that the concept called a quine does not enable any agent to have a map that equals the territory, not even in principle. And that's what made me believe that there is an "outside world" for any agent that is embedded in a larger world.

As far as I know, a quine can be seen as an artifact of a given language rather than a complete and consistent self-reference. Every quine is missing some of its own definition, e.g. "when preceded by" or "print" need external interpreters to work as intended. You wrote that you can pack a whole Linux distribution "in there". I don't see how that gets around the problem, though; maybe you can elaborate on it. Even if the definitions of all functions were included in the definition of the quine, only the mechanical computation, the features of the actual data processing done at the hardware level, "enable" the Linux kernel.

You could in theory extend your quine until you have a self-replicating Turing machine, but in the end you will either have to resort to mathematical Platonism or run into problems like the low-entropy beginning of the universe. For example, once your map of the territory became an equal copy of the territory, it would miss the fact that the territory is made up of itself and a perfect copy. And once you incorporated that fact into your map, your map would be missing the fact that the difference between itself and the territory is the knowledge that there is a copy of it. I don't see how you can sidestep this problem, even if you accept mathematical Platonism. Unless there is a way to sidestep that problem, there is always an "outside world" (I think that for all practical purposes this is true anyway). Let's say ther...
0Vladimir_Nesov13y
Does the territory "know" the territory?
4Steve_Rayhawk13y
Related: Ken Thompson's "Reflections on Trusting Trust" (his site, Wikipedia), and Richard Kennaway's comment on "The Finale of the Ultimate Meta Mega Crossover".
-1twanvl13y
Every program runs on some kind of machine, be it an Intel processor, an abstract model of a programming language, or the laws of the universe. A program can know its own source code in terms it can execute, i.e. commands that are understood by the interpreter. But I am not sure what point you are trying to make in the above comment.
1XiXiDu13y
Vladimir Nesov was criticizing lukeprog's phrase "world out there", claiming that the map in your head should accurately reflect the world, not just the part of the world that's "out there". I agree, but if you are accurate then you have to admit that it isn't completely possible to do so.
-4[anonymous]10y

OK, simple question: what has cognitive science done to make the world more rational that any other given field of psychology hasn't done to make people more rational?

For anyone curious, I am not a cognitivist.

""

Penrose uses Gödel’s incompleteness theorem (which states that there are mathematical truths which can never be proven in a sufficiently strong mathematical system; any sufficiently strong consistent system of axioms will also be incomplete) and Turing’s halting problem (which states that there are some things which are inherently non-computable ...