At first glance I thought this would be an awesome post to introduce normal people to rationality. However, it quickly becomes theoretical and general, ending pretty much with "to make actual use of all this you need to invest a lot of work."
So... why isn't there some kind of short article along the lines of "xyz is a cognitive bias which does this and that, here's an easy way to overcome said bias, this is your investment cost, and these are your expected returns" or something? It could be as short as half a page, maybe with a few links to short posts covering other biases, and most importantly without any math. You know, something that you could link a manager or CEO to while saying "this might interest you, it allows you to increase the quality of your economic and other decisions."
Or is there?
The beginning of this post (the list of concrete, powerful, real/realistic, and avoidable cases of irrationality in action), is probably the best introduction to x-rationality I've read yet. I can easily imagine it hooking lots of potential readers that our previous attempts at introduction (our home page, the "welcome to LW" posts, etc) wouldn't.
In fact, I'd nominate some version of that text as our new home page text, perhaps just changing out the last couple sentences to something that encompasses more of LW in general (rather than cogsci specifically). I mean this as a serious actionable suggestion.
For the sake of constructive feedback though, I thought that much of the rest of the article was probably too intense (as measured in density of unfamiliar terms and detailed concepts) for newcomers. It sort of changes from "Introduction for rationality beginners" to "Brief but somewhat technical summary for experienced LWers". So it could be more effective if targeted more narrowly.
Meta: I would recommend distinguishing between citation-notes and content-notes. Scrolling a long way down to find a citation is annoying and distracting, but so is the feeling that I might be missing some content if I don't scroll down to look.
Apologies if this has been brought up before.
I recently found an interesting study that bears on the doctor example. Christensen and Bushyhead (1981) find that when asked to make clinical judgments, doctors usually take base rates into account quite accurately, even though when they are asked to explicitly do statistical problems involving base rates they usually get them wrong.
- Unpacking the components involved in a large task or project helps people to see more clearly how much time and how many resources will be required to complete it, thereby partially meliorating the planning fallacy.
The planning fallacy article seems to contradict this...
But experiment has shown that the more detailed subjects' visualization, the more optimistic (and less accurate) they become. (In saying this, EY cites the work of Buehler, 2002. [1])
Is there something from your citation (#26) that overrides the conclusions of Buehler? [2] In fact, #5 was the conclusion proposed in "Planning Fallacy," which I thought was made specifically because examining all the details was so unreliable. In other words, #5 seems to say: forget about all the details; just find similar projects that actually happened and base your timeline on them.
[1] Buehler, R., Griffin, D. and Ross, M. 2002. Inside the planning fallacy: The causes and consequences of optimistic time predictions. Pp. 250-270 in Gilovich, T., Griffin, D. and Kahneman, D. (eds.) Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge, U.K.: Cambridge University Press.
[2] For #6, you cited: Connolly & Dean (1997); Forsyth & Burt (2008); Kruger & Evans (2004).
I've been reading OB and LW for years. I'm sure I've read at least 3/4 of what was posted on OB and 1/2 of highly voted posts on LW. I wouldn't call myself a beginner, and I still really appreciated this post.
Every once in awhile it's nice to be able to step back and be reminded what it's all about, so thanks for writing this!
I point people who are new to this to the Sequences but that always feels like I'm dumping too much on them. Hopefully, this post will make a better introduction to rationality and why we need it.
The autonomous mind, made of unconscious Type 1 processes. There are few individual differences in its operation.
Why do you believe this? (And what state of things is this statement asserting, in more detail?) Something that should become clear after reading Stanovich (2010)?
Well, it's all hearsay. I didn't do any of the experiments. :)
But I assume you're asking me to go one step deeper. Stanovich has a more detailed discussion of this in his book Rationality and the Reflective Mind. In particular, footnote 6 on page 37 cites the following sources on individual differences (and the general lack thereof) in the autonomous mind:
Anderson (2005). Marrying intelligence and cognition: A developmental view. In Sternberg & Pretz (eds.), Cognition and intelligence (pp. 268-287). Cambridge University Press.
Kanazawa (2004). General intelligence as a domain-specific adaptation. Psychological Review, 111: 512-523.
Saffran, Aslin, & Newport (1996). Statistical learning by 8-month-old infants. Science, 274: 1926-1928.
Reber (1992). An evolutionary context for the cognitive unconscious. Philosophical Psychology, 5: 33-51.
Reber (1993). Implicit learning and tacit knowledge. Oxford University Press.
Vinter & Detable (2003). Implicit learning in children and adolescents with mental retardation. American Journal of Mental Retardation, 108: 94-107.
Zacks, Hasher, & Sanft (1982). Automatic encoding of event frequency: Further findings. Journal of Experimental Psy...
The comments on Reddit are worth reading:
Cognitive science is an oxymoron and who ever said the humanity is rational?
Also:
you know, not everything has to be reduced to effieciency and end results. humans and human society is still special even if some shut in bean counter thinks otherwise.
The human brain uses something like a fifth of the oxygen the body uses. The selective pressure against general intelligence would be formidable indeed.
Fun to speculate about a different biology where cognition is not so metabolically expensive, or another where it's even dearer.
(Fixed the missing spaces problem in the notes by replacing a […]. This note is mostly to further inform other authors about a workaround/reason for this annoying bug.)
the odds that he had had the disease even given the positive test were a million to one
Should be "one to a million".
If John's physician prescribed a burdensome treatment because of a test whose false-positive rate is 99.9999%, John needs a lawyer rather than a statistician. :)
Interesting post, but I think there is a typo: « Type 1 processes are computationally expensive ». Shouldn't it be Type 2?
Also, for the Concorde story, what I always heard (being a French citizen) is: « Yes, we know Concorde was losing money, but it is great for the image; it's just a form of advertising. It gives a good image of the French (and UK) aerospace industry, and therefore makes it easier for Airbus to sell normal planes, or for companies like Air France to sell tickets on normal planes. » Now, how much of it is about a posterior rationalizat...
The word "bias" is often associated with the word "prejudice", which has become loaded with rather negative associations. (People don't like others to think of them as "prejudiced".) Especially as I am not a native English speaker, until a week ago (I have been reading LW for a month) I didn't make a distinction between bias and prejudice, since in my language the two words translate to more or less the same thing. Maybe the process of "debiasing" should include associating the word "bias" with "cognitive bias: a pattern of poor judgment", which every human brain has and which is nothing to be ashamed of.
This is a good introductory "big picture" post that describes motivation for developing our craft, some mechanisms that underlie its development, and glimpses into possible future directions. It also locates its topic in the literature.
(Not sure why it's being so weakly upvoted. One reason could be that it poses itself as presenting no new material, and so people skip it, and skipping it, refrain from voting.)
With my current research together with John Vervaeke and Johannes Jaeger, I'm continuing the work on the cognitive science of rationality under uncertainty, bringing together the axiomatic approach (on which Stanovich et al. build) and the ecological approach.
Here I talk about Rationality and Cognitive Science on the ClearerThinking Podcast. Here is a YouTube conversation between me and John, explaining our work and the "paradigm shift in rationality". Here is the preprint of the same argumentation as "Rationality and Relevance Realization". John als...
Type 1 processes provide judgments quickly, but these judgments are often wrong, and can be overridden by corrective Type 2 processes.
This might be the picking-on-small-details-and-feeling-important me, but I really think this is terribly oversimplified. It implies that Type 1 is basically your enemy, or that is what it feels like to me. Truth be told, Type 1 is extremely handy to you as it prevents combinatorial explosion in how you experience reality. I think Type 1 is actually mostly great, because I am really happy that I just pick up a cup when I a...
When I read point 7 for your proposed tools, my immediate question was "How much more likely am I to complete a task if I conduct the simulation?" My initial answer was "something like three times more likely!" But that was type 1 thinking; I was failing to recognize that I still had a 14% chance of completing the task without the simulation. The increase is actually 193%.
I thought that was a nice little example of what you had been discussing in the first half of the article.
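For anyone who wants to check the arithmetic, here is a minimal sketch. The 41% with-simulation figure is inferred from the 14% baseline and the 193% increase mentioned above, so treat the exact numbers as assumptions rather than quotes from the study:

```python
# Relative increase in task-completion rates (illustrative numbers; see note above).
baseline = 0.14          # assumed completion rate without the process simulation
with_simulation = 0.41   # assumed completion rate with the process simulation

ratio = with_simulation / baseline                            # ~2.93x as likely
relative_increase = (with_simulation - baseline) / baseline   # ~1.93, i.e. ~193%

print(f"{ratio:.2f}x as likely, a {relative_increase:.0%} increase")
```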
Note that having type 1 processes is not an error. AIs will also need to have type 1 processes. Being perfectly rational is never optimal if time and resources are an issue. Being perfectly rational would make sense only given infinite time and resources and no competition.
Is the reflective, algorithmic, autonomous hierarchy specifically for "reasoning problems" as in tools for solving non-personal puzzle questions? If yes it seems dangerous to draw too many conclusions from that for every day rationality. If not, what's the evidence that there are "few continuous individual differences" concerning the autonomous mind? For example, people seem to differ a lot in how they are inclined to respond when pressured to something, some seem to need conscious effort to be able to say no, some seem to need conscious effort to avoid responding with visible anger.
Type 1 processes are computationally expensive, and thus humans are 'cognitive misers'. This means that we (1) default to Type 1 processes whenever possible
Should this be "Type 2 processes are computationally expensive?"
I guess this site would be a great source of the "mindware" discussed in this post-- but is it the only one? One would think that people on this site would propagate these ideas more thoroughly, and thus other sites with similar topics would be born. But I haven't heard of any such sites, or any other media of the like either. Weird.
Very useful list. I wonder if there are additions since 2011?
I would add The Reversal Test for eliminating the status quo bias.
Nice article! I was wondering, though, whether there are any theories on why our brain works the way it does. And did we make the same mistakes, say, 1000 years ago? I am new here, so I don't know what the main thoughts on this are, but I did read HPMOR, which seems to suggest that this was always the case. However, it seems to me that people are becoming less and less critical, perhaps because they are too busy or because the level of education has steadily decreased over the last decades. How could we actually prove that these fallacies aren't caused by some external factors?
I've always wanted to know how students/"experts" of cognitive science feel when they realize their limits with respect to perceptual speed, discrimination accuracy, working memory capacity, etc.
Minor quibble - the link to the anchor for the Stanovich Stuff doesn't work if you click on it from the front page - you could change it so that it links directly to http://lesswrong.com/lw/7e5/the_cognitive_science_of_rationality/#HumanReasoning instead of being relative to the current page, but I'm not sure if that would break something later on.
It's a nice introduction to rationality that someone could present to their friends/family, though I do still think that someone who has done no prior reading on it would find it a bit daunting. Chances are, before introducing it to someone, a person might want to make them a little more familiar with the terms used.
I wish I had this back when I was teaching gen-ed science courses in college. I tried to do something similar, but at a much smaller scale. Some random observations that would help flesh the content out:
A big reason "Type 1" reasoning is so often wrong is that these decision-making modules evolved under very different conditions than the ones we currently live in.
I always liked Pinker's description (from "How the Mind Works") of the nature of the conscious mind by reverse-engineering it: it is a simulation of a serial process running on paralle
Definite Upvote for filling the depressingly barren niche that is Introductory Postings! On a blog as big and interconnected as this one, it's hard to know where to start introducing the idea to other people. The new front page was a good start at drawing people in, and I admire your spirit in continuing that pursuit.
I love this post, personally. It starts off very well, with a few juicy bits that prompt serious thinking in the right direction right off the bat. Only problem is, my internal model of people who do not have an explicit interest in rationalit...
What is the main evidence that deliberate reasoning is computationally expensive and where would I go to read about it (books, keywords etc.)? This seems to be a well accepted theory, but I am not familiar with the science.
Type 2 processes are computationally expensive, and thus humans are 'cognitive misers'. This means that we (1) default to Type 1 processes whenever possible, and (2) when we must use Type 2 processes, we use the least expensive kinds of Type 2 processes, those with a 'focal bias' — a disposition to reason from the simplest model available instead of considering all the relevant factors. Hence, we are subject to confirmation bias (our cognition is focused on what we already believe) and other biases.
I don't follow how this results in confirmation bias. Perhaps you could make it more explicit?
(Also, great article. This looks like a good way to introduce people to LW and LW-themed content.)
Epistemic rationality is about forming true beliefs, about getting the map in your head to accurately reflect the world out there.
Since the map describes itself as well, not just the parts of the world other than the map, and being able to reason about the relationship of the map and the world is crucial in the context of epistemic rationality, I object to including the "out there" part in the quoted sentence. The map in your head should accurately reflect the world, not just the part of the world that's "out there".
Ok. Simple question: what has cognitive science done to make the world more rational that any other given field of psychology hasn't done to make people more rational?
For anyone curious, I am not a cognitivist.
""
Penrose uses Gödel’s incompleteness theorem (which states that any sufficiently strong, consistent system of axioms contains true statements it cannot prove, and so is incomplete) and Turing’s halting problem (which states that there are some things which are inherently non-computab
...
(The post is written for beginners. Send the link to your friends! Regular Less Wrong readers may want to jump to the Stanovich material.)
The last 40 years of cognitive science have taught us a great deal about how our brains produce errors in thinking and decision making, and about how we can overcome those errors. These methods can help us form more accurate beliefs and make better decisions.
Long before the first Concorde supersonic jet was completed, the British and French governments developing it realized it would lose money. But they continued to develop the jet when they should have cut their losses, because they felt they had "invested too much to quit"1 (sunk cost fallacy2).
John tested positive for an extremely rare but fatal disease, using a test that is accurate 80% of the time. John didn't have health insurance, and the only available treatment — which his doctor recommended — was very expensive. John agreed to the treatment, his retirement fund was drained to nothing, and during the treatment it was discovered that John did not have the rare disease after all. Later, a statistician explained to John that because the disease is so rare, the chance that he had had the disease even given the positive test was less than one in a million. But neither John's brain nor his doctor's brain had computed this correctly (base rate neglect).
Mary gave money to a charity to save lives in the developing world. Unfortunately, she gave to a charity that saves lives at a cost of $100,000 per life instead of one that saves lives at 1/10th that cost, because the less efficient charity used a vivid picture of a starving child on its advertising, and our brains respond more to single, identifiable victims than to large numbers of victims (identifiability effect3 and scope insensitivity4).
During the last four decades, cognitive scientists have discovered a long list of common thinking errors like these. These errors lead us to false beliefs and poor decisions.
How are these errors produced, and how can we overcome them? Vague advice like "be skeptical" and "think critically" may not help much. Luckily, cognitive scientists know a great deal about the mathematics of correct thinking, how thinking errors are produced, and how we can overcome these errors in order to live more fulfilling lives.
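To make "the mathematics of correct thinking" concrete, here is a minimal sketch of the base rate calculation behind John's example. The prevalence figure (1 in 4,000,000) is an assumption chosen so that an 80%-accurate test yields roughly the "less than one in a million" posterior described above; the point is the structure of Bayes' theorem, not the particular numbers.

```python
# Base rate neglect: compute P(disease | positive test) with Bayes' theorem.
# Assumed numbers: an 80%-accurate test (sensitivity and specificity) and a
# disease prevalence of 1 in 4,000,000.

prevalence = 1 / 4_000_000   # P(disease)
sensitivity = 0.80           # P(positive | disease)
false_positive_rate = 0.20   # P(positive | no disease)

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * false_positive_rate
posterior = true_positives / (true_positives + false_positives)

print(f"P(disease | positive) = {posterior:.2e}")  # about 1e-06, i.e. one in a million

# The same point in 'frequency format', which people find easier to reason with:
# out of 4,000,000 people, about one has the disease, while roughly 800,000
# healthy people would also test positive.
```

Under these assumptions, only about one in a million positive results comes from someone who actually has the disease, which is why the treatment decision in John's story was so badly miscalibrated.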
Rationality
First, what is rationality? It is not the same thing as intelligence, because even those with high intelligence fall prey to some thinking errors as often as everyone else.5 But then, what is rationality?
Cognitive scientists recognize two kinds of rationality:
- Epistemic rationality is about forming true beliefs, about getting the map in your head to accurately reflect the world out there.
- Instrumental rationality is about making decisions that help you achieve your goals, which cognitive scientists model as maximizing 'expected utility'.
In short, rationality improves our choices concerning what to believe and what to do.
Unfortunately, human irrationality is quite common, as shown in popular books like Predictably Irrational: The Hidden Forces that Shape Our Decisions and Kluge: The Haphazard Evolution of the Human Mind.
Ever since Aristotle spoke of humans as the "rational animal," we've had a picture of ourselves as rational beings that are hampered by shortcomings like anger and fear and confirmation bias.
Cognitive science says just the opposite. Cognitive science shows us that humans just are a collection of messy little modules like anger and fear and the modules that produce confirmation bias. We have a few modules for processing logic and probability and rational goal-pursuit, but they are slow and energy-expensive and rarely used.
As we'll see, our brains avoid using these expensive modules whenever possible. Pete Richerson and Robert Boyd explain:
Or, as philosopher David Hull put it:
Human reasoning
So how does human reasoning work, and why does it so often produce mistaken judgments and decisions?
Today, cognitive scientists talk about two kinds of processes, what Daniel Kahneman (2011) calls "fast and slow" processes:
- Type 1 processes are fast, automatic, and unconscious.
- Type 2 processes are slow, deliberate, and conscious.
Type 1 processes provide judgments quickly, but these judgments are often wrong, and can be overridden by corrective Type 2 processes.
Type 2 processes are computationally expensive, and thus humans are 'cognitive misers'. This means that we (1) default to Type 1 processes whenever possible, and (2) when we must use Type 2 processes, we use the least expensive kinds of Type 2 processes, those with a 'focal bias' — a disposition to reason from the simplest model available instead of considering all the relevant factors. Hence, we are subject to confirmation bias (our cognition is focused on what we already believe) and other biases.
So, cognitive miserliness can cause three types of thinking errors:
- defaulting to Type 1 processing when Type 2 override is needed,
- engaging Type 2 processing but with a focal bias, reasoning from the simplest model available, and
- attempting Type 2 override but failing to sustain it.
But the problem gets worse. If someone is going to override Type 1 processes with Type 2 processes, then she also needs the right content available with which to do the overriding. For example, she may need to override a biased intuitive judgment with a correct application of probability theory, or a correct application of deductive logic. Such tools are called 'mindware'.9
Thus, thinking can also go wrong if there is a 'mindware gap' — that is, if an agent lacks crucial mindware like probability theory.
Finally, thinking can go wrong due to 'contaminated mindware' — mindware that exists but is wrong. For example, an agent may have the naive belief that they know their own minds quite well, which is false. Such mistaken mindware can lead to mistaken judgments.
Types of errors
Given this understanding, a taxonomy of thinking errors could begin like this:10
The circles on the left capture the three normal sources of thinking errors. The three rectangles to the right of 'Cognitive Miserliness' capture the three categories of error that can be caused by cognitive miserliness. The rounded rectangles to the right of 'Mindware Gap' and 'Corrupted Mindware' propose some examples of (1) mindware that, if missing, can cause a mindware gap, and (2) common contaminated mindware.
The process for solving a reasoning task, then, may look something like this:11
1. Do I have mindware available to solve the reasoning problem before me with slow, deliberate, Type 2 processes? If not, my brain must use fast but inaccurate Type 1 processes to solve the problem.
2. If I do have mindware available to solve this problem, do I notice the need to engage it? If not, my brain defaults to the cheaper Type 1 processes.
3. If I do notice the need to engage Type 2 processes and have the necessary mindware, is sustained (as opposed to momentary) 'Type 2 override' required to solve the problem? If not, then I use that mindware to solve the problem.
4. If sustained override is required to solve the reasoning problem and I don't have the cognitive capacity (e.g. working memory) needed to complete the override, then my brain will default back to Type 1 processes. Otherwise, I'll use my cognitive capacities to sustain Type 2 override well enough to complete the reasoning task with my Type 2 processes (mindware).
That may be something like how our brains determine how to solve a reasoning task.
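Here is a minimal sketch of that decision flow as code, following the prose above step by step. The function name and the boolean parameters are illustrative labels, not terminology from Stanovich et al.:

```python
def reasoning_path(has_mindware: bool,
                   notices_need_for_type2: bool,
                   needs_sustained_override: bool,
                   can_sustain_override: bool) -> str:
    """Return which kind of processing ends up solving the task."""
    if not has_mindware:
        return "Type 1 (no relevant mindware available)"
    if not notices_need_for_type2:
        return "Type 1 (failed to notice the need for override)"
    if not needs_sustained_override:
        return "Type 2 (momentary override with available mindware)"
    if not can_sustain_override:
        return "Type 1 (override attempted but not sustained)"
    return "Type 2 (sustained override completed with available mindware)"

# Example: an agent who knows probability theory and notices the trap, but lacks
# the working memory to sustain the override, falls back to Type 1 processing.
print(reasoning_path(True, True, True, False))
```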
It's this model that Stanovich and colleagues (2010) use to explain why, among other things, IQ is correlated with performance on some reasoning tasks but not others. For example, IQ correlates with performance on tests of outcome bias and hindsight bias, but not with performance on tests of anchoring effects and omission bias. To overcome these latter biases, subjects seem to need not just high cognitive capacity (fluid intelligence, working memory, etc.), but also specific rationality training.
If this is right, then we may talk of three different 'minds' at work in solving reasoning problems:
- The autonomous mind, made of unconscious Type 1 processes. There are few individual differences in its operation.
- The algorithmic mind: the raw Type 2 processing capacity (fluid intelligence, working memory) that IQ tests measure.
- The reflective mind: the thinking dispositions that determine whether that capacity and the relevant mindware actually get engaged, which is where rationality training does its work.
Rationality Skills
But it is not enough to understand how the human brain produces thinking errors. We also must find ways to meliorate the problem if we want to have more accurate beliefs and more efficiently achieve our goals. As Milkman et al. (2010) say:
Stanovich (2009) sums up our project:
This is the project of 'debiasing' ourselves14 with 'ameliorative psychology'.15
What we want is a Rationality Toolkit: a set of skills and techniques that can be used to overcome and correct the errors of our primate brains so we can form more accurate beliefs and make better decisions.
Our goal is not unlike Carl Sagan's 'Baloney Detection Kit', but the tools in our Rationality Toolkit will be more specific and better grounded in the cognitive science of rationality.
I mentioned some examples of debiasing interventions that have been tested by experimental psychologists in my post Is Rationality Teachable? I'll start with those, then add a few techniques for ameliorating the planning fallacy, and we've got the beginnings of our Rationality Toolkit:
But this is only the start. We need more rationality skills, and we need step-by-step instructions for how to teach them and how to implement them at the 5-second level.
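As one illustration of what a fleshed-out toolkit entry might look like, here is a minimal sketch of reference class forecasting, the 'outside view' approach to the planning fallacy cited in note 25: instead of building an estimate from the details of your own project, base it on how long similar past projects actually took. The helper name and the sample data are assumptions for illustration only.

```python
import statistics

def reference_class_forecast(past_durations_days, percentile=0.8):
    """Estimate a completion time from similar past projects (the 'outside view').

    Returns the duration below which `percentile` of past projects finished,
    rather than an inside-view estimate built from the current project's details.
    """
    ranked = sorted(past_durations_days)
    index = min(len(ranked) - 1, int(percentile * len(ranked)))
    return ranked[index]

# Hypothetical record of how long similar past projects actually took, in days.
past_durations = [30, 35, 41, 44, 52, 60, 75, 90]

print("Median of past projects:", statistics.median(past_durations))                   # 48.0
print("80th-percentile planning estimate:", reference_class_forecast(past_durations))  # 75
```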
Notes
1 Teger (1980).
2 A sunk cost is a cost from the past that cannot be recovered. Because decision makers should consider only the future costs and benefits of the choices before them, sunk costs should be irrelevant to human decisions. Alas, sunk costs regularly do affect human decisions: Knox & Inkster (1968); Arkes & Blumer (1985); Arkes & Ayton (1999); Arkes & Hutzel (2000); Staw (1976); Whyte (1986).
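A minimal sketch of the decision rule this note describes, with invented numbers: only prospective costs and benefits enter the comparison, and the money already spent does not appear in it at all.

```python
def should_continue(expected_future_benefit, expected_future_cost):
    """Decide whether to continue a project using only prospective values.

    The sunk cost is deliberately not a parameter: it cannot be recovered
    either way, so it should not affect the choice.
    """
    return expected_future_benefit > expected_future_cost

# Illustrative, non-historical numbers: continuing costs another 500 (million)
# and is expected to return only 200 (million), so the rational answer is to
# stop, no matter how much has already been spent.
print(should_continue(expected_future_benefit=200, expected_future_cost=500))  # False
```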
3 People are more generous (say, in giving charity) toward a single identifiable victim than toward unidentifiable or statistical victims (Kogut & Ritov 2005a, 2010; Jenni & Loewenstein 1997; Small & Loewenstein 2003; Small et al. 2007; Slovic 2007), even though they say they prefer to give to a group of people (Kogut & Ritov 2005b).
4 Yudkowsky summarizes scope insensitivity:
See also: Kahneman (1986); McFadden & Leonard (1995); Carson & Mitchell (1995); Fetherstonhaugh et al. (1997); Slovic et al. (2011).
5 Stanovich & West (2008); Ross et al. (1977); Krueger (2000).
6 Stanovich et al. (2008) write:
Also see the discussion in Stanovich et al. (2011). On instrumental rationality as the maximization of expected utility, see Dawes (1998); Hastie & Dawes (2009); Wu et al. (2004). On epistemic rationality, see Foley (1987); Harman (1995); Manktelow (2004); Over (2004).
How can we measure an individual's divergence from expected utility maximization if we can't yet measure utility directly? One of the triumphs of decision science is the demonstration that agents whose behavior respects the so-called 'axioms of choice' will behave as if they are maximizing expected utility. It can be difficult to measure utility, but it is easier to measure whether one of the axioms of choice is being violated, and thus whether an agent is behaving in an instrumentally irrational way.
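As a minimal sketch of what 'measuring a violation' can look like in practice, the code below checks a set of pairwise choices for a transitivity violation, one of the standard axioms. The choice data are invented for illustration; real studies use far more careful designs.

```python
from itertools import permutations

def find_transitivity_violation(prefers):
    """Return a triple (a, b, c) where a was chosen over b, b over c, but c over a.

    `prefers` maps an (option, option) pair to True if the first option was
    chosen over the second. Such a cycle means the choices cannot be represented
    as maximizing any fixed utility function.
    """
    options = {x for pair in prefers for x in pair}
    for a, b, c in permutations(options, 3):
        if prefers.get((a, b)) and prefers.get((b, c)) and prefers.get((c, a)):
            return (a, b, c)
    return None

# Invented choices: apple over banana, banana over cherry, cherry over apple.
choices = {("apple", "banana"): True,
           ("banana", "cherry"): True,
           ("cherry", "apple"): True}

print(find_transitivity_violation(choices))  # a cyclic triple, e.g. ('apple', 'banana', 'cherry')
```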
Violations of both instrumental and epistemic rationality have been catalogued at length by cognitive psychologists in the 'heuristics and biases' literature: Baron (2007); Evans (1989, 2007); Gilovich et al. (2002); Kahneman & Tversky (2000); Shafir & LeBoeuf (2002); Stanovich (1999). For the argument against comparing human reasoning practice with normative reasoning models, see Elqayam & Evans (2011).
7 Boyd & Richerson (2005), p. 135.
8 Hull (2000), p. 37.
9 Perkins (1995).
10 Adapted from Stanovich et al. (2008).
11 Adapted from Stanovich et al. (2010).
12 Ackerman et al. (1999); Deary (2000, 2001); Hunt (1987, 1999); Kane & Engle (2002); Lohman (2000); Sternberg (1985, 1997, 2003); Unsworth & Engle (2005).
13 See table 17.1 in Stanovich et al. (2010). The image is from Stanovich (2010).
14 Larrick (2004).
15 Bishop & Trout (2004).
16 Koehler (1994).
17 Koriat et al. (1980). Also see Soll & Klayman (2004); Mussweiler et al. (2000).
18 Larrick et al. (1990).
19 Gigerenzer & Hoffrage (1995).
20 Sedlmeier (1999).
21 Cheng & Wu (2010).
22 Hasher et al. (1981); Reimers & Butler (1992).
23 Clarkson et al. (2002).
24 Block & Harper (1991); George et al. (2000).
25 Lovallo & Kahneman (2003); Buehler et al. (2010); Flyvbjerg (2008); Flyvbjerg et al. (2009).
26 Connolly & Dean (1997); Forsyth & Burt (2008); Kruger & Evans (2004).
27 Taylor et al. (1998). See also Koole & Vant Spijker (2000).
References
Ackerman, Kyllonen & Richards, eds. (1999). Learning and individual differences: Process, trait, and content determinants. American Psychological Association.
Arkes & Blumer (1985). The psychology of sunk cost. Organizational Behavior and Human Decision Processes, 35: 124-140.
Arkes & Ayton (1999). The sunk cost and Concorde effects: Are humans less rational than lower animals? Psychological Bulletin, 125: 591-600.
Arkes & Hutzel (2000). The role of probability of success estimates in the sunk cost effect. Journal of Behavioral Decision Making, 13: 295-306.
Baron (2007). Thinking and Deciding, 4th edition. Cambridge University Press.
Bishop & Trout (2004). Epistemology and the Psychology of Human Judgment. Oxford University Press.
Block & Harper (1991). Overconfidence in estimation: testing the anchoring-and-adjustment hypothesis. Organizational Behavior and Human Decision Processes, 49: 188–207.
Buehler, Griffin, & Ross (1994). Exploring the 'planning fallacy': Why people underestimate their task completion times. Journal of Personality and Social Psychology, 67: 366-381.
Buehler, Griffin, & Ross (1995). It's about time: Optimistic predictions in work and love. European Review of Social Psychology, 6: 1-32.
Buehler, Griffin, & Ross (2002). Inside the planning fallacy: The causes and consequences of optimistic time predictions. In Gilovich, Griffin, & Kahneman (eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 250-270). Cambridge University Press.
Buehler, Griffin, & Peetz (2010). The planning fallacy: cognitive, motivational, and social origins. Advances in Experimental Social Psychology, 43: 1-62.
Carson & Mitchell (1995). Sequencing and Nesting in Contingent Valuation Surveys. Journal of Environmental Economics and Management, 28: 155-73.
Cheng & Wu (2010). Debiasing the framing effect: The effect of warning and involvement. Decision Support Systems, 49: 328-334.
Clarkson, Emby, & Watt (2002). Debiasing the effect of outcome knowledge: the role of instructions in an audit litigation setting. Auditing: A Journal of Practice and Theory, 21: 1–14.
Connolly & Dean (1997). Decomposed versus holistic estimates of effort required for software writing tasks. Management Science, 43: 1029–1045.
Dawes (1998). Behavioral decision making and judgment. In Gilbert, Fiske, & Lindzey (eds.), The handbook of social psychology (Vol. 1, pp. 497–548). McGraw-Hill.
Deary (2000). Looking down on human intelligence: From psychometrics to the brain. Oxford University Press.
Deary (2001). Intelligence: A very short introduction. Oxford University Press.
Desvousges, Johnson, Dunford, Boyle, Hudson, & Wilson (1992). Measuring non-use damages using contingent valuation: experimental evaluation accuracy. Research Triangle Institute Monograph 92-1.
Elqayam & Evans (2011). Subtracting 'ought' from 'is': Descriptivism versus normativism in the study of human thinking. Brain and Behavioral Sciences.
Evans (1989). Bias in Human Reasoning: Causes and Consequences. Lawrence Erlbaum Associates.
Evans (2007). Hypothetical Thinking: Dual Processes in Reasoning and Judgment. Psychology Press.
Fetherstonhaugh, Slovic, Johnson, & Friedrich (1997). Insensitivity to the value of human life: A study of psychophysical numbing. Journal of Risk and Uncertainty, 14: 283-300.
Flyvbjerg (2008). Curbing optimism bias and strategic misrepresentation in planning: Reference class forecasting in practice. European Planning Studies, 16: 3–21.
Flyvbjerg, Garbuio, & Lovallo (2009). Delusion and deception in large infrastructure projects: Two models for explaining and preventing executive disaster. California Management Review, 51: 170–193.
Foley (1987). The Theory of Epistemic Rationality. Harvard University Press.
Forsyth & Burt (2008). Allocating time to future tasks: The effect of task segmentation on planning fallacy bias. Memory and Cognition, 36: 791–798.
George, Duffy, & Ahuja (2000). Countering the anchoring and adjustment bias with decision support systems. Decision Support Systems, 29: 195–206.
Gigerenzer & Hoffrage (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102: 684–704.
Gilovich, Griffin, & Kahneman (eds.) (2002). Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press.
Harman (1995). Rationality. In Smith & Osherson (eds.), Thinking (Vol. 3, pp. 175–211). MIT Press.
Hasher, Attig, & Alba (1981). I knew it all along: or did I? Journal of Verbal Learning and Verbal Behavior, 20: 86-96.
Hastie & Dawes (2009). Rational Choice in an Uncertain World, 2nd edition. Sage.
Hull (2000). Science and selection: Essays on biological evolution and the philosophy of science. Cambridge University Press.
Hunt (1987). The next word on verbal ability. In Vernon (ed.), Speed of information-processing and intelligence (pp. 347–392). Ablex.
Hunt (1999). Intelligence and human resources: Past, present, and future. In Ackerman & Kyllonen (Eds.), The future of learning and individual differences research: Processes, traits, and content (pp. 3-30) American Psychological Association.
Jenni & Loewenstein (1997). Explaining the 'identifiable victim effect.' Journal of Risk and Uncertainty, 14: 235–257.
Kahneman (1986). Comments on the contingent valuation method. In Cummings, Brookshie, & Schulze (eds.), Valuing environmental goods: a state of the arts assessment of the contingent valuation method. Roweman and Allanheld.
Kahneman & Tversky (2000). Choices, Values, and Frames. Cambridge University Press.
Kahneman (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Kane & Engle (2002). The role of prefrontal cortex working-memory capacity, executive attention, and general fluid intelligence: An individual differences perspective. Psychonomic Bulletin and Review, 9: 637–671.
Knox & Inkster (1968). Postdecision dissonance at post time. Journal of Personality and Social Psychology, 8: 319-323.
Koehler (1994). Hypothesis generation and confidence in judgment. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20: 461-469.
Kogut & Ritov (2005a). The 'identified victim effect': An identified group, or just a single individual? Journal of Behavioral Decision Making, 18: 157–167.
Kogut & Ritov (2005b). The singularity effect of identified victims in separate and joint evaluations. Organizational Behavior and Human Decision Processes, 97: 106–116.
Kogut & Ritov (2010). The identifiable victim effect: Causes and boundary conditions. In Oppenheimer & Olivola (eds.), The Science of Giving: Experimental Approaches to the Study of Charity (pp. 133-146). Psychology Press.
Koriat, Lichtenstein, & Fischhoff (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6: 107-118.
Koole & Vant Spijker (2000). Overcoming the planning fallacy through willpower: Effects of implementation intentions on actual and predicted task-completion times. European Journal of Social Psychology, 30: 873–888.
Krueger (2000). Individual differences and Pearson's r: Rationality revealed? Behavioral and Brain Sciences, 23: 684–685.
Kruger & Evans (2004). If you don’t want to be late, enumerate: Unpacking reduces the planning fallacy. Journal of Experimental Social Psychology, 40: 586–598.
Larrick (2004). Debiasing. In Koehler & Harvey (eds.), Blackwell Handbook of Judgment and Decision Making (pp. 316-337). Wiley-Blackwell.
Larrick, Morgan, & Nisbett (1990). Teaching the use of cost-benefit reasoning in everyday life. Psychological Science, 1: 362-370.
Lohman (2000). Complex information processing and intelligence. In Sternberg (ed.), Handbook of intelligence (pp. 285–340). Cambridge University Press.
Lovallo & Kahneman (2003). Delusions of success: How optimism undermines executives' decisions. Harvard Business Review, July 2003: 56-63.
Manktelow (2004). Reasoning and rationality: The pure and the practical. In Manktelow & Chung (eds.), Psychology of reasoning: Theoretical and historical perspectives (pp. 157–177). Psychology Press.
McFadden & Leonard (1995). Issues in the contingent valuation of environmental goods: methodologies for data collection and analysis. In Hausman (ed.), Contingent valuation: a critical assessment. North Holland.
Milkman, Chugh, & Bazerman (2010). How can decision making be improved? Perspectives on Psychological Science 4: 379-383.
Mussweiler, Strack, & Pfeiffer (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26: 1142–50.
Oreg & Bayazit (2009). Prone to Bias: Development of a Bias Taxonomy From an Individual Differences Perspective. Review of General Psychology, 13: 175-193.
Over (2004). Rationality and the normative/descriptive distinction. In Koehler & Harvey (eds.), Blackwell handbook of judgment and decision making (pp. 3–18). Blackwell Publishing.
Peetz, Buehler & Wilson (2010). Planning for the near and distant future: How does temporal distance affect task completion predictions? Journal of Experimental Social Psychology, 46: 709-720.
Perkins (1995). Outsmarting IQ: The emerging science of learnable intelligence. Free Press.
Pezzo, Litman, & Pezzo (2006). On the distinction between yuppies and hippies: Individual differences in prediction biases for planning future tasks. Personality and Individual Differences, 41: 1359-1371.
Reimers & Butler (1992). The effect of outcome knowledge on auditor's judgmental evaluations. Accounting, Organizations and Society, 17: 185–194.
Richerson & Boyd (2005). Not By Genes Alone: How Culture Transformed Human Evolution. University of Chicago Press.
Ross, Greene, & House (1977). The false consensus phenomenon: An attributional bias in self-perception and social perception processes. Journal of Experimental Social Psychology, 13: 279–301.
Roy, Christenfeld, & McKenzie (2005). Underestimating the duration of future events: Memory incorrectly used or memory bias? Psychological Bulletin, 131: 738-756.
Sedlmeier (1999). Improving Statistical Reasoning: Theoretical Models and Practical Implications. Erlbaum.
Shafir & LeBoeuf (2002). Rationality. Annual Review of Psychology, 53: 491–517.
Slovic (2007). If I look at the mass I will never act: Psychic numbing and genocide. Judgment and Decision Making, 2: 1–17.
Slovic, Zionts, Woods, Goodman, & Jinks (2011). Psychic numbing and mass atrocity. In E. Shafir (ed.), The behavioral foundations of policy. Sage and Princeton University Press.
Small & Loewenstein (2003). Helping a victim or helping the victim: Altruism and identifiability. Journal of Risk and Uncertainty, 26: 5–16.
Small, Loewenstein, & Slovic (2007). Sympathy and callousness: The impact of deliberative thought on donations to identifiable and statistical victims. Organizational Behavior and Human Decision Processes, 102: 143–153.
Soll & Klayman (2004). Overconfidence in interval estimates. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30: 299–314.
Stanovich (1999). Who is rational? Studies of individual differences in reasoning. Erlbaum.
Stanovich (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. Yale University Press.
Stanovich & West (2008). On the failure of cognitive ability to predict myside bias and one-sided thinking biases. Thinking and Reasoning, 14: 129–167.
Stanovich, Toplak, & West (2008). The development of rational thought: A taxonomy of heuristics and biases. Advances in Child Development and Behavior, 36: 251-285.
Stanovich, West, & Toplak (2010). Individual differences as essential components of heuristics and biases research. In Manktelow, Over, & Elqayam (eds.), The Science of Reason: A Festschrift for Jonathan St B.T. Evans (pp. 355-396). Psychology Press.
Stanovich, West, & Toplak (2011). Intelligence and rationality. In Sternberg & Kaufman (eds.), Cambridge Handbook of Intelligence, 3rd edition (pp. 784-826). Cambridge University Press.
Staw (1976). Knee-deep in the big muddy: a study of escalating commitment to a chosen course of action. Organizational Behavior and Human Performance, 16: 27-44.
Sternberg (1985). Beyond IQ: A triarchic theory of human intelligence. Cambridge University Press.
Sternberg (1997). Thinking Styles. Cambridge University Press.
Sternberg (2003). Wisdom, intelligence, and creativity synthesized. Cambridge University Press.
Taylor, Pham, Rivkin & Armor (1998). Harnessing the imagination: Mental simulation, self-regulation, and coping. American Psychologist, 53: 429–439.
Teger (1980). Too Much Invested to Quit. Pergamon Press.
Tversky & Kahneman (1979). Intuitive prediction: Biases and corrective procedures. TIMS Studies in Management Science, 12: 313-327.
Tversky & Kahneman (1981). The framing of decisions and the psychology of choice. Science, 211: 453–458.
Unsworth & Engle (2005). Working memory capacity and fluid abilities: Examining the correlation between Operation Span and Raven. Intelligence, 33: 67–81.
Whyte (1986). Escalating Commitment to a Course of Action: A Reinterpretation. The Academy of Management Review, 11: 311-321.
Wu, Zhang, & Gonzalez (2004). Decision under risk. In Koehler & Harvey (eds.), Blackwell handbook of judgment and decision making (pp. 399–423). Blackwell Publishing.