The chapter on judgment under uncertainty in the (excellent) new Oxford Handbook of Cognitive Psychology has a handy little section on recent critiques of the "heuristics and biases" tradition. It also discusses problems with the somewhat-competing "fast and frugal heuristics" school of thought, but for now let me just quote the section on heuristics and biases (pp. 608-609):

The heuristics and biases program has been highly influential; however, some have argued that in recent years the influence, at least in psychology, has waned (McKenzie, 2005). This waning has been due in part to pointed critiques of the approach (e.g., Gigerenzer, 1996). This critique comprises two main arguments: (1) that by focusing mainly on coherence standards [e.g. their rationality given the subject's other beliefs, as contrasted with correspondence standards having to do with the real-world accuracy of a subject's beliefs] the approach ignores the role played by the environment or the context in which a judgment is made; and (2) that the explanations of phenomena via one-word labels such as availability, anchoring, and representativeness are vague, insufficient, and say nothing about the processes underlying judgment (see Kahneman, 2003; Kahneman & Tversky, 1996 for responses to this critique).

The accuracy of some of the heuristics proposed by Tversky and Kahneman can be compared to correspondence criteria (availability and anchoring). Thus, arguing that the tradition only uses the “narrow norms” (Gigerenzer, 1996) of coherence criteria is not strictly accurate (cf. Dunwoody, 2009). Nonetheless, responses in famous examples like the Linda problem can be reinterpreted as sensible rather than erroneous if one uses conversational or pragmatic norms rather than those derived from probability theory (Hilton, 1995). For example, Hertwig, Benz and Krauss (2008) asked participants which of the following two statements is more probable:

[X] The percentage of adolescent smokers in Germany decreases at least 15% from current levels by September 1, 2003.

[X&Y] The tobacco tax in Germany is increased by 5 cents per cigarette and the percentage of adolescent smokers in Germany decreases at least 15% from current levels by September 1, 2003.

According to the conjunction rule, [X&Y cannot be more probable than X] and yet the majority of participants ranked the statements in that order. However, when subsequently asked to rank order four statements in order of how well each one described their understanding of X&Y, there was an overwhelming tendency to rank statements like “X and therefore Y” or “X and X is the cause for Y” higher than the simple conjunction “X and Y.” Moreover, the minority of participants who did not commit the conjunction fallacy in the first judgment showed internal coherence by ranking “X and Y” as best describing their understanding in the second judgment. These results suggest that people adopt a causal understanding of the statements, in essence ranking the probability of X given Y as more probable than X occurring alone. If so, then arguably the conjunction “error” is no longer incorrect. (See Moro, 2009 for extensive discussion of the reasons underlying the conjunction fallacy, including why “misunderstanding” cannot explain all instances of the fallacy.)

The “vagueness” argument can be illustrated by considering two related phenomena: the gambler’s fallacy and the hot-hand (Gigerenzer & Brighton, 2009). The gambler’s fallacy is the tendency for people to predict the opposite outcome after a run of the same outcome (e.g., predicting heads after a run of tails when flipping a fair coin); the hot-hand, in contrast, is the tendency to predict a run will continue (e.g., a player making a shot in basketball after a succession of baskets; Gilovich, Vallone, & Tversky, 1985). Ayton and Fischer (2004) pointed out that although these two behaviors are opposite - ending or continuing runs - they have both been explained via the label “representativeness.” In both cases a faulty concept of randomness leads people to expect short sections of a sequence to be “representative” of their generating process. In the case of the coin, people believe (erroneously) that long runs should not occur, so the opposite outcome is predicted; for the player, the presence of long runs rules out a random process so a continuation is predicted (Gilovich et al., 1985). The “representativeness” explanation is therefore incomplete without specifying a priori which of the opposing prior expectations will result. More important, representativeness alone does not explain why people have the misconception that random sequences should exhibit local representativeness when in reality they do not (Ayton & Fischer, 2004).

 

My thanks to MIRI intern Stephen Barnes for transcribing this text.


Krynski & Tenenbaum (2007) propose an intriguing Bayesian explanation for base rate neglect and the role of causality therein.

As for the conjunction fallacy, there are also several experiments by Daniel Osherson and associates supporting its reality (in the sense of it not being due to a misunderstanding of the question), which are available on Osherson's website.

Great reference, thanks!

Eliezer's post Conjunction Controversy (Or, How They Nail It Down) also covers some of the research showing that the conjunction fallacy is not just due to misunderstandings.

It could be in part due to training.

Reyna & Brainerd's Fuzzy-trace Theory research suggested that the conjunction fallacy gets more common with age.

And given that some of my guess-the-teacher's-password heuristics more or less directly invoke conjunction fallacy (Given A, B, C, and D, where D is A+B, D is more likely correct than either A or B), I'm inclined to suspect that education might be strongly reinforcing the bias.

X cannot be more probable than X&Y and get the majority of participants ranked the statements in that order.

There seems to be a meaning-reversing typo in here.

You'd think so, except that the rest of the paragraph doesn't seem to make much sense if this is just a typo, does it? "X and therefore Y" and "X and X is the cause of Y" must both have smaller probability than "X", just like "X and Y". It seems to me that this is a thinko rather than a typo -- somebody really thought that the conjunction X&Y must have higher probability (like, it's X AND Y so its probability is that of X PLUS the probability of Y, or something), while "X and therefore Y" must have lower probability than X. Or something.

Alternatively, perhaps someone felt that ranking "X and Y" interpreted as "X and therefore Y" as more probable than "X" could be interpreted as "the conditional probability of Y given X is higher than the probability of X"? But that seems like an extreme stretch of the words.

Reading the original source might clarify things [i.e., Hertwig, Benz and Krauss (2008)] -- unfortunately I don't have the time right now, anyone?

(BTW, I've always wondered whether, given that the conjunction is on the list as an "alternative" choice, subjects interpret "X" by itself as "X but not Y". I've always thought someone would have done experiments to test that idea, but I haven't looked into the literature deeply enough to know.)

(BTW, I've always wondered whether, given that the conjunction is on the list as an "alternative" choice, subjects interpret "X" by itself as "X but not Y". I've always thought someone would have done experiments to test that idea, but I haven't looked into the literature deeply enough to know.)

Actually, even though this explanation works for the oft-cited Linda case, it turns out that the locus classicus Kahneman & Tversky 1983 already contains several (versions of) experiments that yield conjunction-fallacy-type results but could not plausibly be interpreted in that way.

You'd think so, except that the rest of the paragraph doesn't seem to make much sense if this is just a typo, does it? [...] Alternatively, perhaps someone felt that ranking "X and Y" interpreted as "X and therefore Y" as more probable than "X" could be interpreted as "the conditional probability of Y given X is higher than the probability of X"? But that seems like an extreme stretch of the words.

I think that this was the intended meaning. I was also confused by that paragraph at first, but I settled on the same interpretation as you give here. Granted, it means that the comparison should have been between Y and X&Y, not between X and X&Y.

(BTW, I've always wondered whether, given that the conjunction is on the list as an "alternative" choice, subjects interpret "X" by itself as "X but not Y". I've always thought someone would have done experiments to test that idea, but I haven't looked into the literature deeply enough to know.)

The link to Moro 2009 in AlexSchell's comment discusses possibilities like this.

'get' should be 'yet.'

Just to be clear, that is not the typo that I was referring to. The sentence I quoted should read "X&Y cannot be more probable than X...". But maybe the mistake is in the source?

Oh yeah, good catch. I wasn't actually reading the sentence before. :)

It's wrong in the original text. I've contacted the author, and corrected it (in brackets) in the LW post.

You put your MP3 player on random. You have a playlist of 20 songs. What are the odds that the next song played is the same song which was just played?

In most people's experience, random sequences aren't. It's the idea of true randomness which doesn't conform to the reality people live in.

Related: Somebody flips a coin 100 times. It's come up heads each time. What are the odds it comes up heads on the 101st throw? If you're doing a probability problem, the answer is 50%. But in reality, would you bet even with 1000x1 odds in your favor on that 101st throw being a tails?

Somebody flips a coin 100 times. It's come up heads each time. What are the odds it comes up heads on the 101st throw? If you're doing a probability problem, the answer is 50%.

If the coin is fair, the answer is 50%... but what are the odds that the coin is fair, given that it's come up heads 100 times?

Related: Somebody flips a coin 100 times. It's come up heads each time. What are the odds it comes up heads on the 101st throw? If you're doing a probability problem, the answer is 50%. But in reality, would you bet even with 1000x1 odds in your favor on that 101st throw being a tails?

Well no, the answer isn't 50%. Apply Bayes' theorem, using 0.5 as the prior and the 100 coin flips as the evidence, and you get basically 1 - epsilon, because the coin is most likely biased.

This depends on your prior for the occurrence of biased coins! If you have 10 coins of which one is two-headed and the others normal, and you draw one and start flipping, it doesn't take many flips to be pretty sure you have the two-headed coin. But if biased coins are very rare, it takes a lot more flips.
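This dependence on the prior can be made concrete with a minimal sketch. The numbers are illustrative (a two-hypothesis model: the coin is either fair or two-headed):

```python
from fractions import Fraction

def p_two_headed_given_heads(prior_two_headed, n_heads):
    """Posterior probability that the coin is two-headed after seeing
    n_heads heads in a row, assuming it is either fair or two-headed."""
    p = Fraction(prior_two_headed)
    # Likelihoods: a two-headed coin always shows heads;
    # a fair coin shows n heads in a row with probability (1/2)**n.
    numerator = p * 1
    denominator = p * 1 + (1 - p) * Fraction(1, 2) ** n_heads
    return numerator / denominator

# With 1 coin in 10 two-headed, a handful of flips is already persuasive:
print(float(p_two_headed_given_heads(Fraction(1, 10), 4)))        # 0.64
# With two-headed coins very rare (1 in a billion), 100 straight heads
# still overwhelms the prior:
print(float(p_two_headed_given_heads(Fraction(1, 10**9), 100)))   # ~1.0
```

Either way the likelihood ratio grows as 2^n, so the data eventually swamps any fixed prior; the prior only determines how many flips "eventually" takes.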

Given 2^100 odds, it's more likely the person flipping the coin is using the double-headed quarter I tossed into a wishing well in Mexico ten years previously than that the flips were entirely natural.

It works out to 6.338*10^29 against, assuming we're not favoring a series of heads over tails. At those odds, a casting mistake resulting in a chunk of ferrous material being embedded in the coin and a magnetic anomaly caused by the alignment of the microwave and the toaster and the fact that the television happens to be tuned to channel 29 with a volume setting of 9 start to become viable contenders as explanations.
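The arithmetic behind that figure: 100 identical flips of a fair coin (all heads or all tails) has probability 2 x (1/2)^100 = (1/2)^99, i.e. odds of about 2^99 to 1 against:

```python
# Odds against a fair coin producing 100 identical flips
# (counting both all-heads and all-tails as "identical").
odds_against = 2 ** 99
print(f"{odds_against:.3e}")  # 6.338e+29
```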

If I understand correctly what he meant by “a probability problem”, your prior that the coin is biased is 0.

I meant "The 'correct' answer when you're taking a probability class or are taking part in a study examining gambling fallacies and don't want to be counted as one of the people who 'clearly' doesn't understand probabilities."

That said, I actually had a decent probability and statistics coursework instructor who would have marked "~100%" correct if a decent explanation were provided for the answer. (I answered problems that way all the time, although I don't think that exact question turned up in any tests even though it did turn up in the probability textbook.)

I never took any test with that question with “a coin 100 times”, but I did have one with red/black on a roulette wheel (which I assume would be much harder to fudge than a coin) ten (IIRC) times. (And I answered 18/37 -- the zero is neither black nor red).

You forgot the 00, although it depends on whether you're playing European or US roulette.

Actually, I love roulette. And yes, it's much harder to fudge than a coin; the best that can be managed would be better described as a "nudge" - the timing of the dealer's throw can make a tiny % of difference.

The skill necessary to land the ball exactly where the dealer wants it would be superhuman, but as my dad commented (describing practicing with throwing knives and throwing stars), practice has a tendency to make you luckier. (Yes, my homeschooling lessons involved throwing knives.)

You put your MP3 player on random. You have a playlist of 20 songs. What are the odds that the next song played is the same song which was just played?

I think the option is more typically called "shuffle", which actually accurately represents what it does.

I once had a MP3 player where “random” actually did what it said (for some value of “actually”), including playing the same song twice in a row once in a while.

I did too (up to and including the "some value of actually," as it played 20% of the songs 90% of the time), which is why I brought up that example. It annoyed me to no end that the button did what it said it did.

In general, interfaces should reflect what the user expects the interface to be, rather than what the designer expects the user to interpret the interface to be.

Actually, I kind-of liked its unpredictability. When I didn't want to listen to a song for a second time (and sometimes I did), I just skipped it.
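The baseline rate for a truly random picker repeating the song it just played is 1/20 for a 20-song playlist, which a quick simulation confirms (a minimal sketch; the playlist size and play count are illustrative):

```python
import random

def true_random_repeat_rate(n_songs=20, n_plays=100_000, seed=0):
    """Fraction of plays where a uniformly random 'next song' choice
    repeats the previous song. Expected rate is 1/n_songs."""
    rng = random.Random(seed)
    prev = rng.randrange(n_songs)
    repeats = 0
    for _ in range(n_plays):
        nxt = rng.randrange(n_songs)
        if nxt == prev:
            repeats += 1
        prev = nxt
    return repeats / n_plays

print(true_random_repeat_rate())  # close to 0.05
```

A shuffle, by contrast, draws without replacement, so the repeat rate within one pass of the playlist is exactly zero.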


Related: Somebody flips a coin 100 times. It's come up heads each time. What are the odds it comes up heads on the 101st throw? If you're doing a probability problem, the answer is 50%. But in reality, would you bet even with 1000x1 odds in your favor on that 101st throw being a tails?

In reality, I wouldn't bet either way. It's very likely that there's a trick where the coinflip will go against whichever way I bet.

So, you no box on Newcomb's Problem? :)


Given how much we're now coming to understand about our own cognitive biases, I'm wondering if it's even possible to form a logically coherent statement, of any kind, that commits none of the known logical fallacies and cognitive biases. What would something like that even look like? Is it possible at all, or is the furthest we'll ever get just minimizing our biases as much as we can, approximating our answers and results as best we can in mapping them onto reality?

Thanks to both of you, this was very interesting.

I figure any growing field will make undersupported or conflicting claims occasionally. I'm glad to read that this thorough-seeming review didn't find more in ours.