The affect heuristic is when subjective impressions of goodness/badness act as a heuristic—a source of fast, perceptual judgments. Pleasant and unpleasant feelings are central to human reasoning, and the affect heuristic comes with lovely biases—some of my favorites.
Let’s start with one of the relatively less crazy biases. You’re about to move to a new city, and you have to ship an antique grandfather clock. In the first case, the grandfather clock was a gift from your grandparents on your fifth birthday. In the second case, the clock was a gift from a remote relative and you have no special feelings for it. How much would you pay for an insurance policy that paid out $100 if the clock were lost in shipping? According to Hsee and Kunreuther, subjects stated willingness to pay more than twice as much in the first condition.1 This may sound rational—why not pay more to protect the more valuable object?—until you realize that the insurance doesn’t protect the clock, it just pays if the clock is lost, and pays exactly the same amount for either clock. (And yes, it was stated that the insurance was with an outside company, so it gives no special motive to the movers.)
All right, but that doesn’t sound too insane. Maybe you could get away with claiming the subjects were insuring affective outcomes, not financial outcomes—purchase of consolation.
Then how about this? Yamagishi showed that subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal.2 Apparently the mental image of a thousand dead bodies is much more alarming, compared to a single person who’s more likely to survive than not.
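To spell out the arithmetic (a quick sketch in Python; the variable names are mine, and the rates come from the study's two framings):

```python
# The same kind of quantity, described two ways.
deaths_per_10000 = 1286
frequency_as_rate = deaths_per_10000 / 10000   # 0.1286, i.e., 12.86%
percentage_rate = 0.2414                       # 24.14%

# The disease described with vivid absolute numbers is in fact
# markedly *less* fatal than the one described as a bare percentage.
assert frequency_as_rate < percentage_rate
print(f"{frequency_as_rate:.2%} vs. {percentage_rate:.2%}")
# prints "12.86% vs. 24.14%"
```

Subjects nonetheless rated the 12.86% disease as more dangerous, apparently because "1,286 people" evokes bodies while "24.14%" evokes nothing in particular.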
But wait, it gets worse.
Suppose an airport must decide whether to spend money to purchase some new equipment, while critics argue that the money should be spent on other aspects of airport safety. Slovic et al. presented two groups of subjects with the arguments for and against purchasing the equipment, with a response scale ranging from 0 (would not support at all) to 20 (very strong support).3 One group saw the measure described as saving 150 lives. The other group saw the measure described as saving 98% of 150 lives. The hypothesis motivating the experiment was that saving 150 lives sounds vaguely good—is that a lot? a little?—while saving 98% of something is clearly very good because 98% is so close to the upper bound of the percentage scale. Lo and behold, saving 150 lives had mean support of 10.4, while saving 98% of 150 lives had mean support of 13.6.
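The comparison is worth making explicit (a minimal sketch; the support means are from the study, the variable names are mine):

```python
# The two framings describe overlapping outcomes.
lives_saved_plain = 150
lives_saved_framed = 98 * 150 / 100   # 98% of 150 = 147.0

# The "98%" framing saves strictly fewer lives...
assert lives_saved_framed < lives_saved_plain

# ...yet drew stronger support on the 0-20 scale.
support_plain, support_framed = 10.4, 13.6
assert support_framed > support_plain
```

"98%" lands near the top of an easy-to-evaluate scale, so it *feels* like a near-perfect outcome, while "150 lives" floats free of any scale at all.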
Or consider the report of Denes-Raj and Epstein: subjects who were offered an opportunity to win $1 each time they randomly drew a red jelly bean from a bowl often preferred to draw from a bowl with more red beans and a smaller proportion of red beans.4 E.g., 7 in 100 was preferred to 1 in 10.
According to Denes-Raj and Epstein, these subjects reported afterward that even though they knew the probabilities were against them, they felt they had a better chance when there were more red beans. This may sound crazy to you, oh Statistically Sophisticated Reader, but if you think more carefully you’ll realize that it makes perfect sense. A 7% probability versus 10% probability may be bad news, but it’s more than made up for by the increased number of red beans. It’s a worse probability, yes, but you’re still more likely to win, you see. You should meditate upon this thought until you attain enlightenment as to how the rest of the planet thinks about probability.
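The bowls can be compared exactly (a sketch using exact rational arithmetic; the bowl contents are from the study's example):

```python
from fractions import Fraction

# Probability of drawing a red bean from each bowl.
small_bowl = Fraction(1, 10)    # 1 red bean out of 10
large_bowl = Fraction(7, 100)   # 7 red beans out of 100

# More red beans, but a strictly lower chance of winning.
assert large_bowl < small_bowl
print(float(large_bowl), "<", float(small_bowl))
# prints "0.07 < 0.1"
```

The denominator does all the work that the vivid image of "more red beans" undoes.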
As I discussed in “The Scales of Justice, the Notebook of Rationality,” Finucane et al. found that for nuclear reactors, natural gas, and food preservatives, presenting information about high benefits made people perceive lower risks; presenting information about higher risks made people perceive lower benefits; and so on across the quadrants.5 People conflate their judgments about particular good/bad aspects of something into an overall good or bad feeling about that thing.
Finucane et al. also found that time pressure greatly increased the inverse relationship between perceived risk and perceived benefit, consistent with the general finding that time pressure, poor information, or distraction all increase the dominance of perceptual heuristics over analytic deliberation.
Ganzach found the same effect in the realm of finance.6 According to ordinary economic theory, return and risk should correlate positively—or to put it another way, people pay a premium price for safe investments, which lowers the return; stocks deliver higher returns than bonds, but have correspondingly greater risk. When judging familiar stocks, analysts’ judgments of risks and returns were positively correlated, as conventionally predicted. But when judging unfamiliar stocks, analysts tended to judge the stocks as if they were generally good or generally bad—low risk and high returns, or high risk and low returns.
For further reading I recommend Slovic’s fine summary article, “Rational Actors or Rational Fools: Implications of the Affect Heuristic for Behavioral Economics.”
1Christopher K. Hsee and Howard C. Kunreuther, “The Affection Effect in Insurance Decisions,” Journal of Risk and Uncertainty 20, no. 2 (2000): 141–159.
2Kimihiko Yamagishi, “When a 12.86% Mortality Is More Dangerous than 24.14%: Implications for Risk Communication,” Applied Cognitive Psychology 11, no. 6 (1997): 495–506.
3Paul Slovic et al., “Rational Actors or Rational Fools: Implications of the Affect Heuristic for Behavioral Economics,” Journal of Socio-Economics 31, no. 4 (2002): 329–342.
4Veronika Denes-Raj and Seymour Epstein, “Conflict between Intuitive and Rational Processing: When People Behave against Their Better Judgment,” Journal of Personality and Social Psychology 66, no. 5 (1994): 819–829.
5Finucane et al., “The Affect Heuristic in Judgments of Risks and Benefits.”
6Yoav Ganzach, “Judging Risk and Return of Financial Assets,” Organizational Behavior and Human Decision Processes 83, no. 2 (2000): 353–370.