Once upon a time, three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88.[1] This is scope insensitivity or scope neglect: the number of birds saved—the scope of the altruistic action—had little effect on willingness to pay.
Similar experiments showed that Toronto residents would pay little more to clean up all polluted lakes in Ontario than polluted lakes in a particular region of Ontario, or that residents of four western US states would pay only 28% more to protect all 57 wilderness areas in those states than to protect a single area.[2] People visualize “a single exhausted bird, its feathers soaked in black oil, unable to escape.”[3] This image, or prototype, calls forth some level of emotional arousal that is primarily responsible for willingness-to-pay—and the image is the same in all cases. As for scope, it gets tossed out the window—no human can visualize 2,000 birds at once, let alone 200,000. The usual finding is that exponential increases in scope create linear increases in willingness-to-pay—perhaps corresponding to the linear time for our eyes to glaze over the zeroes; this small amount of affect is added, not multiplied, with the prototype affect. This hypothesis is known as “valuation by prototype.”
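To make the log-linear pattern concrete, here is a minimal sketch in Python (my illustration, not code from any cited study) that fits willingness-to-pay against the logarithm of scope, using the three bird figures quoted above:

```python
# Minimal sketch: fit the log-linear "valuation by prototype" pattern
# WTP = a + b * log10(scope) to the three data points quoted above.
import numpy as np

birds = np.array([2_000, 20_000, 200_000])  # scope of the altruistic action
wtp = np.array([80.0, 78.0, 88.0])          # mean stated willingness-to-pay ($)

# Least-squares fit of WTP against log10(scope); polyfit returns
# the slope first, then the intercept.
b, a = np.polyfit(np.log10(birds), wtp, 1)
print(f"WTP ~ ${a:.2f} + ${b:.2f} * log10(birds)")
# Prints: WTP ~ $64.80 + $4.00 * log10(birds)
# A tenfold increase in birds saved buys about $4 more;
# a hundredfold increase buys about $8 more.
```

The fitted slope, about four dollars per factor of ten, is what “exponential increases in scope create linear increases in willingness-to-pay” amounts to in this data set.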
An alternative hypothesis is “purchase of moral satisfaction.” People spend enough money to create a warm glow in themselves, a sense of having done their duty. The level of spending needed to purchase a warm glow depends on personality and financial situation, but it certainly has nothing to do with the number of birds.
We are insensitive to scope even when human lives are at stake: Increasing the alleged risk of chlorinated drinking water from 0.004 to 2.43 annual deaths per 1,000—a factor of about 600—increased willingness-to-pay only from $3.78 to $15.23.[4] Baron and Greene found no effect from varying lives saved by a factor of ten.[5]
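A back-of-the-envelope check (my arithmetic, not a calculation from the paper) shows how closely this matches the logarithmic pattern: the risk rises by roughly 2.78 orders of magnitude, while the payment rises by only $11.45, which again works out to roughly four dollars per tenfold increase:

```latex
\[
\frac{2.43}{0.004} \approx 607 \approx 10^{2.78},
\qquad
\frac{\$15.23 - \$3.78}{2.78\ \text{decades}} \approx \$4.1\ \text{per tenfold increase in risk.}
\]
```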
A paper entitled “Insensitivity to the value of human life: A study of psychophysical numbing” collected evidence that our perception of human deaths follows Weber’s Law—obeys a logarithmic scale where the “just noticeable difference” is a constant fraction of the whole. A proposed health program to save the lives of Rwandan refugees garnered far higher support when it promised to save 4,500 lives in a camp of 11,000 refugees, rather than 4,500 in a camp of 250,000. A potential disease cure had to promise to save far more lives in order to be judged worthy of funding, if the disease was originally stated to have killed 290,000 rather than 160,000 or 15,000 people per year.[6]
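Spelled out, Weber’s Law says the just-noticeable difference ΔS is a constant fraction k of the current stimulus S; integrating that relation gives the logarithmic (Fechner) scale of perceived magnitude. This is the standard textbook derivation, not an equation taken from the paper itself:

```latex
\[
\frac{\Delta S}{S} = k
\quad\Longrightarrow\quad
P(S) = c \ln\frac{S}{S_0}
\]
```

Under such a scale, moving from 15,000 to 290,000 deaths per year registers as a fixed number of perceptual steps (about 1.3 orders of magnitude), not as 275,000 additional lives.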
The moral: If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets real worked up about that poor struggling oil-soaked bird.
[1] William H. Desvousges et al., Measuring Nonuse Damages Using Contingent Valuation: An Experimental Evaluation of Accuracy, technical report (Research Triangle Park, NC: RTI International, 2010).
[2] Daniel Kahneman, “Comments by Professor Daniel Kahneman,” in Valuing Environmental Goods: An Assessment of the Contingent Valuation Method, ed. Ronald G. Cummings, David S. Brookshire, and William D. Schulze, vol. 1.B, Experimental Methods for Assessing Environmental Benefits (Totowa, NJ: Rowman & Allanheld, 1986), 226–235; Daniel L. McFadden and Gregory K. Leonard, “Issues in the Contingent Valuation of Environmental Goods: Methodologies for Data Collection and Analysis,” in Contingent Valuation: A Critical Assessment, ed. Jerry A. Hausman, Contributions to Economic Analysis 220 (New York: North-Holland, 1993), 165–215.
[3] Daniel Kahneman, Ilana Ritov, and David Schkade, “Economic Preferences or Attitude Expressions?: An Analysis of Dollar Responses to Public Issues,” Journal of Risk and Uncertainty 19, nos. 1–3 (1999): 203–235.
[4] Richard T. Carson and Robert Cameron Mitchell, “Sequencing and Nesting in Contingent Valuation Surveys,” Journal of Environmental Economics and Management 28, no. 2 (1995): 155–173.
[5] Jonathan Baron and Joshua D. Greene, “Determinants of Insensitivity to Quantity in Valuation of Public Goods: Contribution, Warm Glow, Budget Constraints, Availability, and Prominence,” Journal of Experimental Psychology: Applied 2, no. 2 (1996): 107–125.
[6] David Fetherstonhaugh et al., “Insensitivity to the Value of Human Life: A Study of Psychophysical Numbing,” Journal of Risk and Uncertainty 14, no. 3 (1997): 283–300.
This is only somewhat related, since it is less true of overtly political domains, but I am puzzled by how often seemingly reasonable methods support naively counterintuitive conclusions over naively intuitive ones, and yet the naively intuitive conclusions ultimately win; that is, bullet-biting loses to traditionalism. For example, mathematical or statistical arguments, even solid-seeming ones, often fail in practice because they leave out important considerations that the brain's automatic algorithms do not miss.
Ironically, this is especially true in the heuristics and biases literature itself, where the normative math is often misunderstood and experimental results are often misinterpreted. The weakness of the findings in that literature undermines the most commonly cited support for the "world is mad" hypothesis, leaving no alternative wide-scale explanation for any perceived widespread irrationality. A lack of incentives for "rationality" in various domains remains a blanket explanation, but it can explain almost anything, and it hinges, perhaps unjustifiably, on a notion of rationality that may or may not be well-supported. In general, any behavior can be explained away as a response to a set of incentives that does not include objective truth.
If conclusions reached via common human intuitions or epistemic practices are generally more valid than their cited supporting arguments suggest, and if uncommon epistemic practices often lead to conclusions less valid than those practices seem to promise, then it may be wise for those who use uncommon epistemic practices to be relatively more wary of their uncommon conclusions, and relatively more curious about possible explanations of common conclusions, than they otherwise would be. Scientism/falsificationism, Bayesianism, skepticism, and similar philosophically inspired memeplexes are examples of sources of uncommon epistemic practices.