Once upon a time, three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88.1 This is scope insensitivity or scope neglect: the number of birds saved—the scope of the altruistic action—had little effect on willingness to pay.
Similar experiments showed that Toronto residents would pay little more to clean up all polluted lakes in Ontario than polluted lakes in a particular region of Ontario, or that residents of four western US states would pay only 28% more to protect all 57 wilderness areas in those states than to protect a single area.2 People visualize “a single exhausted bird, its feathers soaked in black oil, unable to escape.”3 This image, or prototype, calls forth some level of emotional arousal that is primarily responsible for willingness-to-pay—and the image is the same in all cases. As for scope, it gets tossed out the window—no human can visualize 2,000 birds at once, let alone 200,000. The usual finding is that exponential increases in scope create linear increases in willingness-to-pay—perhaps corresponding to the linear time for our eyes to glaze over the zeroes; this small amount of affect is added, not multiplied, with the prototype affect. This hypothesis is known as “valuation by prototype.”
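The "exponential scope, linear willingness-to-pay" pattern can be made concrete with a quick fit against the reported bird figures. The log-linear model form here is an illustrative assumption, not something fit in the original studies:

```python
import math

# Reported willingness-to-pay for saving 2,000 / 20,000 / 200,000 birds.
data = [(2_000, 80.0), (20_000, 78.0), (200_000, 88.0)]

# Least-squares fit of WTP = a + b * log10(scope): each tenfold
# increase in scope adds a constant b dollars.
xs = [math.log10(n) for n, _ in data]
ys = [wtp for _, wtp in data]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

print(f"WTP ≈ {a:.1f} + {b:.1f} * log10(birds)")
```

On these three data points the fitted slope is $4 per tenfold increase: a hundredfold jump in birds saved buys only about $8 of extra willingness-to-pay.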
An alternative hypothesis is “purchase of moral satisfaction.” People spend enough money to create a warm glow in themselves, a sense of having done their duty. The level of spending needed to purchase a warm glow depends on personality and financial situation, but it certainly has nothing to do with the number of birds.
We are insensitive to scope even when human lives are at stake: Increasing the alleged risk of chlorinated drinking water from 0.004 to 2.43 annual deaths per 1,000—a factor of 600—increased willingness-to-pay from $3.78 to $15.23.4 Baron and Greene found no effect from varying lives saved by a factor of 10.5
A paper entitled “Insensitivity to the value of human life: A study of psychophysical numbing” collected evidence that our perception of human deaths follows Weber’s Law—obeys a logarithmic scale where the “just noticeable difference” is a constant fraction of the whole. A proposed health program to save the lives of Rwandan refugees garnered far higher support when it promised to save 4,500 lives in a camp of 11,000 refugees, rather than 4,500 in a camp of 250,000. A potential disease cure had to promise to save far more lives in order to be judged worthy of funding, if the disease was originally stated to have killed 290,000 rather than 160,000 or 15,000 people per year.6
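A minimal sketch of what Weber's-Law-style valuation implies, assuming perceived severity grows with the logarithm of the death toll (a simplifying assumption for illustration, not the paper's exact model):

```python
import math

# Weber's Law: perceived severity tracks log(deaths), so a "just
# noticeable difference" is a constant *fraction* of the total.
def perceived(deaths):
    return math.log(deaths)

# The same absolute improvement (10,000 lives saved) against
# different baseline death tolls:
drop_small = perceived(15_000) - perceived(5_000)      # 15,000 -> 5,000
drop_large = perceived(290_000) - perceived(280_000)   # 290,000 -> 280,000

print(f"{drop_small:.3f} vs {drop_large:.4f}")
```

Under this model, saving 10,000 lives against a 290,000-death baseline registers roughly thirty times more weakly than saving the same 10,000 lives against a 15,000-death baseline, which is the "psychophysical numbing" pattern the paper reports.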
The moral: If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets real worked up about that poor struggling oil-soaked bird.
1 William H. Desvousges et al., Measuring Nonuse Damages Using Contingent Valuation: An Experimental Evaluation of Accuracy, technical report (Research Triangle Park, NC: RTI International, 2010).
2 Daniel Kahneman, “Comments by Professor Daniel Kahneman,” in Valuing Environmental Goods: An Assessment of the Contingent Valuation Method, ed. Ronald G. Cummings, David S. Brookshire, and William D. Schulze, vol. 1.B, Experimental Methods for Assessing Environmental Benefits (Totowa, NJ: Rowman & Allanheld, 1986), 226–235; Daniel L. McFadden and Gregory K. Leonard, “Issues in the Contingent Valuation of Environmental Goods: Methodologies for Data Collection and Analysis,” in Contingent Valuation: A Critical Assessment, ed. Jerry A. Hausman, Contributions to Economic Analysis 220 (New York: North-Holland, 1993), 165–215.
3 Daniel Kahneman, Ilana Ritov, and David Schkade, “Economic Preferences or Attitude Expressions?: An Analysis of Dollar Responses to Public Issues,” Journal of Risk and Uncertainty 19, nos. 1–3 (1999): 203–235.
4 Richard T. Carson and Robert Cameron Mitchell, “Sequencing and Nesting in Contingent Valuation Surveys,” Journal of Environmental Economics and Management 28, no. 2 (1995): 155–173.
5 Jonathan Baron and Joshua D. Greene, “Determinants of Insensitivity to Quantity in Valuation of Public Goods: Contribution, Warm Glow, Budget Constraints, Availability, and Prominence,” Journal of Experimental Psychology: Applied 2, no. 2 (1996): 107–125.
6 David Fetherstonhaugh et al., “Insensitivity to the Value of Human Life: A Study of Psychophysical Numbing,” Journal of Risk and Uncertainty 14, no. 3 (1997): 283–300.
Hmm... searching for a plausible reason why I would rate one health program higher or lower, this math popped out: Program A promised to save 4,500 of 11,000 refugees; Program B promised to save 4,500 of 250,000 refugees. Program A has a significantly higher "success rate." Since I know nothing about how health programs work, the potentially naive request is that Program A be chosen and sent to work at Site B. Why wouldn't its success rate hold up with larger numbers? I assume that reality has a few gotchas, but I can see the mental reasoning there.
Likewise, for the disease cures, it would make more sense to work on a cure that had a much higher success rate. A cure that works 90% of the time is "better" than a cure that works 10% of the time. The math in terms of lives saved will frustrate the dying and those who care about them, but the value placed on the cure may not be counting lives saved. In these examples, the scope problem may be pointing toward the researchers and the participants valuing different things, rather than the participants' values breaking down around large numbers.
I am interested in comparing Program A (4,500 of 11,000 refugees saved) to a Program C (100,000 of 250,000). The ratios are much closer (41% and 40% saved, respectively). Another option is simply asking, "Which cure is more valuable?" and listing cures with different stats. Would this be enough to learn of any correlations between the amount of support and the perceived value or success rate of the options?
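The comparison proposed above can be made concrete. Program C is the commenter's hypothetical, constructed so its rate matches Program A while its scale matches Program B:

```python
# Success rates for the three programs discussed above.
programs = {
    "A": (4_500, 11_000),     # high-rate framing from the study
    "B": (4_500, 250_000),    # same lives saved, much larger camp
    "C": (100_000, 250_000),  # same camp as B, rate matched to A
}

for name, (saved, total) in programs.items():
    print(f"Program {name}: {saved:,}/{total:,} = {saved / total:.1%} saved")
```

If support tracks success rate rather than lives saved, A and C should draw similar support even though C saves over twenty times as many people, which is exactly the correlation such an experiment would probe.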
Another experiment could explicitly instruct people to assign money to Programs A, B, and C with the goal of saving the most people. Presumably this would help the participants replace whatever values they were using with the value of saving lives. Would the results be different? Why or why not?
This reasoning certainly does not apply to the oiled birds or to protecting wilderness areas. Also of note: I did not read any of the linked articles. Perhaps my questions are answered there?
I don't see how the "potentially naive request" translates to this setting. Say there is a potential cure for disease A which saves 4,500 people of 11,000 afflicted, and a potential cure for disease B which saves 9,000 people of 200,000 afflicted (just to make up some numbers where each potential cure is strictly better along one of the two axes). What's the argument for working on the cure for disease A, rather than for disease B?