Once upon a time, three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88.[1] This is scope insensitivity or scope neglect: the number of birds saved—the scope of the altruistic action—had little effect on willingness to pay.
Similar experiments showed that Toronto residents would pay little more to clean up all polluted lakes in Ontario than polluted lakes in a particular region of Ontario, or that residents of four western US states would pay only 28% more to protect all 57 wilderness areas in those states than to protect a single area.[2]

People visualize “a single exhausted bird, its feathers soaked in black oil, unable to escape.”[3] This image, or prototype, calls forth some level of emotional arousal that is primarily responsible for willingness-to-pay—and the image is the same in all cases. As for scope, it gets tossed out the window—no human can visualize 2,000 birds at once, let alone 200,000. The usual finding is that exponential increases in scope create linear increases in willingness-to-pay—perhaps corresponding to the linear time for our eyes to glaze over the zeroes; this small amount of affect is added, not multiplied, with the prototype affect. This hypothesis is known as “valuation by prototype.”
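To see how flat that relationship is, here is a minimal sketch (my own illustration, not an analysis from the cited papers, and it assumes the quoted dollar amounts can be treated as the groups' point estimates) fitting the log-linear pattern willingness-to-pay (WTP) ≈ a + b·log10(scope) to the three bird figures above:

```python
import numpy as np

scope = np.array([2_000, 20_000, 200_000])  # birds saved in each condition
wtp = np.array([80.0, 78.0, 88.0])          # stated willingness to pay ($)

# Least-squares fit of WTP = a + b * log10(scope)
b, a = np.polyfit(np.log10(scope), wtp, 1)
print(f"WTP ≈ ${a:.0f} + ${b:.0f} per tenfold increase in scope")
```

The fitted slope comes out to roughly four dollars per extra factor of ten, an almost flat response to a hundredfold change in scope.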
An alternative hypothesis is “purchase of moral satisfaction.” People spend enough money to create a warm glow in themselves, a sense of having done their duty. The level of spending needed to purchase a warm glow depends on personality and financial situation, but it certainly has nothing to do with the number of birds.
We are insensitive to scope even when human lives are at stake: Increasing the alleged risk of chlorinated drinking water from 0.004 to 2.43 annual deaths per 1,000—a factor of 600—increased willingness-to-pay from $3.78 to $15.23.[4] Baron and Greene found no effect from varying lives saved by a factor of 10.[5]
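As a back-of-the-envelope check on how weak that response is (my own arithmetic, not a figure reported by the study): the stated risk rises by a factor of roughly 600 while payment rises by a factor of about 4, so the implied elasticity of willingness-to-pay with respect to risk is tiny:

$$\frac{\ln(15.23/3.78)}{\ln(2.43/0.004)} \approx \frac{\ln 4.0}{\ln 607} \approx 0.22.$$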
A paper entitled “Insensitivity to the value of human life: A study of psychophysical numbing” collected evidence that our perception of human deaths follows Weber’s Law—obeys a logarithmic scale where the “just noticeable difference” is a constant fraction of the whole. A proposed health program to save the lives of Rwandan refugees garnered far higher support when it promised to save 4,500 lives in a camp of 11,000 refugees, rather than 4,500 in a camp of 250,000. A potential disease cure had to promise to save far more lives in order to be judged worthy of funding, if the disease was originally stated to have killed 290,000 rather than 160,000 or 15,000 people per year.[6]
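The Weber–Fechner relation the paper appeals to can be written compactly (this is the standard textbook form, not a derivation specific to the paper): if the just-noticeable change ΔN must be a constant fraction k of the current toll N, then the perceived magnitude S grows only logarithmically,

$$\frac{\Delta N}{N} = k \quad\Longrightarrow\quad S(N) \propto \ln\frac{N}{N_0},$$

so the felt step from 15,000 to 30,000 deaths is about the same as the felt step from 160,000 to 320,000.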
The moral: If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets real worked up about that poor struggling oil-soaked bird.
[1] William H. Desvousges et al., Measuring Nonuse Damages Using Contingent Valuation: An Experimental Evaluation of Accuracy, technical report (Research Triangle Park, NC: RTI International, 2010).
[2] Daniel Kahneman, “Comments by Professor Daniel Kahneman,” in Valuing Environmental Goods: An Assessment of the Contingent Valuation Method, ed. Ronald G. Cummings, David S. Brookshire, and William D. Schulze, vol. 1.B, Experimental Methods for Assessing Environmental Benefits (Totowa, NJ: Rowman & Allanheld, 1986), 226–235; Daniel L. McFadden and Gregory K. Leonard, “Issues in the Contingent Valuation of Environmental Goods: Methodologies for Data Collection and Analysis,” in Contingent Valuation: A Critical Assessment, ed. Jerry A. Hausman, Contributions to Economic Analysis 220 (New York: North-Holland, 1993), 165–215.
[3] Daniel Kahneman, Ilana Ritov, and David Schkade, “Economic Preferences or Attitude Expressions?: An Analysis of Dollar Responses to Public Issues,” Journal of Risk and Uncertainty 19, nos. 1–3 (1999): 203–235.
[4] Richard T. Carson and Robert Cameron Mitchell, “Sequencing and Nesting in Contingent Valuation Surveys,” Journal of Environmental Economics and Management 28, no. 2 (1995): 155–173.
[5] Jonathan Baron and Joshua D. Greene, “Determinants of Insensitivity to Quantity in Valuation of Public Goods: Contribution, Warm Glow, Budget Constraints, Availability, and Prominence,” Journal of Experimental Psychology: Applied 2, no. 2 (1996): 107–125.
[6] David Fetherstonhaugh et al., “Insensitivity to the Value of Human Life: A Study of Psychophysical Numbing,” Journal of Risk and Uncertainty 14, no. 3 (1997): 283–300.
There is also much counterevidence in the literature, but more importantly, the literature does not clearly establish how scope sensitive people are when they are sensitive (which is often), nor what normative sensitivity would look like given the complexity of the decision problems and of human preferences. It does not tell us how scope sensitive self-identified total-utilitarian-style altruists in particular are, nor what methods they use to assign WTP values. Whether their decisions are normative according to their own professed optimization criteria, and more importantly whether they are more or less normative than a naive "shut up and multiply the salient numbers" approach, remains unknown.
A naive total utilitarian approach is clearly lacking. There are always hidden, unmentioned complexities, such as predetermined ecological niche sizes: 50 saved birds may quickly breed back up to the niche's carrying capacity, whereas a population of 5,000 will simply stay at that limit. Saving 1,000 human lives out of 50,000 is also substantially different from saving 1,000 out of 2,000: realistic attempts at the two will look very different from each other.

Logarithmic scaling is common and can arise naturally from (implicit) consideration of conjunctions, exaggerations, credibility calculations (e.g., whether a positive result would be easy or hard to fake), baselines, opportunity costs, and so on; it is unclear what a normative evaluation of the disutility of wars of various casualty counts would look like, but a logarithmic response does not seem obviously wrong. (The different framings in the original paper suggest different metrics of evaluation; there is no reason to expect consistent valuations across levels of organization. "Deaths per day" offers an uncomplicated metric, while "magnitude of war" prompts highly complex evaluations in which log-normal distributions matter.) The number of lives allegedly to be saved enters utility calculations only additively, and matters less than estimated probabilities of internal success or failure. In brief, a substantial amount of information is not represented by the headline numbers, so substantial deviations from naive additive WTP values should be expected; the sketch below illustrates one such mechanism.
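As a concrete illustration of the credibility point, here is a toy sketch under an assumed discount function of my own choosing (the `credibility` function and its `scale` parameter are hypothetical, not drawn from any cited study):

```python
def credibility(claimed_lives: float, scale: float = 5_000.0) -> float:
    """Assumed discount: credibility falls as a claim grows past `scale`."""
    return 1.0 / (1.0 + claimed_lives / scale)

for claimed in (1_000, 10_000, 100_000, 1_000_000):
    expected = claimed * credibility(claimed)
    print(f"claimed {claimed:>9,} -> roughly {expected:,.0f} expected lives saved")
# A claim of 1,000 yields about 833 expected lives; a claim of 1,000,000 only
# about 4,975, because the discount saturates the expected value near `scale`.
```

Under this particular assumption the response saturates rather than growing logarithmically, but either way the expected value, and any WTP proportional to it, is strongly sublinear in the headline number even for someone who does nothing but multiply.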
Naive total utilitarianism is a fast and frugal algorithm that ignores many considerations and makes no attempt to reach normative decisions. Whether it is more or less consistent with total utilitarians' values than more intuitive approaches is unclear, and which to prefer in the absence of that information is likewise unclear. Finally, meta-level uncertainty about total utilitarianism itself should also be taken into account.
ETA: I should highlight that there is much variance between subjects and between studies. I am not denying that some subjects in some studies simply purchase moral satisfaction or the like (though the research indicates this is uncommon); rather, I am arguing that a non-negligible number of subjects in a non-negligible number of studies might be more effective altruists than any explicitly algorithm- or equation-centered approach would allow for.
ETA2: The above analysis assumes that people's survey responses about why or how they made a decision, or about what affected it, are not generally much correlated with their actual decision processes. This assumption is reasonable, and it isn't strictly necessary, but the argument isn't overwhelmingly disjunctive either.