The same idea goes for insisting that the charity you donate to is actually good at its mission. If you get your warm glow from the image of yourself as a good person, and if your dollars follow your glow, then competition among charitable organizations will take the form of trying to get good at triggering that self-image. If you get your glow from results, and if your dollars follow that, then charities will have much better incentives.
Good point. But it is not a given that one can decide where one's glow comes from. So if you think that you get your glow from results, then you might just get your glow from believing that you are very smart, or at least smarter than most people, and thereby trigger your self-image yourself, which almost certainly will make you blind to anything that suggests the opposite (Dunning-Kruger effect), and that is very dangerous. I believe that instead of chasing comfort in the form of glow, one should be very skeptical of the glow and do what one wants rather than what is comfortable.
Your conclusion matches your data, but the data is suspiciously focused on charity. Is scope neglect easier to elicit in such contexts? Other explanations include it being hard to make large numbers relevant, and lack of imagination by researchers.
Douglas, I understand that scope insensitivity decreases substantially, but does not go away entirely, when personal profits are at stake.
It's not easy to devise experiments that distinguish unambiguously between the explanations that center around prototype-dominated affect, versus the warm glow of moral satisfaction. It seems pretty likely that both effects are at work.
How do people react if told "Here is a fixed amount of cash, that must go to charity. How do you wish it to be spent?"
Might that not distinguish "purchase of moral satisfaction" from "scope neglect"?
I strongly favor the "warm glow" explanation, but I'd take it a step further.
For most people, the warm glow is only worth it if they get social credit.
Those yellow LiveStrong bracelets are a great example. They're about $1 or so, and purchasers wear them around all day advertising that they care about cancer. How many of those people would have donated an equivalent amount (just a buck) without the badge of caring they get to wear around?
Those yellow LiveStrong bracelets are a great example. They're about $1 or so, and purchasers wear them around all day advertising that they care about cancer. How many of those people would have donated an equivalent amount (just a buck) without the badge of caring they get to wear around?
Actually, in my experience it's the other way round - people feel they're doing their bit just by wearing the bracelets, so they'll pay less for a bracelet than they'd donate anonymously.
But like most anecdotes, that one story doesn't tell you anything - we need statistics if we want to truly know how people behave.
I'm not sure I buy that this is completely about scope insensitivity rather than marginal utility and people thinking in terms of their fair share of a Kantian solution. Or, put differently, I think the scope insensitivity is partly inherent in the question, rather than a bias of the people answering.
Let's say I'd be willing to spend $100 to save 10 swans from gruesome deaths. How much should I, personally, be willing to spend to save 100 swans from the same fate? $1000? $10,000 for 1,000 swans? What about 100,000 swans -- $1,000,000?
But I don't have $1,000,000, so I can't agree to spend that much, even if I believe that it is somehow intrinsically worth that much. When I'm looking at what I personally spend, I'm comparing my ideas about the value of saving swans to the personal utility I give up by spending that money. $100 is a night out. $1000 is a piece of furniture or a small vacation. $10,000 is a car or a year's rent. $100,000 is a big chunk of my net worth and a sizable percentage of what I consider FU money. As I go up the scale my pain increases non-linearly, and my personal pain is what I'm measuring here.
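The nonlinearity this describes can be sketched with toy numbers (entirely made up, not from any study): if the pain of parting with $x grows quadratically while the value of saved swans grows linearly, the resulting willingness to pay grows only as the square root of the number of swans.

```python
import math

# Toy model (made-up parameters): pain(x) = (x / pain_scale)**2 grows
# superlinearly in dollars spent, while the value of saving swans is
# linear in the number saved. WTP is the dollar amount where pain
# equals value.

def willingness_to_pay(n_swans, value_per_swan=10.0, pain_scale=100.0):
    # Solve (wtp / pain_scale)**2 == value_per_swan * n_swans for wtp.
    return pain_scale * math.sqrt(value_per_swan * n_swans)

for n in (10, 100, 1000):
    print(n, round(willingness_to_pay(n)))  # 100x the swans -> only 10x the WTP
```

Under these assumptions, flat-looking answers can come from a perfectly coherent preference plus a nonlinear personal cost of money, not from innumeracy.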
So considering a massive problem like saving 2 million swans, I might take the Kantian approach. If, say, 10% of people were willing to put $50 toward it, that seems like it would be enough money, so I'll put in $50, figuring that I'd rather live in a world where people are willing to do that than not.
Like many interpretations of studies like this, I think you're pulling the trigger on an irrationality explanation too fast. I believe that what people are thinking here is much more complicated than you're giving them credit for, and with an appropriate model their responses might not appear to be innumerate.
It's a hard question to ask in a way that scales appropriately, because money only has value based on scarcity. You can't say "If you are emperor of a region with unlimited money to spend, what is it worth to save N swans?" because the answer is just "as much as it takes". What you're really interested in is: using 2007 US dollars as units, how much other consumption should be foregone to save N swans? But people can only judge that accurately from their own limited perspective, where they have only so much consumption capacity to go around.
You point out a potential flaw in the reasoning for concluding 'scope insensitivity'. But you then seem to go off into saying that 'scope insensitivity is incorrect', and I don't think you supported that claim enough. Remember, reversed stupidity is not intelligence.
Exactly what I was thinking while I was reading this! Perhaps the example used isn't a good one.
While I agree with your point, I think the big takeaway here is that humans are not always capable of understanding massive scales. Our universe is one such example: our minds just cannot comprehend galactic scales. Yes, there is a pulling of the trigger, as you say, but I think a more reasonable lesson here is that beyond certain magnitudes, numbers just stop making sense to us.
I perceive that I've neglected to convey the existence of a gigantic body of supporting evidence.
Michael Sullivan, see e.g. http://www.sas.upenn.edu/~baron/cv1.htm:
Embedding. Kahneman and Knetsch (1992) asked some subjects their WTP for improved disaster preparedness and other subjects their WTP for improved rescue equipment and personnel. The improved equipment and personnel were thus "embedded in" the improved disaster preparedness, so the preparedness included the equipment and personnel, and other things too. WTP was, however, about the same for the larger good and for the smaller good included in it. Kahneman and Knetsch called this the "perfect embedding effect," presumably because a demonstration of it requires perfect equality of WTP of the two goods. When subjects were asked their WTP for the smaller good after they had just been asked about the larger one, they gave much smaller values for the smaller good than for the larger one, and much smaller values than those given by subjects who were asked just about the smaller good. This order effect is called the "regular embedding effect." It demonstrates that a good seen as embedded in a larger good has reduced value. Kemp & Maxwell (1993) replicated this regular embedding effect, starting with a broad spectrum of public goods, and narrowing the good down in several steps, obtaining WTPs for an embedded good that were 1/300 of WTP for the same good in isolation.

Adding up. In a related demonstration, Diamond et al. (1993) asked subjects their WTP values for preventing timber harvesting in federally protected wilderness areas. WTP for the prohibition in three areas was not much (if any) higher than WTP for prohibition in one of the areas alone. This result cannot be explained by assuming that subjects thought that protection of one area was sufficient: when they were asked their WTP to protect one area assuming that another was already protected, or a third area assuming that two were protected, their WTP values were just as high as those for protecting the first area. More generally, in this kind of "adding-up effect," respondents are asked their WTP for good A (e.g., a single wilderness area), for good B assuming that good A has been provided already, and for goods A and B together. WTP for A and B together is much lower than the sum of the WTP for A and for B (with A provided). Schulze, McClelland, & Lazo (1994) found similar results in a within-subject design: each subject rated A, B, and A and B together.
I perceive that I've neglected to convey the existence of a gigantic body of supporting evidence.
There is much counterevidence in the literature as well, but more importantly the literature does not clearly suggest the extent to which people are scope sensitive when they are (which is often), nor does it suggest what normative sensitivity might look like given the complexities of the decision problems and of human preferences. The literature doesn't tell us the extent to which self-identifying total-utilitarian-style altruists in particular are scope sensitive, nor what methods of assigning WTP values they use. Whether or not their decisions are normative according to their professed optimization criteria, and more importantly whether their decisions are more or less normative than a naive "shut up and multiply the salient numbers" approach, is unknown.
A naive total utilitarian approach is clearly lacking. There are always hidden and unmentioned complexities like predetermined ecological niche sizes: 50 saved birds will quickly breed so as to fill a niche, whereas 5,000 birds will remain at the limits. The difference between 1,000 out of 50,000 versus 1,000 out of 2,000 human lives saved is substantial: realistic attempts at either will look very different from each other. Logarithmic scaling is common and can be a natural result of (implicit) consideration of conjunctions, exaggerations, credibility calculations (like whether it'd be easy or difficult to fake a positive result), baselines, opportunity costs, and so on; it is unclear what a normative evaluation of disutility from wars of various casualty counts would look like, but logarithmic scaling doesn't seem obviously wrong. (The different framings in the original paper suggest different metrics for evaluation; there's no reason to expect consistent valuations across levels of organization. "Deaths per day" offers an uncomplicated metric, while "magnitude of war" prompts highly complex evaluations where log-normal distributions are significant.) Lives allegedly to be saved affect utility calculations only additively, less than do estimated probabilities of internal successes or failures. In brief, a substantial amount of information is not represented by the numbers, and so substantial deviations from naive additive WTP values should be expected.
Naive total utilitarianism is a fast and frugal algorithm which ignores many considerations and makes no attempt to reach normative decisions. Whether it's more or less consistent with total utilitarians' values than more intuitive approaches is unclear, and which to prefer in the absence of such information is likewise unclear. Finally, don't forget that meta-level uncertainty about total utilitarianism should be taken into account.
ETA: I should highlight that there is much variance between subjects and between studies. I do not argue that some subjects in some studies don't simply purchase moral satisfaction or the like (though the research indicates this is uncommon), but I do argue that some non-negligible number of subjects in some non-negligible number of studies might be more effective altruists than any explicitly algorithm/equation-centered approach would allow for.
ETA2: The above analysis assumes that people's responses to surveys about why/how they made a decision or what affected them aren't generally correlated much with their actual decision processes. This assumption is reasonable, and it isn't strictly necessary, but it's not overwhelmingly disjunctive.
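The logarithmic-scaling point above can be illustrated with toy numbers (the parameters are mine, chosen so the smallest case matches roughly $80; they are not from the literature): even a logarithmic valuation of birds saved predicts noticeably more scope sensitivity than the near-flat $80/$78/$88 actually reported.

```python
import math

# Toy comparison: linear valuation vs. logarithmic valuation of n birds.
# Parameters are illustrative only.

def linear_wtp(n, per_bird=0.04):
    return per_bird * n          # $80 for 2,000 birds, $8,000 for 200,000

def log_wtp(n, k=10.5):
    return k * math.log(n)       # ~$80 for 2,000 birds, ~$128 for 200,000

for n in (2000, 20000, 200000):
    print(n, round(linear_wtp(n)), round(log_wtp(n)))
```

Even the logarithmic curve rises by roughly 60% over a 100x increase in scope, so the reported responses are flatter still; but a log-like valuation is far closer to the data than naive addition.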
How many lives an action saves is less important than the emotional connotations of the act which takes the lives. Take micronutrient dispersal programs vs. terrorism. Malnutrition kills orders of magnitude more people, and yet far more money is spent on terrorism prevention (well, mostly terrorism-prevention signaling, but that's another topic). This is because fighting terrorism is more exciting than fighting scurvy. The order-of-magnitude difference in impact is ignored when evaluating which thing to spend money on. This makes choosing terrorism easier, as saving 100 people from terrorism is much better for public relations than saving 100 random kids with goiters.
I came across an interesting book that includes the topic of scope insensitivity: "Determining the value of non-marketed goods: economics, psychological, and policy relevant aspects of contingent valuation methods" by Raymond J. Kopp, Werner W. Pommerehne, Norbert Schwarz. They suggest that while scope insensitivity on surveys is possible, it is not inevitable.
After providing an impressive list of studies rejecting the insensitivity hypothesis, they highlight two in particular: "First, the scope insensitivity hypothesis is strongly rejected (p<.001) by two large recent in-person contingent valuation studies, Carson, Wilks and Imber (1994) and Carson et al. (1994), which used extensive visual aids and very clean experimental designs to value goods thought to have substantial passive use considerations."
In order to prevent scope insensitivity, they suggest that the "respondent must (i) clearly understand the characteristics of the good they are asked to value, (ii) find the CV scenario elements related to the good's provision plausible, and (iii) answer the CV questions in a deliberate and meaningful manner."
The world of business tends to emphasize pattern over particular. But the abstract, intellectual nature of patterns prevents people from caring.
So the marketer would win by using the sad and more concrete image of the oily bird, persuading more people by means of the Ludic fallacy.
A proposed health program to save the lives of Rwandan refugees garnered far higher support when it promised to save 4,500 lives in a camp of 11,000 refugees, rather than 4,500 in a camp of 250,000. A potential disease cure had to promise to save far more lives in order to be judged worthy of funding, if the disease was originally stated to have killed 290,000 rather than 160,000 or 15,000 people per year.
Hmm... pinging my head for a plausible reason why I would rate one health program higher or lower, this math popped out: Program A promised to save 4,500 / 11,000 refugees; Program B promised to save 4,500 / 250,000 refugees. Program A has a significantly higher "success rate." Since I know nothing about how health programs work, the potentially naive request is that Program A be chosen and sent to work at Site B. Why wouldn't its success rate work with larger numbers? I assume that reality has a few gotchas, but I can see the mental reasoning there.
Likewise, for the disease cures, it would make more sense to work on a cure that had a much higher success rate. A cure that works 90% of the time is "better" than a cure that works 10% of the time. The math in terms of lives saved will frustrate the dying and those who care about them, but the value placed on the cure may not be counting lives saved. In these examples, the scope problem may be pointing toward the researchers and the participants valuing different things, instead of the participants' values breaking down around large numbers.
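The two metrics being contrasted can be made explicit (the numbers are the ones from the refugee example above):

```python
# Ranking by lives saved treats the two programs as equal; ranking by
# success rate strongly favors Program A.

programs = {
    "A": {"saved": 4500, "population": 11000},
    "B": {"saved": 4500, "population": 250000},
}

for name, p in programs.items():
    rate = p["saved"] / p["population"]
    print(f"Program {name}: {p['saved']} lives saved, success rate {rate:.1%}")
    # Program A: 40.9%; Program B: 1.8%
```

Which ranking is "correct" depends on whether participants are valuing lives or valuing program effectiveness, which is exactly the ambiguity raised above.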
I am interested in comparing Program A (4,500 / 11,000 refugees saved) to a Program C (100,000 / 250,000). The ratios are much closer (41% saved and 40%, respectively). Another option is merely asking, "Which cure is more valuable?" and listing the cures with different stats. Would this be enough to learn of any correlations between the amount of support and the perceived value/success of the options?
Another experiment could explicitly instruct people to assign money to Programs A, B, and C with the goal of saving the most people. Presumably this would help the participants replace whatever values they have with the value of saving lives. Would the results be different? Why or why not?
This certainly does not apply to the oiled birds or protecting wilderness. Also of note, I did not read any of the linked articles. Perhaps my questions are answered there?
By that math, saving one person with 100% probability is worth the same as saving the entire population of earth with 100% probability, is it not?
Likewise, for the disease cures, it would make more sense to work on a cure that had a much higher success rate.
I don't see how the "potentially naive request" translates to this setting. Say there is a potential cure for disease A which saves 4,500 people of 11,000 afflicted, and a potential cure for disease B which saves 9,000 people of 200,000 afflicted (just to make up some numbers where each potential cure is strictly better along one of the two axes). What's the argument for working on the cure for disease A, rather than for disease B?
(I'm not going to argue with the "send Program A to work at Site B" argument, but I am also skeptical that many people in the study actually took it into account.)
Once upon a time, three groups of subjects were asked how much they would pay to save 2000 / 20000 / 200000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88 [1]. This is scope insensitivity or scope neglect: the number of birds saved - the scope of the altruistic action - had little effect on willingness to pay.
An alternate interpretation is that people conceptualize the problem not in terms of absolute number of birds saved, but in terms of the fraction of birds saved. And they have no idea how many migrating birds there are. Presenting the number 2000, or 200,000, probably suggests to them that that's on the order of how many migrating birds there are.
Keyword connection for literature searches: Loss Aversion
Jonah Lehrer has blogged recently about what he described as loss aversion -- doctors will take more risks if the same problem is framed as reducing loss of life rather than saving lives.
He summarizes some of the same papers mentioned in the post and also a new Feb 2010 PNAS paper, "Amygdala damage eliminates monetary loss aversion".
The ideas overlap with those in Circular Altruism, I'm not sure which post I originally meant to make this comment under.
Vegetarianism is similar. I know many vegetarians who only think about the poor cow who now is served as dinner instead of the thousands of animals who are killed by pesticides, fertilizers, and mechanized farming equipment needed to grow a bowl of soy beans.
We should not make decisions based on emotional reactions. They do not scale.
I haven't read the studies. I'd like your opinion on the following idea. Could it be that the way to ask the question relates to the type of curve you get? Could you lead someone to come up with a linear ramp-up of money?
Also: how does the amount the subjects stated compare to the actual cost? If I have to save one bird, it might cost me a few hundred dollars in travel expenses, etc. But saving two birds is only slightly more.
Vegetarianism is similar. I know many vegetarians who only think about the poor cow who now is served as dinner instead of the thousands of animals who are killed by pesticides, fertilizers, and mechanized farming equipment needed to grow a bowl of soy beans.
If they did, would their opinion change?
I think mining is nasty, dirty, and dangerous. But I love uranium mining, even though the ore is radioactive. Why? Because each kilogram of uranium ore you pull out of the ground replaces at least ten* kilograms of coal. Uranium mining represents a net reduction to the total amount of mining that happens (with a constant energy load).
Likewise, when you go from growing plants to feed a cow to feed a human to growing plants to feed a human, you reduce the amount of plants necessary at least tenfold,* which similarly sounds like a tenfold reduction in the animals killed by farming processes.
So the thing that vegetarians aren't thinking about strengthens their argument. Are you sure you're thinking clearly about this issue, instead of trying to score points?
* I don't have the time/energy to look up the actual numbers at the moment- I'm >98% confident they're over 10 times, and strongly suspect they're less than 100.
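A back-of-the-envelope version of the tenfold claim, using the common ~10% trophic-efficiency rule of thumb (a textbook figure, not taken from this thread's sources):

```python
# Roughly 10% of the plant calories fed to an animal come back out as
# food, so routing calories through a cow multiplies the plants
# required by about 10x.

TROPHIC_EFFICIENCY = 0.10  # assumed fraction of plant calories recovered as meat

def plant_calories_needed(human_calories, via_animal):
    if via_animal:
        return human_calories / TROPHIC_EFFICIENCY
    return human_calories

print(plant_calories_needed(2000, via_animal=True))   # ~20,000 plant calories
print(plant_calories_needed(2000, via_animal=False))  # 2,000 plant calories
```

If the animals-killed-by-farming count scales with the plants grown, the same factor applies to that count, which is the argument being made here.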
Yes, 10 times as many plants need to be grown, but the harvest methods are quite different.
A cow provides fertilizer (manure) and the farming equipment (it eats the grass there).
I suspect, based on my recollections, that sun->plants is about 1% efficient, plants->animals is 10%, and animal->animal is also 10%.
Also, per kg, meat is more energy-dense, so you are shipping less of it.
A cow provides fertilizer (manure) and the farming equipment (it eats the grass there).
That's for free range grass fed cattle. I doubt that is >10% of the beef market.
True.
I am from Australia, though.
http://www.anra.gov.au/topics/agriculture/beef/index.html
20 million total cows vs half a million in feedlots.
http://micpohling.wordpress.com/2007/04/08/world-top-15-country-on-highest-number-of-cattle/
Brazil is one of the countries with the most cows
http://beefmagazine.com/mag/beef_brazilian_beef/
One “missing picture” in the Brazilian cattle industry though, is that of a North American-style feedlot. Only 4% of the cattle killed each year are “fattened” in feedlots. With Europe being Brazil's main beef export market, the majority is grown to finish under a hormone-free regime on grass pastures.
Likewise, when you go from growing plants to feed a cow to feed a human to growing plants to feed a human, you reduce the amount of plants necessary at least tenfold,* which similarly sounds like a tenfold reduction in the animals killed by farming processes.
This is the main motivation for many vegetarians, from an energy reduction perspective. Ten times (approximately) more plants means ten times (approximately) the energy taken for the same amount of food/energy for the consumer.
So the thing that vegetarians aren't thinking about strengthens their argument.
This is only somewhat related, as it is less true of overtly political domains, but I am confused by the frequency with which seemingly reasonable methods support naively counter-intuitive conclusions against naively intuitive conclusions where ultimately the naively intuitive conclusions win, i.e. where bullet biting loses to traditionalism. E.g. mathematical or statistical arguments, even solid-seeming ones, often lose in practice due to leaving out important considerations which the brain's automatic algorithms don't miss.
Ironically this is especially true in the heuristics and biases literature, where normative math is often misunderstood and experimental results are often misinterpreted. The weakness of the findings in the heuristics and biases literature undermines the most commonly cited support for the "the world is mad" hypothesis, and so there is a lack of alternative wide-scale explanations for any perceived widespread irrationality. Lack of incentives for "rationality" in various domains remains a blanket explanation, but it can explain almost anything and is perhaps unjustifiably hinged on a notion of rationality that might or might not be well-supported. In general, any behavior can be explained away as a response to a set of incentives that does not include objective truth.
If conclusions reached via common human intuitions or epistemic practices are generally more valid than is suggested by their cited supporting arguments, and if uncommon epistemic practices often lead to conclusions that are less valid than those practices seem to suggest, then it may be wise for those who utilize uncommon epistemic practices to be relatively more wary of their uncommon conclusions and relatively more curious about possible explanations of common conclusions than they otherwise would have been. Scientism/falsificationism, Bayesianism, skepticism, and similar philosophically-inspired memeplexes are examples of sources of uncommon epistemic practices.
If three groups of subjects were asked how much they would pay to save 2000/20000/200000 birds... Was one group asked how much they would pay to save 2000 birds, another group asked about 20000 birds, and the final group asked about 200000 birds? Or was one group asked how much they would pay to save 2000, then 20000, then 200000 birds, and the experiment repeated on the other two groups? I think I was reading too hard into the subtext, but I'm leaning towards the first one; can anyone elaborate?
This may be nitpicky, but I found an erratum in the references: [3] should, I believe, be 1993 instead of 1995.
That said, there are 3 broken links for me - [4], [6] and [7] - and the non-broken links don't seem to currently be providing full text access. So, here's an updated references table, with links to full text access in each except for the book in [3] which has an amazon link instead:
[1] Desvousges, W., Johnson, R., Dunford, R., Boyle, K. J., Hudson, S. and Wilson, K. N. 1992. Measuring non-use damages using contingent valuation: experimental evaluation of accuracy. Research Triangle Institute Monograph 92-1.
[2] Kahneman, D. 1986. Comments on the contingent valuation method. Pp. 185-194 in Valuing environmental goods: a state of the art assessment of the contingent valuation method, eds. R. G. Cummings, D. S. Brookshire and W. D. Schulze. Totowa, NJ: Rowman and Allanheld.
[3] McFadden, D. and Leonard, G. 1993. Issues in the contingent valuation of environmental goods: methodologies for data collection and analysis. In Contingent valuation: a critical assessment, ed. J. A. Hausman. Amsterdam: North Holland.
[4] Kahneman, D., Ritov, I. and Schkade, D. A. 1999. Economic Preferences or Attitude Expressions?: An Analysis of Dollar Responses to Public Issues, Journal of Risk and Uncertainty, 19: 203-235.
[5] Carson, R. T. and Mitchell, R. C. 1995. Sequencing and Nesting in Contingent Valuation Surveys. Journal of Environmental Economics and Management, 28(2): 155-73.
[6] Baron, J. and Greene, J. 1996. Determinants of insensitivity to quantity in valuation of public goods: contribution, warm glow, budget constraints, availability, and prominence. Journal of Experimental Psychology: Applied, 2: 107-125.
[7] Fetherstonhaugh, D., Slovic, P., Johnson, S. and Friedrich, J. 1997. Insensitivity to the value of human life: A study of psychophysical numbing. Journal of Risk and Uncertainty, 14: 283-300.
From Abhijit V. Benerjee and Esther Duflo's Poor Economics,
Researchers gave students $5 to fill out a short survey. They then showed them a flyer and asked them to make a donation to Save the Children, one of the world’s leading charities. There were two different flyers. Some (randomly selected) students were shown this: "Food shortages in Malawi are affecting more than 3 million children; In Zambia, severe rainfall deficits have resulted in a 42% drop in maize production from 2000. As a result, an estimated 3 million Zambians face hunger; Four million Angolans—one third of the population—have been forced to flee their homes; More than 11 million people in Ethiopia need immediate food assistance."
Other students were shown a flyer featuring a picture of a young girl and these words: "Rokia, a 7-year-old girl from Mali, Africa, is desperately poor and faces a threat of severe hunger or even starvation. Her life will be changed for the better as a result of your financial gift. With your support, and the support of other caring sponsors, Save the Children will work with Rokia’s family and other members of the community to help feed her, provide her with education, as well as basic medical care and hygiene education."

The first flyer raised an average of $1.16 from each student. The second flyer, in which the plight of millions became the plight of one, raised $2.83. The students, it seems, were willing to take some responsibility for helping Rokia, but when faced with the scale of the global problem, they felt discouraged.
Some other students, also chosen at random, were shown the same two flyers after being told that people are more likely to donate money to an identifiable victim than when presented with general information. Those shown the first flyer, for Zambia, Angola, and Mali, gave more or less what that flyer had raised without the warning—$1.26. Those shown the second flyer, for Rokia, after this warning gave only $1.36, less than half of what their colleagues had committed without it. Encouraging students to think again prompted them to be less generous to Rokia, but not more generous to everyone else in Mali.
Encouraging students to think again prompted them to be less generous to Rokia, but not more generous to everyone else in Mali.
Interesting study. What was the reason given for the warned students not giving more money to everyone in Mali?
The uncertainty in how many people would be saved anyway without intervention is (as an absolute number, not a percentage) much larger for the 250000 people case. If someone claims to save 4500 people, and the uncertainty is greater than 4500, I may be skeptical that they can save anyone at all.
Imagine it as medical trials instead. If I claim I can cure 4500 out of 250000 people it may be that I can't cure anyone at all and I'm just counting the spontaneous remissions as "cures". If I claim I can cure 4500 out of 11000 people, it's very unlikely that they would have all recovered spontaneously.
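The signal-versus-baseline argument can be sketched numerically (the 1% to 3% spontaneous-remission range below is invented purely for illustration):

```python
# If spontaneous remission runs somewhere between 1% and 3%, a claim of
# 4,500 cures is unmistakable in a population of 11,000 but could be
# pure noise in a population of 250,000.

claimed_cures = 4500

def baseline_range(population, low_rate=0.01, high_rate=0.03):
    # Expected remissions with no treatment at all, under the assumed rates.
    return population * low_rate, population * high_rate

for population in (11000, 250000):
    low, high = baseline_range(population)
    print(population, (low, high), "claim within baseline:", claimed_cures <= high)
```

For 11,000 patients the baseline is roughly 110 to 330 remissions, so 4,500 cures clearly exceed chance; for 250,000 the baseline is roughly 2,500 to 7,500, and a claim of 4,500 could be nothing but spontaneous remissions.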
Do we value saving lives independently of the good feelings we get from it?
Sure: there are the issues of rewards, reputation, and status to consider. The effect of saving lives on the first may scale somewhat linearly, but the effect on the others certainly does not.
Who are you to say, for a given person Z, that the value of saving X units of valuable commodity Y is linear?
Real world example: the fire alarm goes off in my apartment building at night about once every two weeks. Many people decide to stay in their rooms rather than evacuate the building. They don't grasp the magnitude of how bad it would be if there were a fire and they ended up seriously injured or dead. (There have been two real fires so far; the chance of a real fire is not trivial.)
Counterpoint: do you understand the magnitude of how bad it would be if there was a fire and you ended up getting seriously injured or dying?
You continue to live in the apartment building which already had two fires and which has a malfunctioning alarm system.
I don't. I'm not scope sensitive. The alarm system is working fine, it's just that it's sensitive to people who are cooking (I think). I'm eager to move out ASAP though.
I hope you have renter's insurance, knowledge of a couple evacuation routes, and backups for any important data and papers and such.
I’m not questioning scope insensitivity in general here, but can someone explain to me why it matters what number of birds they’re trying to save? Obviously, your contribution alone is not going to save them all (unless you’re rich and donating a lot of money), and, if you don’t know anything about how efficient those programs are, you may as well assume a fixed amount of money will save a fixed number of birds.
I think the original stipulation was not "how much would you give to a program saving X, Y or Z birds?", but "how much would you pay to save X, Y or Z birds?" in which the fixed amount of money is explicitly saving different numbers.
I have a question. (I’m not questioning the scope insensitivity though, well, kinda)
Let’s just say people would pay the same amount for 2000, 20000, and 200000 birds saved. But wouldn’t 200000 birds cause a bigger reaction in society, so that more people would pay?
Let’s say there are 1000 people paying for 2000 birds (each 80 dollars). But 20000 would raise a stronger attention which leads to 10000 people willing to pay, same for 200000 birds saved.
I think. It also might be somehow related to the bystander effect, as people generally believe if there’s 200000 birds drowning from oil, there would be more of other people paying. Which gives them the feeling of: other people probably already paid more, why would I need to increase the number?
People tend to have a feeling along the lines of: “ they must have asked more people to donate for all the lakes in Ontario than just one area, if more people pay, then the result is the same, even if I pay only a little.”
If X is the amount each person is willing to pay and Z is the amount of money needed to save the birds, there’s also another factor, Y: how many people are willing to pay.
If for 2000 birds X times Y equals Z,
and the number of birds increases to 20000,
X doesn’t change,
Z is ten times more than before,
but isn’t Y also ten times more?
X times 10Y equals 10Z.
So is it not a bias anymore? I just feel like the real situation might be more complex.
Although I’m not sure whether my assumption of “the bigger social attention” is true. I don’t know anything, I’m still in high school. (English is also not my first language.)
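The arithmetic in this comment can be sketched concretely. A minimal illustration of the assumption that the donor pool scales with the number of birds - all the specific numbers here (per-donor payment, cost per bird, pool size) are hypothetical:

```python
# Sketch of the comment's assumption: per-donor willingness-to-pay X is
# fixed (scope-insensitive), but the donor pool Y grows with the number
# of birds, so total funds X * Y keep pace with the total cost Z.
X = 80.0              # dollars each donor is willing to pay (hypothetical)
cost_per_bird = 40.0  # hypothetical, so Z = 40 * birds

for birds in (2_000, 20_000, 200_000):
    donors = birds // 2              # assumption: pool scales with attention
    raised = X * donors              # X * Y
    needed = cost_per_bird * birds   # Z
    assert raised == needed          # ten times the birds, ten times the donors
```

Under these made-up numbers the totals balance at every scale, which is the commenter's point: if attention really did recruit donors in proportion to the problem, flat per-person payments would not be a bias at the group level.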
I can see natural situations where scope insensitivity seems to be the right response:
It even seems to me that any personal contribution must be intrinsically scope insensitive w.r.t. the denominator (the "out of how many" birds/humans/...), because no single person can possibly pay alone for a solution to a problem that affects a billion humans.
Could one way of thinking about this be that our sensitivity to loss for the nth being removed from ourselves decreases as the degrees of separation increase? My example would be the indiscriminate termination of the indigenous tribes of the Americas upon its 'discovery'.
I ran across a work of fiction that proposed an interesting hypothesis as to why we have some of this programming:
If you're a primitive tribesman, and something wipes out half your kin in a single incident, that's probably not something you can pick up your spear and hope to fight with any effectiveness. But if it gets just one or two, that might be something you can take on and win. And so as numbers grow larger we tend to grow numb to it and prefer avoidance over confrontation as a survival strategy.
I find the remark about the exponential increase in scope inducing a linear increase in willingness-to-pay perhaps being due to the number of zeroes quite amusing, and it leads me to speculate how a different base numbering system would change the willingness-to-pay.
I predict that given identical proficiency in any base b numbering system, a base-2 numbering system would decrease willingness-to-pay for an identical exponential increase in scope, and a base-16 numbering system would increase it, as a result of the shorter length representations!
I immediately and conclusively conclude that if we were to do away with our silly digits and embrace hexadecimality then the average human would be willing to part with x1.6 more units of purchasing power.
I suppose in a world where people have limited time and other priorities, we often glaze over the numbers and don't think about what they really mean in terms of magnitude. I also think desensitization due to the mass media has something to do with it -- we are shown statistics and huge numbers all the time for scenarios much worse (war deaths, crimes, disease deaths), so a number as large as 200,000 birds saved wouldn't make anyone bat an eye -- it just becomes another number in the book.
My suggestion for alternative explanation is that people somehow assume that for saving more birds, more people will be asked to donate, so after dividing, the amounts per person will be very similar.
When donating, people think of their capacity. A person's capacity is obviously limited. There is only a finite amount of money a person can have.
When people answer that they would pay $78, they only expect to save something like 10 birds, not all of them. That is already the limit of their capacity for those birds. However many birds are in danger, they can only expect to save 10, and the rest, if they are not personally witnessing it, they can only leave to die.
Now, say the organisation saving these birds possessed a time machine and could fly back to the time of the disaster to save the birds: you could then ask people for how long they could contribute $78 each month. Perhaps the answers and the total amount would be different.
I call BS. There is an opportunity cost to passing up a chance to help some seabirds. Most people don't go through a given day being presented with lots of opportunities to save different numbers of seabirds. If they were, they'd do the math. Most people wouldn't assume that there are a dozen different seabird charities all pledging to help different numbers of seabirds, because that's not the world we live in. If it were, people would process this differently. When people are presented with options, such as in stores where different brands compete for shelf space, they do tend to do the math.
There may be a much simpler explanation for seeming scope "insensitivity": saturation. The differences in the lead example with the birds seem like random variation to me. Probably most participants had some maximum amount they would ever consider parting with for scenarios that don't threaten their own lives, and it doesn't take very many birds to reach those maximums. Modeling artificially contrived scenarios like this risks overthinking the issue without giving adequate thought to more likely alternative explanations.
Might this imply fault insensitivity too? For any given behavior B, with repeats producing impacts i-0 through i-x, are humans only willing to curtail that behavior up to a point, despite the increasing impact of future repetitions?
Does scope neglect apply only to altruism? What about applications to project scope and budgeting?
I do not know about scientific studies (which does not mean much), but at least anecdotally I think the answer is yes, at least for people who are not trained or experienced in making exactly these kinds of decisions.
One thing I have heard anecdotally is that people often significantly increase the price when deciding to build or buy a house/car/vacation because they "are already spending lots of money, so who cares about adding 1% to the price here and there to get neat extras," and thus spend years/months/days of income on things they would not have bought if they had treated each as a separate decision.
This is a bit different from the bird-charity example, but it seems very related to me in that our intuitions have trouble with keeping track of absolute size.
This very much reminds me of the quote attributed to Stalin:
"One death is a tragedy; one million is a statistic"
I take solace in the fact that, while many fellow citizens do not invest much time in educating themselves, a few insights have gained recognition through often famous (or infamous) quotes and little nuggets of advice.
Prototypes possess inherent limitations in terms of their physical attributes. Nevertheless, one can attain a sense of moral fulfillment by conceiving an improved version of oneself, which represents a non-physical characteristic of a superior self-prototype. It remains uncertain whether non-physical attributes, such as an enhanced version of oneself, share the same upper boundaries as physical qualities, like a specific number of birds. Consequently, does scope neglect genuinely occur if the scope in question influences a non-physical aspect of a prototype? Is the scalability of a prototype restricted when the scope impacts a non-physical characteristic?
The usual finding is that exponential increases in scope create linear increases in willingness-to-pay
interesting
I wonder whether a cost judgement plays a part in these examples. Saving 2000 birds has a low effort cost, and the risk of failure is less significant - trending towards 'there's nothing to lose'. Meanwhile, an attempt to save 200,000 birds (on its face) appears to entail higher effort costs, and the repercussions of failure would be more severe. When faced with a situation where the default state is a negative outcome, people are often reluctant to invest their resources.
I understand that, action-wise, it might be good collectively; but I also understand that for victims of certain crimes, for example, it is very hard to tell them that what they feel about the crime is not rational and that they should please donate to something else.
Once upon a time, three groups of subjects were asked how much they would pay to save 2000 / 20000 / 200000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88 [1]. This is scope insensitivity or scope neglect: the number of birds saved - the scope of the altruistic action - had little effect on willingness to pay.
Similar experiments showed that Toronto residents would pay little more to clean up all polluted lakes in Ontario than polluted lakes in a particular region of Ontario [2], or that residents of four western US states would pay only 28% more to protect all 57 wilderness areas in those states than to protect a single area [3].
People visualize "a single exhausted bird, its feathers soaked in black oil, unable to escape" [4]. This image, or prototype, calls forth some level of emotional arousal that is primarily responsible for willingness-to-pay - and the image is the same in all cases. As for scope, it gets tossed out the window - no human can visualize 2000 birds at once, let alone 200000. The usual finding is that exponential increases in scope create linear increases in willingness-to-pay - perhaps corresponding to the linear time for our eyes to glaze over the zeroes; this small amount of affect is added, not multiplied, with the prototype affect. This hypothesis is known as "valuation by prototype".
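The "linear in the logarithm" pattern can be checked directly against the bird figures quoted above. A minimal least-squares sketch in pure Python - my own fit against the numbers from [1], not an analysis from the cited studies:

```python
import math

# Fit willingness-to-pay (WTP) against log10(scope) for the bird figures.
birds = [2_000, 20_000, 200_000]
wtp = [80.0, 78.0, 88.0]  # dollars, from Desvousges et al. [1]

xs = [math.log10(n) for n in birds]
x_mean = sum(xs) / len(xs)
y_mean = sum(wtp) / len(wtp)

# Least-squares slope: dollars of WTP per tenfold increase in birds saved.
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, wtp)) \
      / sum((x - x_mean) ** 2 for x in xs)

print(f"~${slope:.2f} more WTP per tenfold increase in scope")  # ~$4.00
```

The fit puts the slope at about four dollars per order of magnitude: a hundredfold increase in birds buys roughly eight dollars of extra willingness-to-pay, which is the scope insensitivity in one number.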
An alternative hypothesis is "purchase of moral satisfaction". People spend enough money to create a warm glow in themselves, a sense of having done their duty. The level of spending needed to purchase a warm glow depends on personality and financial situation, but it certainly has nothing to do with the number of birds.
We are insensitive to scope even when human lives are at stake: Increasing the alleged risk of chlorinated drinking water from 0.004 to 2.43 annual deaths per 1000 - a factor of 600 - increased willingness-to-pay from $3.78 to $15.23 [5]. Baron and Greene found no effect from varying lives saved by a factor of 10 [6].
A paper entitled Insensitivity to the value of human life: A study of psychophysical numbing collected evidence that our perception of human deaths follows Weber's Law - obeys a logarithmic scale where the "just noticeable difference" is a constant fraction of the whole. A proposed health program to save the lives of Rwandan refugees garnered far higher support when it promised to save 4,500 lives in a camp of 11,000 refugees, rather than 4,500 in a camp of 250,000. A potential disease cure had to promise to save far more lives in order to be judged worthy of funding, if the disease was originally stated to have killed 290,000 rather than 160,000 or 15,000 people per year. [7]
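Under the Weber's-Law reading, perceived benefit tracks the fraction of the reference group saved rather than the absolute count. A toy calculation with the refugee-camp figures above - the proportional valuation function is my own framing of the idea, not the paper's model:

```python
# If perceived benefit is proportional to the fraction of the group saved,
# the same 4,500 lives "feel" very different in the two camps.
def perceived_benefit(saved, group_size):
    return saved / group_size

small_camp = perceived_benefit(4_500, 11_000)   # ~0.41 of the camp
large_camp = perceived_benefit(4_500, 250_000)  # ~0.018 of the camp
ratio = small_camp / large_camp

print(f"identical lives saved feel ~{ratio:.0f}x larger in the small camp")
```

The absolute benefit is identical in both cases, yet the proportional framing inflates the small camp by a factor of about 23 - consistent with the far higher support the small-camp program received.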
The moral: If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets real worked up about that poor struggling oil-soaked bird.
[1] Desvousges, W., Johnson, R., Dunford, R., Boyle, K. J., Hudson, S. and Wilson, K. N. 1992. Measuring non-use damages using contingent valuation: an experimental evaluation of accuracy. Research Triangle Institute Monograph 92-1.
[2] Kahneman, D. 1986. Comments on the contingent valuation method. Pp. 185-194 in Valuing environmental goods: a state of the arts assessment of the contingent valuation method, eds. R. G. Cummings, D. S. Brookshire and W. D. Schulze. Totowa, NJ: Rowman and Allanheld.
[3] McFadden, D. and Leonard, G. 1995. Issues in the contingent valuation of environmental goods: methodologies for data collection and analysis. In Contingent valuation: a critical assessment, ed. J. A. Hausman. Amsterdam: North Holland.
[4] Kahneman, D., Ritov, I. and Schkade, D. A. 1999. Economic Preferences or Attitude Expressions?: An Analysis of Dollar Responses to Public Issues, Journal of Risk and Uncertainty, 19: 203-235.
[5] Carson, R. T. and Mitchell, R. C. 1995. Sequencing and Nesting in Contingent Valuation Surveys. Journal of Environmental Economics and Management, 28(2): 155-73.
[6] Baron, J. and Greene, J. 1996. Determinants of insensitivity to quantity in valuation of public goods: contribution, warm glow, budget constraints, availability, and prominence. Journal of Experimental Psychology: Applied, 2: 107-125.
[7] Fetherstonhaugh, D., Slovic, P., Johnson, S. and Friedrich, J. 1997. Insensitivity to the value of human life: A study of psychophysical numbing. Journal of Risk and Uncertainty, 14: 283-300.