This post has clarified something really important for me: why I've had a lot of trouble being motivated to expand my business.
When I work with individual people, I'm motivated to help them. But when I think about the broader concept of "helping people", it feels like something I should care about, but don't. So, this article made me realize that this isn't something that's wrong with me, it's just normal. (And presumably, it means that when other people talk about how they care about people and their mission, they're probably thinking of some specific people somewhere in there!)
When I think back to when I've been more motivated by my work, it's been when I've had specific exemplars that I've thought about. Like, when I was a programmer, I always knew at least some of my software's users, at the very least as people on the other end of a phone call or email conversation, on up to seeing some of them on a regular basis.
I don't have the same frequency of contact any more in my business, and in recent years I've had the challenge that the people I mainly interact with are people who've already been working with me for some time -- which means they no longer have the same...
Why Don't People Help Others More?
Why on earth would they? Evolving this degree of altruism was impressive enough!
I am confused by the concept here of "Opt-Out Philanthropy" having the example of Bear Stearns. What Bear Stearns did was to require a 4% donation and proof of same. Where is the opt-out option, other than quitting?
Alternate frame: Bear Stearns contributed 5% of profits to charity, and directed it proportional to people's salaries, then paid costs to document this in a strange way in order to signal.
How statistics can move us: not very many people have good "translators" to transform left-brain information like words, statistics, and factual data into the right-brain format of imagery, which triggers emotions. We question our ethics when we realize that "one death is a tragedy; one million deaths is a statistic." But to me this is a design challenge: we simply have not communicated effectively to the right half of the brain. I learned to do this on myself, and it allows me to do really neat things like change my emotional reactions so that I respond differently to deliberate improvements I make to my ideas. You cannot simply make a person emotional and then present them with a statistic -- that's a non sequitur to the right brain. You need to communicate the statistic using specific visualizations.
This is the type of imagery I'm thinking might work: Tell the story of Rokia while showing her face, and then zoom out. Imagine one of those Final Fantasy cut scenes that's totally seamless, to keep them connected to their feelings about Rokia while the scope increases. You zoom out to show her village. A village is a lot to visually process, so the des...
With only one student taking the survey, 70% of participants stopped what they were doing and offered assistance. However, when there were two students taking the survey, this number dropped dramatically. Most strikingly, when the group was two students -- but one of them was a stooge who was in on the experiment and never responded -- only 7% of the real participants helped.
Somebody probably broke their leg next door behind just a curtain, and only 70% of the study subjects would go help? And only 7% would help if another person is in the room and the other person doesn't go? Is anyone else very surprised by how low these numbers are? I would have expected something like 95% and 50%.
I am surprised, but unsurprised by my surprise.
That is, I've come to expect that the results of these sorts of studies will consistently find that the "do the right thing" rates in the population are lower than I would naively expect them to be. That hasn't really altered my intuitive sense that of course most people will "do the right thing," although it has severely lowered my confidence in that intuition.
A psych teacher I had in college told the story of introducing a class decades earlier to the Milgram experiment, and (as he always did) asking for a show of hands of how many people in the room thought they would go all the way into the danger zone. Usually he got no hands; this year he got one hand. So after class he asked the guy (an older student) why he'd raised his hand, and the student explained that he was a war veteran, and knew perfectly well how easy it was to get him to do things that "nobody would ever do!"
The funny thing about Milgram is that the people who trusted the scientist and obeyed were factually right even if it didn't feel that way to them at the time. No experimental subject was being hurt.
Of course, by the same token they were factually wrong: no experimental subject was learning to memorize a list of words.
It's good to have reminders of these effects, because they are large and important for anyone hoping to impact behavior, altruistically or otherwise.
Rather than see it as a mystery of "why don't we help each other more?" I think it's better to think of this as "how do we get people to help each other more, and more usefully?" or in two parts: "How do we get people to do things?" and "What things should we get people to do?" None of this seems intrinsically linked to altruism as such.
The starting point of your subject is the question: "Why should people help others?"
Once you have answered this, we can move on with the discussion.
From what I've seen, the charities that exploit a personal connection to needy individuals (usually "starving" children) to collect donations are routinely among the least efficient, and apparently many are essentially conduits for overvalued tax write-offs. The fraudsters have clearly identified what kind of advertising works best to solicit donations, so why don't good charities exploit the same advertising? For example, the Red Cross website should be plastered with personal stories of children who are currently living in Red Cross shelters or otherwise benefiting from donations.
The fraudsters have clearly identified what kind of advertising works best to solicit donations, so why don't good charities exploit the same advertising?
Cynical explanation: because advertising that works is low-prestige. (i.e., it does nothing to build the image of the organization, because it's not a costly signal.)
Note that this applies to (theoretically) profit-oriented businesses as well: it is a tired trope among direct marketing professionals that nearly every businessperson cares more about how the ad reflects on him/her self than how it affects his/her profits.
Actually, this effect can also be seen in the average small business website design: the uneducated businessperson chooses a site design that is personally pleasing and which he/she perceives to signal status, over a site design favoring customer preferences or ease of use.
I'm not saying anybody consciously does this; it's just that most people lack sufficient reflectiveness to even consider anything other than their gut reactions to these things, and their gut reactions respond positively to personal prestige and signalling. To even try to take somebody else's point of view is an immense leap of cognitive effort that our brains usually try to avoid as much as humanly possible. ;-)
To this end, they required their 1,000 highest-paid employees to donate 4% of their salary and bonuses to non-profits, and to prove it with their tax returns.
How is that a donation? Why do they get a tax write off for that?
A donation is when you freely give money. If you're sending money as a requirement to work, it's just an odd pay structure.
It's clear that people value fairness, even to their own detriment. In a game called "the Ultimatum Game", one participant is given a sum of money by the researcher, say $10, and told they can split this money with an anonymous second player in any proportion they choose
The catch here is that it is free money. If the participants had to work for it, even a little bit, I bet that the sharing level would drop significantly, to the lowest level they think they can get away with, whatever it might be.
The affective/deliberative dichotomy could be the same as near/far from construal level theory. Hypotheses:
Giving someone a number of people that's been affected, like 3 million, triggers abstract processing and causes those people to be processed as socially distant. (Especially a large number--a small number, like 1, could cause you to start wondering about that particular person, causing them to be processed as socially close.)
...Indeed, the person doesn't even need to be particularly identified, though it does help. In another experiment, people asked
I think there should be some use of the "moral sphere" model in understanding the dilemma presented. The moral sphere is conceptually easy to understand -- each person extends moral consideration outward from the center, oneself, into society (or the world as a whole) until a boundary of moral exclusion is reached, and beyond this boundary exist "them". The model would thus treat the Buddha as an idealized moral example, having no boundary of exclusion and no decrease in moral consideration from self to the rest of the world.
The next considerati...
As Peter Singer writes in his book The Life You Can Save: "[t]he world would be a much simpler place if one could bring about social change merely by making a logically consistent moral argument". Many people one encounters might agree that a social change movement is noble yet not want to do anything to promote it, or want to give more money to a charity yet refrain from doing so. Additional moralizing doesn't seem to do the trick. ...So what does?
Motivating people to altruism is relevant for the optimal philanthropy movement. For a start on the answer, like many things, I turn to psychology. Specifically, the psychology Peter Singer catalogues in his book.
A Single, Identifiable Victim
One of the most well-known motivations for helping others is a personal connection, which triggers empathy. When psychologists researching generosity paid participants to take part in a psychological experiment and then gave them the opportunity to donate to Save the Children, a global poverty-fighting organization, different groups were given different kinds of information.
One random group of participants was told that "Food shortages in Malawi are affecting more than three million children", along with additional information about how the need for donations was very strong and how these donations could help stop the food shortages.
Another random group of participants were instead shown the photo of Rokia, a seven-year-old Malawian girl who is desperately poor. The participants were told that "her life will be changed for the better by your gift".
A third random group of participants was shown the photo of Rokia and told who she is and that "her life will be changed for the better", but was ALSO given the general famine information that "food shortages [...] are affecting more than three million" -- a combination of the two previous groups.
Lastly, a fourth random group was shown the photo of Rokia, given the same information about her as the other groups, and then told about another child, identified by name, whose life their donation would also change for the better.
It's All About the Person
Interestingly, the group that was told ONLY about Rokia gave the most money. The group that was told about both children reported feeling less overall emotion than those who saw only Rokia, and gave less money. The group that was told about both Rokia and the general famine information gave even less than that, followed by the group that got only the general famine information.1,2 It turns out that information about a single person was the most salient for creating an empathetic response and triggering a willingness to donate.1,2
This continues through additional studies. In another generosity experiment, one group of people was told that a single child needed a lifesaving medical treatment that costs $300K, and was given the opportunity to contribute towards this fund. A second random group of people was told that eight children needed a lifesaving treatment, and all of them would die unless $300K could be provided, and was given an opportunity to contribute. More people opted to donate toward the single child.3,4
This is the basis for why we're so willing to chase after lost miners or Baby Jessica no matter the monetary cost, but turn a blind eye to the masses of unknown people starving in the developing world. Indeed, the person doesn't even need to be particularly identified, though it does help. In another experiment, people asked by researchers to make a donation to Habitat for Humanity were more likely to do so if they were told that the family "has been selected" rather than that they "will be selected" -- even though all other parts of the pitch were the same, and the participants got no information about who the families actually were.5
The Deliberative and The Affective
Why is this the case? Researcher Paul Slovic thinks that humans have two different processes for deciding what to do. The first is an affective system that responds to emotion, rapidly processing images and stories and generating an intuitive feeling that leads to immediate action. The second is a deliberative system that draws on reasoning, and operates on words, numbers, and abstractions, which is much slower to generate action.6
To follow up, the Rokia experiment was run again with yet another twist. There were two groups: one told only about Rokia, exactly as before, and one told only the generic famine information, exactly as before. Within each group, half took a survey designed to arouse their emotions by asking things like "When you hear the word 'baby', how do you feel?" The other half of both groups was given emotionally neutral questions, like math puzzles.
This time the Rokia group again gave far more, and within that group, those who had their emotions aroused gave even more than those who heard about Rokia after the math puzzles. On the other hand, those who heard the generic famine information showed no increase in donation regardless of how heightened their emotions were.1
Futility and Making a Difference
Imagine you're told that there are 3000 refugees at risk in a camp in Rwanda, and you could donate towards aid that would save 1500 of them. Would you do it? And how much would you donate?
Now this time imagine that you can still save 1500 refugees with the same amount of money, but the camp has 10000 refugees. In an experiment where these two scenarios were presented not as a thought experiment but as realities to two separate random groups, the group that heard of only 3000 refugees was more likely to donate, and donated larger amounts.7,8
Enter another quirk of our giving psychology, right or wrong: futility thinking. We think that if we're not making a sizable difference, it's not worth making any difference at all -- it will only be a drop in the ocean, and the problem will keep raging on.
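One way to see why the two scenarios feel so different, even though the benefit is identical, is to compare the proportion of the problem each donation addresses. Here's a back-of-the-envelope sketch of just the arithmetic implied by the scenarios above (not part of the cited study):

```python
# The absolute benefit is identical in both scenarios; only the share of
# the problem being solved changes, which is what futility thinking
# appears to respond to.
saved = 1500
for camp_size in (3000, 10000):
    share = saved / camp_size
    print(f"camp of {camp_size}: save {saved} refugees ({share:.0%} of those at risk)")

# camp of 3000: save 1500 refugees (50% of those at risk)
# camp of 10000: save 1500 refugees (15% of those at risk)
```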
Am I Responsible?
People are also far less likely to help if they're with other people. In this experiment, students were invited to participate in a market research survey. When the researcher gave the students their questionnaire to fill out, she went into a back room separated from the office only by a curtain. A few minutes later, noises strongly suggested that she had climbed on a chair to get something from a high shelf and then fallen off it, and she loudly complained that she couldn't feel or move her foot.
With only one student taking the survey, 70% of participants stopped what they were doing and offered assistance. However, when there were two students taking the survey, this number dropped dramatically. Most strikingly, when the group was two students -- but one of them was a stooge who was in on the experiment and never responded -- only 7% of the real participants helped.9
This one is known as diffusion of responsibility, better known as the bystander effect -- we help more often when we think it is our responsibility to do so, and -- again for right or for wrong -- we naturally look to others to see if they're helping before doing so ourselves.
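To see how large the effect is, it helps to compare the observed rates against what we'd expect if each person simply behaved the way lone bystanders do. This is only a rough sketch using the figures quoted above, under the (counterfactual) assumption that people in pairs act independently:

```python
# Observed rates from the study described above.
p_alone = 0.70                  # a lone participant helps
p_beside_passive_stooge = 0.07  # a participant helps next to an unresponsive confederate

# If two people each behaved like lone bystanders, independently of each
# other, the chance that *someone* helps would go up, not down:
p_someone_helps_if_independent = 1 - (1 - p_alone) ** 2   # about 0.91

print(f"victim helped, if pairs acted independently: {p_someone_helps_if_independent:.0%}")
print(f"observed rate beside a passive confederate:  {p_beside_passive_stooge:.0%}")
```

The gap between roughly 91% and 7% is diffusion of responsibility at work: an unresponsive other doesn't just fail to add help, it suppresses it.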
What's Fair In Help?
It's clear that people value fairness, even to their own detriment. In a game called "the Ultimatum Game", one participant is given a sum of money by the researcher, say $10, and told they can split this money with an anonymous second player in any proportion they choose -- give them $10, give them $7, give them $5, give them nothing; everything is fair game. The catch, however, is that the second player, after hearing of the split anonymously, gets to accept it or reject it. Should the split be accepted, both players walk away with the agreed amounts. But should the split be rejected, both players walk away with nothing.
A Fair Split
The economist, expecting ideally rational and perfectly self-interested players, predicts that the second player will accept any split that gets them money, since anything is better than nothing. And the first player, understanding this, will naturally offer $1 and keep $9 for himself. At no point are identities revealed, so reputation and retribution are not an issue.
But the results turn out to be quite different -- the vast majority offer an equal split. Yet when an offer of $2 or less comes around, it is almost always rejected, even though $2 is better than nothing.10 This effect persists even when the game is played for thousands of dollars, and across nearly all cultures.
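For concreteness, here is a minimal sketch of the game as described above, contrasting the economist's idealized responder with one who turns down "unfair" offers. The 20%-of-the-pot rejection threshold is an illustrative assumption, not a parameter taken from the cited studies:

```python
# A minimal sketch of the Ultimatum Game described above.

def rational_responder(offer, total):
    """The economist's idealized responder: accepts anything more than nothing."""
    return offer > 0

def fairness_responder(offer, total, threshold=0.2):
    """A responder who rejects offers of ~20% of the pot or less, as real players tend to."""
    return offer > threshold * total

def play(total, offer, responder):
    """Returns (proposer payoff, responder payoff); both get nothing on rejection."""
    if responder(offer, total):
        return total - offer, offer
    return 0.0, 0.0

pot = 10.0
for offer in (1.0, 2.0, 5.0):
    print(f"offer ${offer:.0f}: "
          f"idealized -> {play(pot, offer, rational_responder)}, "
          f"fairness -> {play(pot, offer, fairness_responder)}")
```

With a $10 pot, both responders accept the even split, but the $1 and $2 offers the idealized model happily accepts come back as zero for everyone once fairness enters the picture -- which is why the predicted $9/$1 split rarely survives contact with real players.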
Splitting and Anchoring in Charity
This sense of fairness persists into helping as well -- people generally have a strong tendency not to want to help more than the other people around them, and if they find themselves the only ones helping on a frequent basis, they start to feel like a "sucker". On the flip side, if others are doing more, they will follow suit.11,12,13
Those told the average donation to a charity nearly always give close to that amount, even if the stated average is a lie that has secretly been increased or decreased. The effect can also be produced without lying -- those told about an above-average gift were far more likely to donate more, even attempting to match that gift.14,15 Overall, we tend to match the behavior of our reference class -- the people we identify with -- and this includes how much we help. We donate more when we believe others are donating more, and less when we believe others are donating less.
Challenging the Self-Interest Norm
But there's a way to break this cycle of futility, responsibility, and fairness -- challenge the norm by openly communicating about helping others. While many religious and secular traditions insist that the best giving is anonymous giving, this turns out not always to be the case. There may be good reasons to give anonymously, but don't forget the benefits of giving openly -- being open about helping inspires others to help, and can challenge the norms of the culture.
Indeed, many organizations now exist to help challenge the norms around donation and to create a culture of giving more. GivingWhatWeCan is a community of 230 people (including me!) who have all pledged to donate at least 10% of their income to organizations working on ending extreme poverty, and to submit statements proving it. BolderGiving has a bunch of inspiring stories of over 100 people who all give at least 20% of their income, with a dozen giving over 90%! And these aren't all rich people; some of them are even ordinary students.
Who's Willing to Be Altruistic?
While people are not saints, experiments have shown that people grossly overestimate how self-interested others are -- in one example, male respondents estimated that men would overwhelmingly favor a piece of legislation to "slash research funding to a disease that affects only women", even though they themselves did not support such legislation.16
This also manifests itself as an expectation that people be "self-interested" in their philanthropic causes -- respondents expressed much stronger support for volunteers in Students Against Drunk Driving who themselves knew people killed in drunk-driving accidents than for volunteers with no such personal experience who simply thought it "a very important cause".17
Alexis de Tocqueville, echoing the early economists who expected $9/$1 splits in the Ultimatum Game, wrote in 1835 that "Americans enjoy explaining almost every act of their lives on the principle of self-interest".18 But this isn't always how people actually behave, and by challenging the norm we make it more acceptable to be altruistic -- being "too charitable" isn't just for "goody two-shoes"; it's praiseworthy.
A Bit of a Nudge
A somewhat pressing problem in getting people to help has been organ donation -- surely no one is inconvenienced by having their organs donated after they have died. So why would people not sign up? And how could we get more people to sign up?
In Germany, only 12% of the population are registered organ donors. In nearby Austria, that number is 99.98%. Are people in Austria just less worried about what will happen to them after they die, or just more altruistic? It turns out the answer is far simpler -- in Germany you must put yourself on the register to become a potential donor (opt-in), whereas in Austria you are a potential donor unless you object (opt-out). While people may be, for right or for wrong, worried about the fate of their body after it is dead, they appear less likely to express these reservations in opt-out systems.19
While Richard Thaler and Cass Sunstein argue in their book Nudge: Improving Decisions About Health, Wealth, and Happiness that we sometimes suck at making decisions in our own interest and could all do better with more favorable "defaults", such defaults matter just as much when it comes to helping people.
While opt-out organ donation is a huge deal, there's another similar idea -- opt-out philanthropy. Back before 2008, when the investment bank Bear Stearns still existed, Bear Stearns listed philanthropy as one of its guiding principles, describing it as fostering good citizenship and well-rounded individuals. To this end, they required their 1,000 highest-paid employees to donate 4% of their salary and bonuses to non-profits, and to prove it with their tax returns. This resulted in more than $45 million in donations during 2006. Many employees described the requirement as "getting themselves to do what they wanted to do anyway".
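As a rough sanity check on those figures, here's a back-of-the-envelope sketch; it assumes, purely for illustration, that every one of the 1,000 employees gave exactly the 4% minimum, which the source doesn't say:

```python
# Back-of-the-envelope implication of the numbers above.
total_donated = 45_000_000   # "more than $45 million" in 2006
employees = 1_000
required_rate = 0.04         # required 4% of salary and bonuses

implied_avg_compensation = total_donated / (employees * required_rate)
print(f"implied average salary + bonus: ${implied_avg_compensation:,.0f}")
# implied average salary + bonus: $1,125,000
```

In other words, the requirement applied to a group whose average pay was on the order of a million dollars a year, which puts the 4% in context.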
Conclusions
So, according to this bit of psychology, what could we do to get other people to help more, besides moralize? Well, we have five key take-aways:
(1) present these people with a single and highly identifiable victim that they can help
(2) nudge them with a default of opt-out philanthropy
(3) be more open about our willingness to be altruistic and encourage other people to help
(4) make sure people understand the average level of helping around them, and
(5) instill a responsibility to help and an understanding that doing so is not futile.
Hopefully, with these tips and more, helping people more can become just one of those things we do.
References
(Note: Links are to PDF files.)
1: D. A. Small, G. Loewenstein, and P. Slovic. 2007. "Sympathy and Callousness: The Impact of Deliberative Thought on Donations to Identifiable and Statistical Victims". Organizational Behavior and Human Decision Processes 102: p143-53
2: Paul Slovic. 2007. "If I Look at the Mass I Will Never Act: Psychic Numbing and Genocide". Judgment and Decision Making 2(2): p79-95.
3: T. Kogut and I. Ritov. 2005. "The 'Identified Victim' Effect: An Identified Group, or Just a Single Individual?". Journal of Behavioral Decision Making 18: p157-67.
4: T. Kogut and I. Ritov. 2005. "The Singularity of Identified Victims in Separate and Joint Evaluations". Organizational Behavior and Human Decision Processes 97: p106-116.
5: D. A. Small and G. Loewenstein. 2003. "Helping the Victim or Helping a Victim: Altruism and Identifiability". Journal of Risk and Uncertainty 26(1): p5-16.
6: Singer cites this from Paul Slovic, who in turn cites it from: Seymour Epstein. 1994. "Integration of the Cognitive and the Psychodynamic Unconscious". American Psychologist 49: p709-24. Slovic refers to the affective system as "experiential" and the deliberative system as "analytic". This is also related to Daniel Kahneman's popular book Thinking, Fast and Slow.
7: D. Fetherstonhaugh, P. Slovic, S. M. Johnson, and J. Friedrich. 1997. "Insensitivity to the Value of Human Life: A Study of Psychophysical Numbing". Journal of Risk and Uncertainty 14: p283-300.
8: Daniel Kahneman and Amos Tversky. 1979. "Prospect Theory: An Analysis of Decision Under Risk." Econometrica 47: p263-91.
9: Bibb Latané and John Darley. 1970. The Unresponsive Bystander: Why Doesn't He Help?. New York: Appleton-Century-Crofts, p58.
10: Martin Nowak, Karen Page, and Karl Sigmund. 2000. "Fairness Versus Reason in the Ultimatum Game". Science 289: p1773-75.
11: Lee Ross and Richard E. Nisbett. 1991. The Person and the Situation: Perspectives of Social Psychology. Philadelphia: Temple University Press, p27-46.
12: Robert Cialdini. 2001. Influence: Science and Practice, 4th Edition. Boston: Allyn and Bacon.
13: Judith Lichtenberg. 2004. "Absence and the Unfond Heart: Why People Are Less Giving Than They Might Be". in Deen Chatterjee, ed. The Ethics of Assistance: Morality and the Distant Needy. Cambridge, UK: Cambridge University Press.
14: Jen Shang and Rachel Croson. Forthcoming. "Field Experiments in Charitable Contribution: The Impact of Social Influence on the Voluntary Provision of Public Goods". The Economic Journal.
15: Rachel Croson and Jen Shang. 2008. "The Impact of Downward Social Information on Contribution Decision". Experimental Economics 11: p221-33.
16: Dale Miller. 1999. "The Norm of Self-Interest". American Psychologist 54: p1053-60.
17: Rebecca Ratner and Jennifer Clarke. Unpublished. "Negativity Conveyed to Social Actors Who Lack a Personal Connection to the Cause".
18: Alexis de Tocqueville in J.P. Mayer ed., G. Lawrence, trans. 1969. Democracy in America. Garden City, N.Y.: Anchor, p546.
19: Eric Johnson and Daniel Goldstein. 2003. "Do Defaults Save Lives?". Science 302: p1338-39.
(This is an updated version of an earlier draft from my blog.)