As Peter Singer writes in his book The Life You Can Save: "[t]he world would be a much simpler place if one could bring about social change merely by making a logically consistent moral argument". Many people one encounters might agree that a social change movement is noble yet not want to do anything to promote it, or want to give more money to a charity yet refrain from doing so. Additional moralizing doesn't seem to do the trick. ...So what does?
Motivating people to altruism is relevant for the optimal philanthropy movement. For a start on the answer, like many things, I turn to psychology. Specifically, the psychology Peter Singer catalogues in his book.
A Single, Identifiable Victim
One of the best-known motivators for helping others is a personal connection, which triggers empathy. Psychologists researching generosity paid participants to join an experiment and later gave them the opportunity to donate to the global poverty-fighting organization Save the Children; different groups were given different kinds of information.
One random group of participants was told "Food shortages in Malawi are affecting more than three million children", along with additional information about how the need for donations was very strong and how these donations could help stop the food shortages.
Another random group of participants were instead shown the photo of Rokia, a seven-year-old Malawian girl who is desperately poor. The participants were told that "her life will be changed for the better by your gift".
A third random group of participants was shown the photo of Rokia, told who she is and that "her life will be changed for the better", but ALSO given the general famine information -- that "food shortages [...] are affecting more than three million" -- a combination of the two previous groups.
Lastly, a fourth random group was shown the photo of Rokia, given the same information about her as the other groups, and then told about another child, identified by name, whose life their donation would also change for the better.
It's All About the Person
Interestingly, the group that was told ONLY about Rokia gave the most money. The group told about both children reported feeling less overall emotion than those who only saw Rokia, and gave less money. The group told about both Rokia and the general famine information gave even less than that, followed by the group that got only the general famine information. It turns out that information about a single person was the most salient for creating an empathetic response and triggering a willingness to donate.1,2
This continues through additional studies. In another generosity experiment, one group of people was told that a single child needed a lifesaving medical treatment that costs $300K, and was given the opportunity to contribute towards this fund. A second random group of people was told that eight children needed a lifesaving treatment, and all of them would die unless $300K could be provided, and was given an opportunity to contribute. More people opted to donate toward the single child.3,4
This is why we're so willing to chase after lost miners or Baby Jessica no matter the monetary cost, yet turn a blind eye to the unknown masses starving in the developing world. Indeed, the person doesn't even need to be identified in detail, though it helps. In another experiment, people asked by researchers to make a donation to Habitat for Humanity were more likely to do so if they were told that the recipient family "has been selected" rather than that it "will be selected" -- even though every other part of the pitch was the same, and the participants got no information about who the families actually were.5
The Deliberative and The Affective
Why is this the case? Researcher Paul Slovic thinks that humans have two different processes for deciding what to do. The first is an affective system that responds to emotion, rapidly processing images and stories and generating an intuitive feeling that leads to immediate action. The second is a deliberative system that draws on reasoning, and operates on words, numbers, and abstractions, which is much slower to generate action.6
To follow up, the Rokia experiment was run again with another twist: there were two groups, one told only about Rokia exactly as before, and one told only the generic famine information exactly as before. Within each group, half took a survey designed to arouse their emotions, asking things like "When you hear the word 'baby', how do you feel?" The other half of each group was given emotionally neutral questions, like math puzzles.
This time, the Rokia group again gave far more, and those within it whose emotions had been aroused gave even more than those who had heard about Rokia after doing math problems. On the other side, those who heard the generic famine information showed no increase in donation regardless of how heightened their emotions were.1
Futility and Making a Difference
Imagine you're told that there are 3000 refugees at risk in a camp in Rwanda, and you could donate towards aid that would save 1500 of them. Would you do it? And how much would you donate?
Now imagine that you can still save 1500 refugees with the same amount of money, but the camp holds 10000 refugees. In an experiment where these two scenarios were presented not as a thought experiment but as realities to two separate random groups, the group that heard of only 3000 refugees was more likely to donate, and donated larger amounts.7,8
Enter another quirk of our giving psychology, right or wrong: futility thinking. We feel that if we're not making a sizable difference, it's not worth making any difference at all -- our contribution will only be a drop in the ocean while the problem rages on.
Am I Responsible?
People are also far less likely to help if they're with other people. In one experiment, students were invited to participate in a market research survey. When the researcher gave the students their questionnaires to fill out, she went into a back room separated from the office only by a curtain. A few minutes later, noises strongly suggested that she had climbed on a chair to get something from a high shelf and then fallen off it; she loudly complained that she couldn't feel or move her foot.
With only one student taking the survey, 70% stopped what they were doing and offered assistance. With two students taking the survey, this number dropped dramatically. Most strikingly, when the group was two students but one of them was a stooge who was in on the experiment and would never respond, the response rate of the genuine participant was only 7%.9
This is diffusion of responsibility, better known as the bystander effect: we help more often when we think it is our responsibility to do so, and -- again, for right or for wrong -- we naturally look to others to see whether they're helping before doing so ourselves.
What's Fair In Help?
It's clear that people value fairness, even to their own detriment. In a game called "the Ultimatum Game", one participant is given a sum of money by the researcher, say $10, and told they can split it with an anonymous second player in any proportion they choose -- give them $10, give them $7, give them $5, give them nothing; everything is fair game. The catch is that the second player, after anonymously hearing the proposed split, gets to accept or reject it. If the split is accepted, both players walk away with the agreed amounts. If it is rejected, both players walk away with nothing.
A Fair Split
The economist, expecting ideally rational and perfectly self-interested players, predicts that the second player will accept any split that gets them money, since anything is better than nothing -- and that the first player, understanding this, will naturally offer $1 and keep $9 for himself. At no point are identities revealed, so reputation and retribution are not an issue.
But the results turn out quite different: the vast majority of first players offer an equal split. And when an offer of $2 or less does come around, it is almost always rejected, even though $2 is better than nothing.10 This effect persists even when the game is played for thousands of dollars, and across nearly all cultures.
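The gap between the economist's backward-induction prediction and observed behavior can be made concrete in a tiny sketch. This is an illustration only: the $2 rejection threshold is taken from the results above, and modeling responders as a fixed cutoff is a deliberate simplification.

```python
# Ultimatum game: a proposer splits a $10 pot; the responder accepts or rejects.
# If the offer is rejected, both players get nothing.

def payoff(offer, accepts):
    """Return (proposer, responder) payoffs for a $10 pot."""
    return (10 - offer, offer) if accepts else (0, 0)

# The idealized "rational" responder accepts any positive offer...
def rational_responder(offer):
    return offer > 0

# ...but real responders typically reject offers of $2 or less (note 10).
def observed_responder(offer):
    return offer > 2

# Backward induction against the rational responder: offer $1, keep $9.
assert payoff(1, rational_responder(1)) == (9, 1)

# The same $1 offer against a typical human responder: both walk away empty.
assert payoff(1, observed_responder(1)) == (0, 0)

# The modal real-world offer, an even split, is accepted.
assert payoff(5, observed_responder(5)) == (5, 5)
```

Against real responders, the "irrational" even split dominates the "rational" lowball offer, which is exactly why the equal split is what most proposers converge on.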
Splitting and Anchoring in Charity
This sense of fairness extends to helping as well: people generally have a strong aversion to helping more than the people around them, and if they find themselves the only ones helping on a regular basis, they start to feel like a "sucker". On the flipside, if others are doing more, they will follow suit.11,12,13
Those told the average donation to a charity nearly always give about that amount, even if the "average" they are told is a lie that has secretly been increased or decreased. The effect can be replicated without lying: those told about an above-average gift were far more likely to donate more, even attempting to match that gift.14,15 Overall, we tend to match the behavior of our reference class -- the people we identify with -- and this includes how much we help. We donate more when we believe others are donating more, and less when we believe others are donating less.
Challenging the Self-Interest Norm
But there's a way to break this cycle of futility, diffused responsibility, and fairness: challenge the norm by openly communicating about helping others. While many religious and secular traditions insist that the best giving is anonymous giving, this turns out not always to be the case. There may be other reasons to give anonymously, but don't forget the benefits of giving openly: being open about helping inspires others to help, and can challenge the norms of the culture.
Indeed, many organizations now exist to challenge the norms around donation and to create a culture of giving more. GivingWhatWeCan is a community of 230 people (including me!) who have all pledged to donate at least 10% of their income to organizations working on ending extreme poverty, and who submit statements proving it. BolderGiving collects inspiring stories of over 100 people who all give at least 20% of their income, with a dozen giving over 90%! And these aren't all rich people; some are ordinary students.
Who's Willing to Be Altruistic?
While people are not saints, experiments show that people grossly overestimate how self-interested others are. In one example, male respondents estimated that men would overwhelmingly favor a piece of legislation to "slash research funding to a disease that affects only women" -- even though, being male themselves, they did not support such legislation.16
This also manifests as an expectation that people be "self-interested" in their philanthropic cause: respondents expressed much stronger support for volunteers in Students Against Drunk Driving who themselves knew people killed in drunk-driving accidents than for volunteers with no such personal experience who simply thought it "a very important cause".17
Alexis de Tocqueville, echoing the economists who expected $9/$1 splits in the Ultimatum Game, wrote in 1835 that "Americans enjoy explaining almost every act of their lives on the principle of self-interest".18 But this isn't always the case, and by challenging the norm, people make it more acceptable to be altruistic: altruism isn't just for "goody two-shoes", and being "too charitable" is praiseworthy.
A Bit of a Nudge
A long-standing problem in getting people to help has been organ donation: surely no one is inconvenienced by having their organs donated after they have died. So why do people not sign up? And how could we get more people to sign up?
In Germany, only 12% of the population are registered organ donors. In neighboring Austria, that number is 99.98%. Are people in Austria less worried about what will happen to them after they die, or just that much more altruistic? It turns out the answer is far simpler: in Germany you must put yourself on the register to become a potential donor (opt-in), whereas in Austria you are a potential donor unless you object (opt-out). While people may be, for right or for wrong, worried about the fate of their body after death, they appear far less likely to act on these reservations under an opt-out system.19
While Richard Thaler and Cass Sunstein argue in their book Nudge: Improving Decisions About Health, Wealth, and Happiness that we are sometimes bad at making decisions in our own interest and could all do better with more favorable "defaults", such defaults matter for helping others as well.
While opt-out organ donation is a huge deal, there's a similar idea: opt-out philanthropy. Before its 2008 collapse, the investment bank Bear Stearns listed philanthropy among its guiding principles, as fostering good citizenship and well-rounded individuals. To this end, it required its 1,000 highest-paid employees to donate 4% of their salary and bonuses to non-profits, and to prove it with their tax returns. This produced more than $45 million in donations during 2006 alone. Many employees described the requirement as "getting themselves to do what they wanted to do anyway".
So, according to this bit of psychology, what could we do to get other people to help more, besides moralizing? We have five key take-aways:
(1) present these people with a single and highly identifiable victim that they can help
(2) nudge them with a default of opt-out philanthropy
(3) be more open about our willingness to be altruistic and encourage other people to help
(4) make sure people understand the average level of helping around them, and
(5) instill a responsibility to help and an understanding that doing so is not futile.
Hopefully, with these tips and more, helping others can become just one of those things we do.
(Note: Links are to PDF files.)
1: D. A. Small, G. Loewenstein, and P. Slovic. 2007. "Sympathy and Callousness: The Impact of Deliberative Thought on Donations to Identifiable and Statistical Victims". Organizational Behavior and Human Decision Processes 102: p143-53
2: Paul Slovic. 2007. "If I Look at the Mass I Will Never Act: Psychic Numbing and Genocide". Judgment and Decision Making 2(2): p79-95.
3: T. Kogut and I. Ritov. 2005. "The 'Identified Victim' Effect: An Identified Group, or Just a Single Individual?". Journal of Behavioral Decision Making 18: p157-67.
4: T. Kogut and I. Ritov. 2005. "The Singularity of Identified Victims in Separate and Joint Evaluations". Organizational Behavior and Human Decision Processes 97: p106-116.
5: D. A. Small and G. Loewenstein. 2003. "Helping the Victim or Helping a Victim: Altruism and Identifiability". Journal of Risk and Uncertainty 26(1): p5-16.
6: Singer cites this from Paul Slovic, who in turn cites it from: Seymour Epstein. 1994. "Integration of the Cognitive and the Psychodynamic Unconscious". American Psychologist 49: p709-24. Slovic refers to the affective system as "experiential" and the deliberative system as "analytic". This is also related to Daniel Kahneman's popular book Thinking Fast and Slow.
7: D. Fetherstonhaugh, P. Slovic, S. M. Johnson, and J. Friedrich. 1997. "Insensitivity to the Value of Human Life: A Study of Psychophysical Numbing". Journal of Risk and Uncertainty 14: p283-300.
8: Daniel Kahneman and Amos Tversky. 1979. "Prospect Theory: An Analysis of Decision Under Risk." Econometrica 47: p263-91.
9: Bibb Latané and John Darley. 1970. The Unresponsive Bystander: Why Doesn't He Help?. New York: Appleton-Century-Crofts, p58.
10: Martin Nowak, Karen Page, and Karl Sigmund. 2000. "Fairness Versus Reason in the Ultimatum Game". Science 289: p1773-75.
11: Lee Ross and Richard E. Nisbett. 1991. The Person and the Situation: Perspectives of Social Psychology. Philadelphia: Temple University Press, p27-46.
12: Robert Cialdini. 2001. Influence: Science and Practice, 4th Edition. Boston: Allyn and Bacon.
13: Judith Lichtenberg. 2004. "Absence and the Unfond Heart: Why People Are Less Giving Than They Might Be". in Deen Chatterjee, ed. The Ethics of Assistance: Morality and the Distant Needy. Cambridge, UK: Cambridge University Press.
14: Jen Shang and Rachel Croson. Forthcoming. "Field Experiments in Charitable Contribution: The Impact of Social Influence on the Voluntary Provision of Public Goods". The Economic Journal.
15: Rachel Croson and Jen Shang. 2008. "The Impact of Downward Social Information on Contribution Decision". Experimental Economics 11: p221-33.
16: Dale Miller. 1999. "The Norm of Self-Interest". American Psychologist 54: p1053-60.
17: Rebecca Ratner and Jennifer Clarke. Unpublished. "Negativity Conveyed to Social Actors Who Lack a Personal Connection to the Cause".
18: Alexis de Tocqueville in J.P. Mayer ed., G. Lawrence, trans. 1969. Democracy in America. Garden City, N.Y.: Anchor, p546.
19: Eric Johnson and Daniel Goldstein. 2003. "Do Defaults Save Lives?". Science 302: p1338-39.
(This is an updated version of an earlier draft from my blog.)
Here on LW, we know that if you want to do the most good, you shouldn't diversify your charitable giving. If a specific charity makes the best use of your money, then you should assign your whole charitable budget to that organization. In the unlikely case that you're a millionaire and the recipient couldn't make full use of all your donations, then sure, diversify. But most people couldn't donate that much even if they wanted to. Also, if you're trying to buy yourself a warm fuzzy feeling, diversification will help. But then you're not trying to do the most good, you're trying to make yourself feel good, and you'd do well to have separate budgets for those two.
We also know about scope insensitivity - when three groups of subjects were asked how much they'd pay to save 2000 / 20000 / 200000 migrating birds from drowning in oil, they answered $80, $78, and $88, respectively. "How much do I value it if 20,000 birds are saved from drowning in oil" is a hard question, and we're unsure what to compare it with. So we substitute an easier, clearer question: "how much emotion do I feel when I think about birds drowning in oil?" That question doesn't take the number of birds into account, so the number gets mostly ignored.
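The insensitivity is starkest when you divide dollars by birds. A quick sketch using the figures from the study above:

```python
# Willingness to pay per bird, from the scope-insensitivity study:
# birds saved -> total dollars subjects said they would pay.
wtp = {2_000: 80, 20_000: 78, 200_000: 88}

for birds, dollars in wtp.items():
    # Implied value of a single bird at each scale.
    print(f"{birds:>7} birds: ${dollars} total, ${dollars / birds:.4f} per bird")
```

The implied per-bird value falls by roughly a factor of a hundred as the number of birds grows a hundredfold -- the stated amounts track the emotional image, not the scope.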
So diversification and scope insensitivity are two biases that people have, and which affect charitable giving. What others are there?
According to Baron & Szymanska (2010), there are a number of heuristics involved in giving that lead to various biases. Diversification we are already familiar with. The others are Evaluability, Average vs. Marginal Benefit, Prominence, Identifiability, and Voluntary vs. Tax.
This topic is not really related to the things normally discussed here, but I think it's really important, and it might interest Less Wrongers -- especially since many of us are interested in ethics and in utility calculations that are essentially cost-benefit analyses. Bone marrow donation in the United States is managed by the National Marrow Donor Program. Because typing donors for matching purposes can be costly, the program often requires people signing up to pay a registration fee, which probably deters many sign-ups. These costs are being covered until the end of the month by a corporate sponsor, which means that right now, all you need to do if you live in the US is go to http://marrow.org/Join/Join_Now/Join_Now.aspx and fill out a simple questionnaire. You will be sent a kit to collect a cheek swab, and then you will be entered into the donor database. Registering does not obligate you to donate if a match comes up.
The reason I think this might interest Less Wrongers is that it's a really cheap way to improve the world. According to the program's website, about 1 in 500 potential donors is actually asked to donate, so registering doesn't make it all that likely that you will be asked to do anything more. If you ARE a match for someone who needs a donation, the cost to you is at most the temporary pain of marrow extraction (many donors are asked only for blood cells), while the other person's chance to live is much improved. This looks like a huge net positive.
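The "cheap lottery ticket" framing can be made explicit with some back-of-the-envelope arithmetic. The 1-in-500 figure is from the program's website as cited above; treating each registration as an independent draw is my own simplifying assumption.

```python
# Rough expected-value sketch of registering as a marrow donor.
p_asked = 1 / 500  # chance a registrant is ever asked to donate (per marrow.org)

# Expected number of donation requests per registrant.
expected_requests = p_asked

# Equivalently: about this many sign-ups are needed, in expectation,
# before one registrant is matched to a patient.
signups_per_match = 1 / p_asked

print(f"Expected requests per registrant: {expected_requests}")  # 0.002
print(f"Sign-ups per expected match: {signups_per_match:.0f}")   # 500
```

A few minutes of sign-up time buys a 0.2% chance of being the match someone needs, which is why the post calls it a lottery ticket worth holding.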
Unfortunately I only found out about this a few days ago, and it only occurred to me today that this might be a forum of people who would respond to the argument "you can make the world better at little cost to yourself." Still, I ask that you go to the website and spend a few minutes signing up. This is like buying a 1-in-500 lottery ticket that SAVES SOMEONE'S LIFE. If the Singularity hits and an FAI can generate perfectly matched marrow for anyone who needs it from totipotency-induced cells, that will be wonderful, but this is a chance to make sure one more person gets there.
From Mike Darwin's Chronopause, an essay titled "Would You Like Another Plate of This?", discussing people's attitudes to life:
The most important, the most obvious and the most factual reason why cryonics is not more widely accepted is that it fails the “credibility sniff test” in that it makes many critical assumptions which may not be correct...In other words, cryonics is not proven. That is a plenty valid reason for rejecting any costly procedure; dying people do this kind of thing every day for medical procedures which are proven, but which have a very low rate of success and (or) a very high misery quotient. Some (few) people have survived metastatic head/neck cancer – the film critic Roger Ebert, is an example (Figure 1). However, the vast majority of patients who undergo radical neck surgery for cancer die anyway. For the kind and extent of cancer Ebert had, the long term survival rate (>5 years) is ~5% following radical neck dissection and ancillary therapy: usually radiation and chemotherapy. This is thus a proven procedure – it works – and yet the vast majority of patients refuse it.
From the SingInst blog:
Thanks to the generosity of several major donors†, every donation to the Singularity Institute made now until August 31, 2011 will be matched dollar-for-dollar, up to a total of $125,000.
(Visit the challenge page to see a progress bar.)
Now is your chance to double your impact while supporting the Singularity Institute and helping us raise up to $250,000 to help fund our research program and stage the upcoming Singularity Summit… which you can register for now!
† $125,000 in backing for this challenge is being generously provided by Rob Zahra, Quixey, Clippy, Luke Nosek, Edwin Evans, Rick Schwall, Brian Cartmell, Mike Blume, Jeff Bone, Johan Edström, Zvi Mowshowitz, John Salvatier, Louie Helm, Kevin Fischer, Emil Gilliam, Rob and Oksana Brazell, Guy Srinivasan, John Chisholm, and John Ku.
2011 has been a huge year for Artificial Intelligence. With the IBM computer Watson defeating two top Jeopardy! champions in February, it’s clear that the field is making steady progress. Journalists like Torie Bosch of Slate have argued that “We need to move from robot-apocalypse jokes to serious discussions about the emerging technology.” We couldn’t agree more — in fact, the Singularity Institute has been thinking about how to create safe and ethical artificial intelligence since long before the Singularity landed on the front cover of TIME magazine.
The last 1.5 years were our biggest ever. Since the beginning of 2010, we have:
- Held our annual Singularity Summit, in San Francisco. Speakers included Ray Kurzweil, James Randi, Irene Pepperberg, and many others.
- Held the first Singularity Summit Australia and Singularity Summit Salt Lake City.
- Held a wildly successful Rationality Minicamp.
- Published seven research papers, including Yudkowsky’s much-awaited ‘Timeless Decision Theory‘.
- Helped philosopher David Chalmers write his seminal paper ‘The Singularity: A Philosophical Analysis‘, which has sparked broad discussion in academia, including an entire issue of Journal of Consciousness Studies and a book from Springer devoted to responses to Chalmers’ paper.
- Launched the Research Associates program.
- Brought MIT cosmologist Max Tegmark onto our advisory board, published our Singularity FAQ, and much more.
In the coming year, we plan to do the following:
- Hold our annual Singularity Summit, in New York City this year.
- Publish three chapters in the upcoming academic volume The Singularity Hypothesis, along with several other papers.
- Improve organizational transparency by creating a simpler, easier-to-use website that includes Singularity Institute planning and policy documents.
- Publish a document of open research problems related to Friendly AI, to clarify the research space and encourage other researchers to contribute to our mission.
- Add additional skilled researchers to our Research Associates program.
- Publish well-researched documents making the case for existential risk reduction as optimal philanthropy.
- Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.
We appreciate your support for our high-impact work. As PayPal co-founder and Singularity Institute donor Peter Thiel said:
“I’m interested in facilitating a forum in which there can be… substantive research on how to bring about a world in which AI will be friendly to humans rather than hostile… [The Singularity Institute represents] a combination of very talented people with the right problem space [they’re] going after… [They’ve] done a phenomenal job… on a shoestring budget. From my perspective, the key question is always: What’s the amount of leverage you get as an investor? Where can a small amount make a big difference? This is a very leveraged kind of philanthropy.”
Donate now, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed through Causes.com, Google Checkout, or PayPal. If you have questions about donating, please call Amy Willey at (586) 381-1801.
Summary: The psychology of charitable giving offers three pieces of advice to those who want to give charity and those who want to receive it: Enjoy the happiness that giving brings, commit future income, and realize that requesting time increases the odds of getting money.
One Saturday morning in 2009, an unknown couple walked into a diner, ate their breakfast, and paid their tab. They also paid the tab for some strangers at another table.
And for the next five hours, dozens of customers joined in the joy of giving and paid the favor forward.
Several studies suggest that giving does bring happiness. One study found that asking people to commit random acts of kindness can increase their happiness for weeks.1 And at the neurological level, giving money to charity activates the reward centers of the brain, the same ones activated by everything from cocaine to great art to an attractive face.2
Another study randomly assigned participants to spend money either on themselves or on others. As predicted, those who spent money helping others were happier at the end of the day.3
Other studies confirm that just as giving brings happiness, happiness brings giving. A 1972 study showed that people are more likely to help others if they have recently been put in a good mood by receiving a cookie or finding a dime left in a payphone.4 People are also more likely to help after they read something pleasant,5 or when they are made to feel competent at something.6
In fact, deriving happiness from giving may be a human universal.7 Data from 136 countries shows that spending money to help others is correlated with happiness.8
But correlation does not imply causation. To test for causation, researchers randomly assigned participants from two very different cultures (Canada and Uganda) to write about a time when they had spent money on themselves (personal spending) or on others (prosocial spending). Participants reported their happiness levels before and after the writing exercise. As predicted, those who wrote (and thought) about a time when they had engaged in prosocial spending saw greater increases in happiness than those who wrote about spending money on themselves.
So does happiness run in a circular motion?
This, too, has been tested. In one study,9 researchers asked each subject to describe the last time they spent either $20 or $100 on themselves or on someone else. Next, researchers had each participant report their level of happiness, and then predict which future spending behavior ($5 or $20, on themselves or others) would make them happiest.
Subjects assigned to recall prosocial spending reported being happier than those assigned to recall personal spending. Moreover, this reported happiness predicted the future spending choice, but neither the purchase amount nor the purchasing target (oneself or others) did. So happiness and giving do seem to reinforce each other.
So, should charities remind people that donating will make them happy?
This, alas, has not been tested. But for now we might guess that just as people generally do things they believe will make them happier, they will probably give more if persuaded by the (ample) evidence that generosity brings happiness.
Lessons for optimal philanthropists: Read the studies showing that giving brings happiness. (Check the footnotes below.) Pick out an optimal charity in advance, notice when you're happy, and decide to give them money right then.
Lessons for optimal charities: Teach your donors how to be happy. Remind them that generosity begets happiness.
12/13/2011 - A 2011 update with data from the 2010 fiscal year is in progress. Should be done by the end of the week or sooner.
- I am not affiliated with the Singularity Institute for Artificial Intelligence.
- I have not donated to the SIAI prior to writing this.
- I made this pledge prior to writing this document.
- Images are now hosted on LessWrong.com.
- The 2010 Form 990 data will be available later this month.
- It is not my intent to propagate misinformation. Errors will be corrected as soon as they are identified.
Acting on gwern's suggestion in his Girl Scout Cookie analysis, I decided to look at SIAI funding. After reading about the Visiting Fellows Program and more recently the Rationality Boot Camp, I decided that the SIAI might be something I would want to support. I am concerned with existential risk and grapple with the utility implications. I feel that I should do more.
I wrote on the mini-boot camp page a pledge that I would donate enough to send someone to rationality mini-boot camp. This seemed to me a small cost for the potential benefit. The SIAI might get better at building rationalists. It might build a rationalist who goes on to solve a problem. Should I donate more? I wasn’t sure. I read gwern’s article and realized that I could easily get more information to clarify my thinking.
So I downloaded the SIAI’s Form 990 annual IRS filings and started to write down notes in a spreadsheet. As I gathered data and compared it to my expectations and my goals, my beliefs changed. I now believe that donating to the SIAI is valuable. I cannot hide this belief in my writing. I simply have it.
My goal is not to convince you to donate to the SIAI. My goal is to provide you with information necessary for you to determine for yourself whether or not you should donate to the SIAI. Or, if not that, to provide you with some direction so that you can continue your investigation.
Singularity Institute is today's featured charity on Philanthroper.com
Philanthroper is a micro-giving site that profiles small charities hand-selected by their editors. Their site encourages donors to “give every day” by only requesting $1 contributions.
A group of Singularity Institute donors has stepped forward to match all donations given through Philanthroper today, so I'd encourage each of you to give $1 now if you support Singularity Institute and have a US-based bank account (a Philanthroper requirement). We'd like to have a healthy total raised by the end of the day. The fundraiser has already been featured on Gizmodo, but please submit it to other news sites if you can.
I signed up and gave my $1.
If I were to take all of my friends and divide them into two groups, there are plenty of criteria I could choose, but probably the most relevant slice would be between my friends who believe in God, and my friends who don’t.
Many in the believer group know each other as well. The evangelical Christian community in my city is fairly tight-knit. Every once in a while I’ll meet someone new, I’ll mention offhand something about church, it’ll become the topic of conversation, and suddenly we discover that we share a dozen mutual friends.
My non-believer friends come from all walks of life. My old friends from high school fit in this category; so do many of the friends I’ve met through university or part-time jobs. There’s no tight-knit community here. I wouldn’t describe many of them as rationalists, particularly, but it seems that according to lesswrong doctrine, they are above the sanity waterline while my first friend group is below.
Something about this bothers me. Maybe it’s because I find it so refreshing to be with a group of people who are relentlessly positive about life, who constantly remind one another to be positive, and who offer concrete help rather than judgement. Once, when another of our friends couldn’t pay her rent, my Christian friend and I got up at four, took out five hundred dollars in cash at a convenience store, and biked to her house to leave it anonymously in her mailbox before I left for my six am shift at work. The high lasted all day. I can’t think of any other community where this would happen, where it would even be socially acceptable.
I met people at church who had survived the worst circumstances; they had been abused, they had been addicts, they had been homeless. But aside from the concrete help they’d found at church, they’d found some kind of hope as well. They believed that they could succeed. I’ve been incredibly lucky in my life, and I’ve never had reason to doubt that I would succeed, or that people would be there to help me if I ever failed. But for people who’ve only seen evidence that they will fail and be stepped on, the benefits of being told that God loves them unconditionally seem to be non-trivial.
Now, to contrast with my non-religious friends: this isn’t universally true, but I’ve seen a trend of general negativity. This attitude can be self-directed, e.g. complaining about work or school or relationships without any effort to find solutions. I know some very unhappy people, and it seems insane to me that they just sit back and take it, month after month. The negative attitude can also be directed outwards into biting sarcasm and rude, judgemental comments about others. This often comes from people who seem happy enough with their own lives. Maybe I didn’t notice this as much before I started going to church, where it became obvious in its absence.
I have the same tendencies to criticize and judge as anyone, but at least I notice them and try to keep them in check. I try to ask myself if it really helps to criticize someone. Does whatever I think they’re doing wrong really affect me? Is it my business to correct them? Would they listen to criticism? If I’m a reliable example, most people hate being criticized. It takes a conscious effort to step back and see criticism in a positive light. I try to take this step, and maybe most rationalists-in-the-making do the same, but that’s not the general population, and starting with a criticism tends to close people off and put them on the defensive. The last question I ask myself is: do I want to help them by suggesting a change, or do I only want to vent my own frustration? Venting doesn’t help them, and it doesn’t help me, because for me, anyway, focusing on the negative side of an issue tends to flip my entire mindset into the negative.

And negative attitudes are contagious. If one person at work is ranting about a bad breakup or a fight with their family, I’ll often catch myself brooding about someone or something I’m annoyed with. If I’m lucky and I’m paying attention, I notice the subliminal messaging before it really gets to me. Sometimes I feel like barking, “Hey, keep your problems to yourself, I’m trying to be positive here.” But again, if I’m paying attention to my own reactions, I ask myself if it’ll really help to snap at them, and the answer is no, so I’ll try to be an understanding listener.
These are things I do consciously, but since I stopped going to church regularly, I’ve noticed that it’s more of an effort. It feels like I’m holding up a heavy weight alone, going through my day talking to roommates and classmates and co-workers who don’t make any special effort to be positive or non-judgemental or helpful. And as soon as I let down my guard, I slip back into the trap of reacting to criticism defensively instead of constructively, of snapping back on reflex, of making excuses for why I was rude to someone or left my dirty dishes in the sink. I hate the way I act in this default mode, but it’s easy to make excuses for that too. I tell myself that I’m tired, that I’m burnt out, that I can’t be everything to everyone. I tell myself it’s not fair that I try so much harder than everyone else.
At church, there was a marked lack of excuses. The general attitude was that you could be as strong as you needed to be, because it wasn’t your strength, it was God’s strength. The way I see it, it was more the combined strength of a community united by a common ideal. It was like a self-help group, but without the stigma. (Maybe the stigma is imaginary; I just know that I have a negative emotional reaction to self-help books and websites. I know this is probably counterproductive, but I can’t seem to get rid of it.)
I talk to some of my friends, the non-religious ones, and I notice that maybe half the time they’re grumpy or upset or angry or offended, and they don’t stop to think about it, or take the step away that would allow them to question and overcome those feelings. My Christian friends aren’t perfect, and they do occasionally slip into anger and frustration, but they often notice. They often bring it up afterwards, in front of the group, as an example of something they need to work on.
This is why, even though I don’t believe in God and would probably be incapable of it at this point, the last thing I want to do is judge people who believe. A lot of the time, they’ve found something that helps them. This is why I found it instrumentally rational, for six months, to go to youth group once a week and sing songs about Jesus. Happiness is a hard thing to pin down, but I liked myself better during that time. It’s easier to be generous when everyone is being generous around you; it’s easier to be kind and helpful when everyone else is acting that way too. It feels like being held accountable.
I don’t really know what this means. It’s hard to generalize, because I’m talking about people in my age group; most of us are poor and not settled in our lives, without firmly developed social networks. Maybe later on in life, people can make their own tight-knit communities without religion as binding glue; my parents, for example, have an incredibly extensive social group. And I certainly don’t want to imply that all Christian organizations are as open and welcoming as the one I attended. I’m sure that plenty of people have had bad experiences. But what I’ve seen suggests to me that my church (a Pentecostal evangelical Christian group, by the way) served a function in our city that wasn’t being filled by anything else.
It’s limited, of course, by the fact that its founders believe the Bible is literally true, even if they don’t apply that belief thoroughly. (This occasionally involves a tricky kind of doublethink, for example a person who denounces homosexuality when asked directly but who holds nothing against their homosexual friends.) Could the principles of rationality prompt a group of people to form this kind of community? I don’t know. But until then, I’m going to keep hanging out with Christians and sharing their positive thoughts.
Steven Landsburg argued, in an oft-quoted article, that the rational way to donate to charity is to give everything to the charity you consider most effective, rather than diversify; and that this is always true when your contribution is much smaller than the charities' endowments. Besides an informal argument, he provided a mathematical addendum for people who aren't intimidated by partial derivatives. This post will bank on your familiarity with both.
I submit that the math is sloppy and the words don't match the math. This isn't to say that the entire thing must be rejected; on the contrary, an improved set of assumptions will fix the math and make the argument whole. Yet it is useful to understand the assumptions better, whether you want to adopt or reject them.
And so, consider the math. We assume that our desire is to maximize some utility function U(X, Y, Z), where X, Y and Z are total endowments of three different charities. It's reasonable to assume U is smooth enough so we can take derivatives and apply basic calculus with impunity. We consider our own contributions Δx, Δy and Δz, and form a linear approximation to the updated value U(X+Δx, Y+Δy, Z+Δz). If this approximation is close enough to the true value, the rest of the argument goes through: given that the sum Δx+Δy+Δz is fixed, it's best to put everything into the charity with the largest partial derivative at (X,Y,Z).
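Spelled out in my own notation (Landsburg's addendum works with the partial derivatives directly), the linearization and the resulting corner solution look like this:

```latex
% First-order (linear) approximation around the current endowments (X, Y, Z):
U(X+\Delta x,\; Y+\Delta y,\; Z+\Delta z)
  \;\approx\; U(X,Y,Z) + U_X\,\Delta x + U_Y\,\Delta y + U_Z\,\Delta z,
\qquad
U_X \equiv \left.\frac{\partial U}{\partial X}\right|_{(X,Y,Z)} \text{ etc.}

% With the total donation fixed, \Delta x + \Delta y + \Delta z = B,
% the right-hand side is linear in the allocation, so it is maximized
% at a corner: put all of B into the charity whose partial derivative
% at (X, Y, Z) is largest.
```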
The approximation, Landsburg says, is good "assuming that your contributions are small relative to the initial endowments". Here's the thing: why? Suppose Δx/X, Δy/Y and Δz/Z are indeed very small - what then? Why does it follow that the linear approximation works? There's no explanation, and if you think this is because it's immediately obvious - well, it isn't. It may sound plausible, but the math isn't there. We need to go deeper.
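One way to make the gap precise is Taylor's theorem with an explicit second-order remainder; this is my gloss, not part of Landsburg's addendum:

```latex
% Exact expansion with Lagrange remainder, writing \Delta = (\Delta x, \Delta y, \Delta z):
U\big((X,Y,Z) + \Delta\big)
  \;=\; U(X,Y,Z) \;+\; \nabla U(X,Y,Z)\cdot\Delta
  \;+\; \tfrac{1}{2}\,\Delta^{\top} H_U(\xi)\,\Delta,
\qquad
\xi \text{ on the segment from } (X,Y,Z) \text{ to } (X,Y,Z)+\Delta.

% The corner-solution argument goes through only if the remainder term is
% small relative to the differences among the first-order terms. But the
% smallness of the ratios \Delta x / X, \Delta y / Y, \Delta z / Z says
% nothing by itself about the Hessian H_U, so some further assumption
% about the curvature of U is needed to justify the approximation.
```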