
Should We Tell People That Giving Makes Them Happier?

11 peter_hurford 04 September 2013 09:22PM

Why do people give to charity?

It seems strange to even ask. Most people would point to the fact that they’re altruistic and want to make a difference. Others are concerned with inequality and justice. Another group points to the concept of “paying it forward” or repaying a debt to society. Other explanations cite various religious or social reasons.

Not too many people cite the fact that giving makes them happier. Even if people agree this is true, I don’t often hear it as people’s main reason. Instead, it’s more like a beneficial side effect. In fact, it seems pretty odd to me to hear someone boldly proclaim that they give only because it makes them happier, even if it might be true.

 

But if it’s true that giving does make people happier, should we promote that fact publicly and loudly?  Luke's article "Optimal Philanthropy for Human Beings" suggests that we should tell people to enjoy the happiness that giving brings.  Perhaps it would be a great opportunity to tap into groups who otherwise wouldn’t consider giving, or who hold the misconception that giving would make them miserable.

However, I’m a bit worried about how it might affect people’s incentives.  In this essay, I follow the evidence provided in the Harvard Business School working paper "Feeling Good About Giving: The Benefits (and Costs) of Self-Interested Charitable Behavior" by Lalin Anik, Lara B. Aknin, Michael I. Norton, and Elizabeth W. Dunn. Overall, in light of potential incentive effects, I think caution and further investigation are warranted when promoting the happiness side of giving.

 

Giving and Happiness

Giving What We Can has published its own review of research on happiness and giving and finds a pretty strong connection. And it’s true -- lots of evidence confirms the connection and even indicates that it’s a causal relationship rather than a misleading correlation. In fact, it goes in both directions -- giving makes people happier, and happier people are more likely to give[1].

Neurological studies found that people experienced pleasure when they saw money go to charity, even when it wasn’t their own, and experienced even more pleasure when they gave to charity directly[2], a conclusion that has been backed up with revealed preference tests in the lab[3, 4].

This connection has also been backed up in numerous experimental studies. Asking people to commit random acts of kindness can significantly increase self-reported levels of happiness compared to a control group[5]. Further research found that the amount people spent on gifts for others and donations to charity correlates with their self-reported happiness, while the amount they spent on bills, expenses, and gifts for themselves did not[6]. Additionally, people given money and randomly assigned to spend money on others were happier than those randomly assigned to spend the same amount of money on themselves[7].

 

Altering Incentives

People generally believe that spending on themselves will make them much happier than spending on others[6], which, given that this isn’t the case, means there is plenty of room for changing people’s minds. However, any social scientist or avid reader of Freakonomics knows that altering incentives can create unintended effects. So is there a potential harm in getting people to give more via advertising a self-interested motive?

The classic example is that of the childcare center that had problems with parents who were late to pick up their children. The owners reasoned that if they charged fines, parents would stop being late, because they would have an economic incentive not to be. They found instead, however, that introducing a fine actually created even more tardiness[8], presumably because what was once seen as rude and in bad faith could now be made up for with a small economic cost. More surprisingly, the amount of lateness did not return to pre-fine levels even after the owners stopped the policy[9].

Other studies have found similar effects. In one study, 3-5 year old nursery students who all initially seemed intrinsically interested in various activities were randomly put into three groups. One group made a pre-arranged deal to do one of the activities they seemed interested in, in exchange for a reward; another group was surprised with a reward after doing the activity in question; and the third group was not rewarded at all. Those who were promised a reward upfront ended up significantly less intrinsically interested in the task than the other groups after the study was finished[10]. A similar study found that students who were interested in solving puzzles stopped solving them once a period in which they were paid to solve them ended[11].

 

In general, money and reminders of money tend to make people less pro-social[12]. This has also been found to some degree specifically in the world of charity. In a randomized field experiment, donors were encouraged to donate to disaster relief in the US and were randomly either enticed with an offer of donation matching or not. The study found that while people donated more often with the promise of donation matching, their contributions after the donation matching dropped below the control group, ending with a negative net effect overall[13].

Another study found that when gifts were sent out to donors, larger gifts resulted in a larger response rate of returned donations, but yielded a smaller average donation[14], though I suppose this could just be because more people who usually would give nothing were giving a small amount, bringing the average down. More importantly, this study found no net decrease in future donations after gifts were no longer sent out; instead, donations returned to their normal levels[14].

It’s certainly worth noting times when appeals to self-interest are successful, though I couldn’t find any studies where this was the case. However, there is one anecdotal example: as Nick Cooney points out in "Self-Interest Can Make the World a Better Place -- For Animals, At Least", the reduction in eating factory farmed meat is coming almost entirely from people motivated not by concern for animal cruelty, but by concern for their own health. Could advocating self-interested donations be the same as advocating health-motivated vegetarianism?

 

Opportunities for Further Investigation

It’s not good to let things stay unclear if they don’t have to be, and I think we can resolve this issue with more scientific study. For example, one could randomly select one group to receive information about giving and happiness, another group to receive other standard arguments for giving, and a control group to receive no arguments or information about giving at all, and track their donation habits in a longitudinal study. This study would have its complications for sure, but could help show whether information about giving and happiness backfires or not.

Or perhaps one could perform a field experiment. You could set up a booth asking people to donate to your cause, randomly include information about giving and happiness in your pitch or not, and see how this affects immediate and long-term contributions. Doing this would have the added advantages of being much quicker to run and of not leading people to donate only because they think they’re being observed.

 

References

[1]: Anik, Lalin, Lara B. Aknin, Michael I. Norton, Elizabeth W. Dunn. 2009. “Feeling Good about Giving: The Benefits (and Costs) of Self-Interested Charitable Behavior”. Harvard Business School Working Paper 10-012.

[2]: Harbaugh, William T. 2007. "Neural Responses to Taxation and Voluntary Giving Reveal Motives for Charitable Donations." Science 316: 1622-1625.

[3]: Andreoni, James, William T. Harbaugh, and Lise Vesterlund. 2007. "Altruism in Experiments". New Palgrave Dictionary of Economics.

[4]: Mayr, Ulrich, William T. Harbaugh, and Dharol Tankersley. 2008. "Neuroeconomics of Charitable Giving and Philanthropy". In Glimcher, Paul W., Ernest Fehr, Colin Camerer, and Russel Alan Poldrack (eds.) 2009. Neuroeconomics: Decision Making and the Brain. Academic Press: London.

[5]: Lyubomirsky, Sonja, Kennon M. Sheldon, and David Schkade. 2005. "Pursuing Happiness: The Architecture of Sustainable Change." Review of General Psychology 9 (2): 111–131.

[6]: Aknin, Lara B., et al. 2010. "Pro-social Spending And Well-Being: Cross-Cultural Evidence for a Psychological Universal." National Bureau of Economic Research Working Paper #16415.

[7]: Dunn, Elizabeth W., Lara B. Aknin, and Michael I. Norton. 2008. “Spending Money on Others Promotes Happiness.” Science 319: 1687-1688.

[8]: Gneezy, Uri and Aldo Rustichini. 2000a. “A fine is a price.” Journal of Legal Studies 29: 1-18.

[9]: Gneezy, Uri and Aldo Rustichini. 2000b. “Pay enough or don't pay at all.” Quarterly Journal of Economics 115: 791-810.

[10]: Lepper, Mark R., David Greene, and Richard E. Nisbett. 1973. “Undermining Children's Intrinsic Interest with Extrinsic Reward: A Test of the ‘Overjustification’ Hypothesis.” Journal of Personality and Social Psychology 28(1): 129-137.

[11]: Deci, Edward L. 1971. “Effects of Externally Mediated Rewards on Intrinsic Motivation.” Journal of Personality and Social Psychology 18(1): 105-115.

[12]: Vohs, Kathleen D., Nicole L. Mead, and Miranda R. Goode. 2006. "The Psychological Consequences of Money". Science 314: 1154-1156.

[13]: Meier, Stephan. 2007. “Do Subsidies Increase Charitable Giving in the Long Run? Matching Donations in a Field Experiment”. Federal Reserve Bank of Boston Working Paper #06-18.

[14]: Falk, Armin. 2005. “Gift Exchange in the Field”. University of Bonn.

-

(This essay is also cross-posted on the Giving What We Can blog and my blog.)

Why I'm Skeptical About Unproven Causes (And You Should Be Too)

31 peter_hurford 29 July 2013 09:09AM

Since living in Oxford, one of the centers of the "effective altruism" movement, I've been spending a lot of time discussing the classic “effective altruism” topic -- where it would be best to focus our time and money.

Some people here seem to think that the most important things to focus our time and money on are speculative projects -- projects that promise a very high impact but involve a lot of uncertainty.  One very common example is "existential risk reduction", or attempts to make a long-term far future for humans more likely, say by reducing the chance of events that would cause human extinction.

I do agree that the far future is the most important thing to consider, by far (see papers by Nick Bostrom and Nick Beckstead).  And I do think we can influence the far future.  I just don't think we can do it in a reliable way.  All we have are guesses about what the far future will be like and guesses about how we can affect it. All of these ideas are unproven, speculative projects, and I don't think they deserve the main focus of our funding.

While I waffled in cause indecision for a while, I'm now going to resume donating to GiveWell's top charities, except when I have an opportunity to use a donation to learn more about impact.  Why?  My case is that speculative causes, or any cause with high uncertainty (reducing nonhuman animal suffering, reducing existential risk, etc.), require that we rely on our commonsense to evaluate them with naïve cost-effectiveness calculations, and this (1) is demonstrably unreliable, with a bad track record; (2) plays right into common biases; and (3) doesn’t make sense given how we ideally make decisions.  While it’s unclear what long-term impact a donation to a GiveWell top charity will have, the near-term benefit is quite clear and worth investing in.

 

Focusing on Speculative Causes Requires Unreliable Commonsense

How can we reduce the chance of human extinction? It just makes sense that if we fund cultural exchange programs between the US and China, there will be more goodwill for the other within each country, and therefore the countries will be less likely to nuke each other. Since nuclear war would likely be very bad, it's of high value to fund cultural exchange programs, right?

Let's try another. The Machine Intelligence Research Institute (MIRI) thinks that someday artificially intelligent agents will become better than humans at making AIs. At that point, AI will build a smarter AI, which will build an even smarter AI, and -- FOOM! -- we have a superintelligence. It's important that this superintelligence be programmed to be benevolent, or things will likely be very bad. And we can stop this bad event by funding MIRI to write more papers about AI, right?

Or how about this one? It seems like there will be challenges in the far future that will be very daunting, and if humanity handles them wrong, things will be very bad. But if people were better educated and had more resources, surely they'd be better at handling those problems, whatever they may be. Therefore we should focus on speeding up economic development, right?

These three examples are very common appeals to commonsense.  But commonsense hasn't worked very well in the domain of finding optimal causes.

 

Can You Pick the Winning Social Program?

Benjamin Todd makes this point well in "Social Interventions Gone Wrong", where he provides a quiz with eight social programs and asks readers to guess whether they succeeded or failed.

I'll wait for you to take the quiz first... doo doo doo... la la la...

Ok, welcome back. I don't know how well you did, but success on this quiz is very rare, and this poses problems for commonsense.  Sure, I'll grant you that Scared Straight sounds pretty suspicious. But the Even Start Family Literacy Program? It just makes sense that providing education to boost literacy skills and promote parent-child literacy activities should boost literacy rates, right? Unfortunately, that intuition was wrong -- wrong in a very counter-intuitive way. There wasn't an effect.

 

GiveWell and Commonsense's Track Record of Failure

Commonsense actually has a track record of failure. GiveWell has been talking about this for ages.  Every time GiveWell has found an intervention hyped by commonsense notions of high impact and looked into it further, they've ended up disappointed.

The first was the Fred Hollows Foundation. A lot of people had been repeating the figure that the Fred Hollows Foundation could cure blindness for $50. But GiveWell found that number suspect.

The second was VillageReach. GiveWell originally put them as their top charity and estimated them as saving a life for under $1000. But further investigation kept leading them to revise their estimate until ultimately they weren't even sure if VillageReach had an impact at all.

Third, there is deworming. Originally, deworming was announced as saving a year of healthy life (DALY) for every $3.41 spent. But when GiveWell dove into the spreadsheets that resulted in that number, they found five errors. When the dust settled, the $3.41 figure was found to actually be off by a factor of 100. It was revised to $326.43.

Why shouldn't we expect the same failures in other areas where calculations are even looser and numbers are even less settled, like efforts devoted to speculative causes? Our only recourse is to fall back on interventions that have actually been studied.

 

People Are Notoriously Bad At Predicting the (Far) Future

Cost-effectiveness estimates also frequently require making predictions about the future. Existential risk reduction, for example, requires making predictions about what will happen in the far future, and how your actions are likely to affect events hundreds of years down the road. Yet experts are notoriously bad at making these kinds of predictions.

James Shanteau found in "Competence in Experts: The Role of Task Characteristics" (see also Kahneman and Klein's "Conditions for Intuitive Expertise: A Failure to Disagree") that experts perform well when thinking about static stimuli, thinking about things, and when feedback and objective analysis are available. In contrast, experts perform pretty badly when thinking about dynamic stimuli, thinking about behavior, and when feedback and objective analysis are unavailable.

Predictions about existential risk reduction and the far future are firmly in the second category. So how can we trust our predictions about our impact on the far future? Our only recourse is to fall back on interventions that we can reliably predict, until we get better at prediction (or invest money in getting better at making predictions).

 

Even Broad Effects Require Specific Attempts

One potential resolution to this problem is to argue for “broad effects” rather than “specific attempts”.  Perhaps it’s difficult to know whether a particular intervention will go well, and perhaps it’s mistaken to focus entirely on Friendly AI, but surely if we improved incentives and norms in academic work to better advance human knowledge (meta-research), improved education, or advocated for effective altruism, the far future would be much better equipped to handle threats.

I agree that these broad effects would make the far future better, and I agree that it’s possible to implement these broad effects and change the far future.  The problem, however, is that it can’t be done in an easy or well-understood way.  Any attempt to implement a broad effect would require a specific action with an unknown expectation of success and unknown cost-effectiveness.  It’s definitely beneficial to advocate for effective altruism, but could this be done in a cost-effective way?  A way that’s more cost-effective at producing welfare than AMF?  How would you know?

In order to accomplish these broad effects, you’d need specific organizations and interventions to channel your time and money into.  And by picking these specific organizations and interventions, you’re losing the advantage of broad effects and tying yourself to particular things with poorly understood impact and no track record to evaluate. 

 

Focusing on Speculative Causes Plays Into Our Biases

We've now known for quite a long time that people are not all that rational. Instead, human thinking fails in very predictable and systematic ways.  Some of these ways make us less likely to take speculative causes seriously, such as ambiguity aversion, the absurdity heuristic, scope neglect, and overconfidence bias.

But there’s also the other side of the coin: biases that might make people reason poorly in favor of speculative causes like existential risk reduction:

Optimism bias. People generally think things will turn out better than they actually will. This could lead people to think that their projects will have a higher impact than they actually will, which would lead to higher estimates of cost-effectiveness than is reasonable.

Control bias. People like to think they have more control over things than they actually do. This plausibly also includes control over the far future. Therefore, people are probably biased into thinking they have more control over the far future than they actually do, leading to higher estimates of ability to influence the future than is reasonable.

"Wow factor" bias. People seem attracted to more impressive claims. Saving a life for $2500 through a malaria bed net seems much more boring compared to the chance of saving the entire world by averting a global catastrophe. Within the Effective Altruist / LessWrong community, existential risk reduction is cool and high status, whereas averting global poverty is not. This might lead to more endorsement of existential risk reduction than is reasonable.

Conjunction fallacy.  People have a problem assessing probability properly when there are many steps involved, each of which has a chance of not happening. Ten steps, each with an independent 90% success rate, have only a 35% chance of all succeeding (see the quick check below).  Focusing on the far future seems to require that a lot of largely independent events happen as predicted. This would mean people are worse at estimating their chances of helping the far future, creating higher cost-effectiveness estimates than is reasonable.

Selection bias.  When trying to find trends in history that are favorable for affecting the far future, some examples can be provided.  However, this is because we usually hear about the interventions that end up working, whereas all the failed attempts to influence the far future are never heard of again.  This creates a very skewed sample that can bias our thinking about our chances of influencing the far future.
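(Here's the quick check on the conjunction fallacy arithmetic mentioned above, as a one-line Python illustration of my own:)

    # Probability that all ten independent steps succeed, at 90% each
    print(0.9 ** 10)   # ≈ 0.349, i.e. only about a 35% chance of overall success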

 

It’s concerning that there are numerous biases weighted both for and against speculative causes, and this means we must tread carefully when assessing their merits.  However, I would strongly expect bias to be worse in favor of speculative causes than against them, because speculative causes lack the feedback and objective evidence needed to help insulate against bias, whereas a focus on global health does not.

 

Focusing on Speculative Causes Uses Bad Decision Theory

Furthermore, not only is the case for speculative causes undermined by a bad track record and possible cognitive biases, but the underlying decision theory seems suspect in a way that's difficult to place.         

 

Would you play a lottery with no stated odds?

Imagine another thought experiment -- you're asked to play a lottery. You have to pay $2 to play, but you have a chance at winning $100. Do you play?

Of course, you don't know, because you're not given odds. Rationally, it makes sense to play any lottery where you expect to come out ahead on average. If the lottery is a coin flip, it makes sense to pay $2 for a 50/50 shot at winning $100, since you'd win $50 on average and come out ahead by $48 per play. With a sufficiently high reward, even a one in a million chance is worth it. Pay $2 for a 1/1M chance of winning $1B, and you'd expect to come out ahead by $998 per play on average.
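To make the expected-value reasoning concrete, here's a minimal sketch in Python (my illustration, not from the original post):

    def expected_net_gain(p_win, prize, cost):
        """Average net gain per play: probability-weighted prize minus the cost."""
        return p_win * prize - cost

    print(expected_net_gain(0.5, 100, 2))    # 48.0 -- the coin-flip lottery
    print(expected_net_gain(1e-6, 1e9, 2))   # 998.0 -- the one-in-a-million lottery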

But $2 for the chance to win $100, without knowing what the chance is? Even with some rough bounds -- say you knew the odds had to be at least 1/150 and at most 1/10, though you could be off by a little -- would you accept that bet?

Such a bet seems intuitively uninviting to me, yet this is the bet that speculative causes offer me.

 

"Conservative Orders of Magnitude" Arguments

In response to these considerations, I've seen people endorsing speculative causes look at their calculations and remark that even if their estimate were off by 1000x, or three orders of magnitude, they still would be on solid ground for high impact, and there's no way they're actually off by three orders of magnitude. However, Nate Silver's The Signal and the Noise: Why So Many Predictions Fail — but Some Don't offers a cautionary tale:

Moody’s, for instance, went through a period of making ad hoc adjustments to its model in which it increased the default probability assigned to AAA-rated securities by 50 percent. That might seem like a very prudent attitude: surely a 50 percent buffer will suffice to account for any slack in one’s assumptions? It might have been fine had the potential for error in their forecasts been linear and arithmetic. But leverage, or investments financed by debt, can make the error in a forecast compound many times over, and introduces the potential of highly geometric and nonlinear mistakes.

Moody’s 50 percent adjustment was like applying sunscreen and claiming it protected you from a nuclear meltdown—wholly inadequate to the scale of the problem. It wasn’t just a possibility that their estimates of default risk could be 50 percent too low: they might just as easily have underestimated it by 500 percent or 5,000 percent. In practice, defaults were two hundred times more likely than the ratings agencies claimed, meaning that their model was off by a mere 20,000 percent.

Silver points out that when estimating how safe mortgage backed securities were, the difference between assuming defaults are perfectly uncorrelated and defaults are perfectly correlated is a difference of 160,000x in your risk estimate -- or five orders of magnitude.

If these kinds of five-orders-of-magnitude errors are possible in a realm that has actual feedback and is moderately understood, how do we know the estimates for cost-effectiveness are safe for speculative causes that are poorly understood and offer no feedback?  Again, our only recourse is to fall back on interventions that we can reliably predict, until we get better at prediction.

 

Value of Information, Exploring, and Exploiting

Of course, there still is one important aspect of this problem that has not been discussed -- value of information -- or the idea that sometimes it’s worth doing something just to learn more about how the world works.  This is important in effective altruism too, where we focus specifically on “giving to learn”, or using our resources to figure out more about the impact of various causes.

I think this is actually really important and is not vulnerable to any of my previous arguments, because we’re not talking about impact, but rather learning value.  Perhaps one could look to an "explore-exploit model", or the idea that we achieve the best outcome when we spend a lot of time exploring first (learning more about how to achieve better outcomes) before exploiting (focusing resources on achieving the best outcome we can).  Therefore, whenever we have an opportunity to “explore” further or learn more about which causes have high impact, we should take it.

 

Learning in Practice

Unfortunately, in practice, I think these opportunities are very rare.  Many organizations that I think are “promising” and worth funding further to see what their impact looks like do not have sufficiently good self-measurement in place to assess their impact, or sufficient transparency to provide that information, making it difficult to actually learn from them.  And on the other side of things, many very promising opportunities to learn more are already fully funded.  One must be careful to ensure that it’s actually one’s marginal dollar that is getting marginal information.

 

The Typical Donor

Additionally, I don’t think the typical donor is in a very good position to assess where there is high value of information, or has the time and knowledge to act upon this information once it is acquired.  I think there’s a good argument for people in the “effective altruist” movement to perhaps make small investments in EA organizations and encourage transparency and good measurement in their operations to see if they’re successfully doing what they claim (or potentially create an EA startup themselves to see if it would work, though this carries large risks of further splitting the resources of the movement).

But even that would take a very savvy and involved effective altruist to pull off.  Assessing the value of information on more massive investments like large-scale research or innovation efforts would be significantly more difficult, beyond the talent and resources of nearly all effective altruists, and is probably best left to full-time foundations or subject-matter experts.

 

GiveWell’s Top Charities Also Have High Value of Information

As Luke Muehlhauser mentions in "Start Under the Streetlight, Then Push Into the Shadows", lots of lessons can be learned only by focusing on the easiest causes first, even if we have strong theoretical reasons to expect that they won’t end up being the highest impact causes once we have more complete knowledge.

We can use global health cost-effectiveness considerations as practice for slowly and carefully moving into more complex and less understood domains.  There are even some very natural transitions, such as beginning to look at "flow through effects" of reducing disease in the third world and at how more esoteric things, like climate change, affect the disease burden.  Therefore, even additional funding for GiveWell’s top charities has high value of information.  And notably, GiveWell is beginning this "push" through GiveWell Labs.

 

Conclusion

The bottom line is that things that look too good to be true usually are.  Therefore, I expect that the actual impact of speculative causes that make large promises will, upon thorough investigation, turn out to be much lower.

And this has been true in other domains. People are notoriously bad at estimating the effects of causes in both the developed world and developing world, and those are the causes that are near to us, provide us with feedback, and are easy to predict. Yet, from the Even Start Family Literacy Program to deworming estimates, our commonsense has failed us.

Add to that the fact that we should expect ourselves to perform even worse at predicting the far future. Add to that optimism bias, control bias, "wow factor" bias, and the conjunction fallacy, which make it difficult for us to think realistically about speculative causes. And then add to that considerations in decision theory, and whether we would bet on a lottery with no stated odds.

When all is said and done, I'm very skeptical of speculative projects.  Therefore, I think we should be focused on exploring and exploiting.  We should do whatever we can to fund projects aimed at learning more, when those are available, but be careful to make sure they actually have learning value.  And when exploring isn’t available, we should exploit what opportunities we have and fund proven interventions.

But don’t confuse these two concepts by funding causes intended for learning as if for their actual impact value.  I’m skeptical about these causes actually being high impact, though I’m open to the idea that they might be, and I look forward to funding them in the future once they become better proven.

-

Followed up in: "What Would It Take To 'Prove' A Skeptical Cause" and "Where I've Changed My Mind on My Approach to Speculative Causes".

This was also cross-posted to my blog and to effective-altruism.com.

I'd like to thank Nick Beckstead, Joey Savoie, Xio Kikauka, Carl Shulman, Ryan Carey, Tom Ash, Pablo Stafforini, Eliezer Yudkowsky, and Ben Hoskin for providing feedback on this essay, even if some of them might strongly disagree with its conclusion.

Giving Now Currently Seems to Beat Giving Later

1 peter_hurford 19 June 2013 06:40PM

Abstract: There is a debate between donating now and donating later (investing, realizing the returns on one's investment, and donating in a lump sum upon death).  While donating later may be appropriate in some circumstances, right now we can donate in order to go meta and recruit future donors who otherwise wouldn't have donated, thus adding more money than investment returns would.

-

Introduction

Among people interested in doing as much as they can to make the world a better place, who think donating their money as effectively as possible is a good way to do that, there is a debate about whether one should (a) donate a specific portion of one's income in small installments each year or (b) invest one's income and then donate as much as possible in one large lump sum right before death.  Of course, there are other positions -- like donating every month or every decade -- but in a very relevant sense the debate is between two options: "give now" vs. "give later".

In this essay, I will defend the "give now" camp and rebut the "give later" camp, explaining why I will continue to donate once a year, though I think any regular interval shorter than a year is probably also acceptable.

 

But First, Why Give Later?

The biggest champion of the "give later" crowd is Robin Hanson, who makes his view known most recently in "If More Now, Less Later":

But at the margin, a person who saves another dollar, or chooses not to borrow another dollar, must typically expect the financial returns from their investments will help them more in the future than will such indirect effects of spending today. In fact, they should expect this savings will benefit their future self more than any of these other ways of spending today. After all, why give up money today if that both gives you less to spend today, and gives you less in the future? So there wouldn’t be any savings, or less than maximal borrowing, if people didn’t expect more gains later from saving than from spending today.

This implies that unless charity recipients are saving nothing and borrowing as much as they possibly can, they must expect that you would benefit them more in the future by saving and giving them the returns of your savings later, than if you had given them the money today, even after taking into account all of the ways in which their spending today might help them in the future. So there really must be a tradeoff between helping today and helping later; if you help more today, you help less in the future. At least if you help them in a way they could have helped themselves, if only they had the money.

Or, more basically, there are two concepts:

First, there is the "growth rate" of a donation -- by donating to a non-profit now, you let it do things with your money now: helping people immediately, who are then in a better position to give back, or spending money on dreaded "overhead" (which isn't evil, by the way, but that's a matter for another time), putting the organization in a better position to grow.

Second, there is the "investment rate" of saving/investing your donation money rather than donating now.  This should be the easier of the two concepts -- you save your money, it grows at a certain percentage, and then right before you're about to die, you take all that money out and donate it.

And then, to oversimplify a complex topic: you should be willing to donate now as long as you think "growth rate > investment rate", and save now as long as you think "investment rate > growth rate", plus a few other complications to be discussed.

 

Ok, So What's The Investment Rate, Then?

Finding out the precise investment rate is complicated, but I think Jeff Kaufman has done some really good work on it in "What Rates of Return Should You Expect?".  Basically, many sites suggest returns on investment between 6% and 12%, but this (a) ignores the average of 3% inflation, (b) involves cherry-picking, and (c) only focuses on US data.  Articles like The Economist's "Beware of the Bias" point this out.

So if we're really optimistic, we could consider the investment rate to be 12%, and if we're really pessimistic, we could consider it to be 0% or lower!  But I'd expect the real (inflation-adjusted) rate of return on investment to be between 3% and 5%.  This means that if I had taken the $659.70 donation I made last year and invested it instead, then, given that I expect to live around 70 more years, I could realistically realize $5,372.94 to $21,687.36, compounded monthly.
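As a quick check on those figures, here's a minimal compounding sketch in Python (my illustration):

    def future_value(principal, annual_rate, years, periods_per_year=12):
        """Future value under periodic (monthly, by default) compounding."""
        return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

    print(future_value(659.70, 0.03, 70))   # ≈ 5,373 -- the $5,372.94 above
    print(future_value(659.70, 0.05, 70))   # ≈ 21,688 -- the $21,687.36 above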

 

Ok, So What's The Growth Rate, Then?

But what about the other side of the coin -- growth rate from your donation?  I see the potential size of this being huge.  Unlike the investment rate which I think is overestimated, the growth rate seems underestimated.

Honestly, whatever growth rate your donation has will depend almost entirely on where you donate.  But if you're donating to advocacy, you could realize really large returns because of what's called the haste consideration:

[I]magine two worlds:

(1) You don’t do anything altruistic for the next 2 years and then you spend the rest of your life after that improving the world as much as you can.

(2) You spend the next 2 years influencing people to become effective altruists and convince one person who is at least as effective as you are at improving the world. (And assume that this person wouldn’t have done anything altruistic otherwise.) You do nothing altruistic after the next 2 years, but the person you convinced does at least as much good as you did in (1).

By stipulation, world (2) is improved at least as much as world (1) is because, in (2), the person you convinced does at least as much good as you did in (1).

 

Essentially, convincing just one person to do what you would do (join Giving What We Can, donate to existential risk, do anti-aging research, etc.) as ardently as you would do it (or nearly as much) would have the same effect as your entire life's work.  Robin Hanson stated the case for giving later over giving now as "if more now, less later".

But with the haste consideration, it's actually if more now, more later, because you'll be convincing people to do things they would otherwise have not done at all (presumably).

And you can donate money to convincing people to do those things.  A donation to Giving What We Can can fund their media outreach and recruit more people to the idea of donating effectively.  A donation to MIRI could fund more salaries and recruitment efforts, etc.

Thus, let's assume that, in my lifetime, I'll earn at least a $50K/year salary.  Now assume that it costs a conservative $10K/year in outreach to recruit a committed lifetime Giving What We Can member, who will also earn at least a $50K/year salary.  (Feel free to quibble with these assumptions if you'd like, but don't risk missing the main point.)

Now consider that I can either:

(1) invest 10% of my salary each year at 5% return compounded monthly, and then, upon my death, "buy" as many committed GWWC members as possible.

(2) donate 10% of my salary each year to buy as many committed GWWC members as possible.

According to this investment calculator (which I used earlier), option #1 will yield $3,200,782.84, which could buy 320 committed GWWC members.
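The calculator's assumptions aren't stated, but its figure is consistent with depositing the $5K/year as twelve monthly installments at the start of each month. A rough reconstruction in Python, where the deposit schedule is my guess:

    # Rough reconstruction of option #1 -- the deposit schedule is my guess.
    monthly_deposit = 5000 / 12
    monthly_rate = 0.05 / 12
    balance = 0.0
    for month in range(70 * 12):
        balance = (balance + monthly_deposit) * (1 + monthly_rate)
    print(balance)   # ≈ $3.2M, within rounding of the post's $3,200,782.84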

Option #2 would add a committed member every two years, producing 35 committed GWWC members over my predicted 70-year lifespan.  This looks bad, but now remember that each of those members would then get a chance to recruit new members using their money.  The first person I recruit would then commit $340K of their own lifetime earnings over the remaining period (68 years, since I recruited them in year 2, at $5K/year contribution).  I'd only need to recruit ten members myself to dwarf the investment strategy, and that's not even counting all the second-stage members those first-stage members recruit.

At this point, the group effectively doubles every two-year period.  I save up $10K by year two and recruit someone; then in two years (year #4) we both have $10K and recruit someone each; and in two more years (year #6) all four of us have $10K and recruit someone each, bringing the group to eight people, seven of them recruited.  By year #70, I will have recruited, directly and indirectly, 2^35 - 1 people, or roughly 34 billion new GWWC members.
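Here's the toy model in Python (using the post's assumptions of $5K/year donations and $10K per recruit, so the group doubles every two years):

    people = 1                    # just me at year 0
    for year in range(2, 71, 2):  # a recruiting round every two years through year 70
        people *= 2               # every member recruits one new member
    print(people - 1)             # 2**35 - 1 ≈ 34.4 billion people recruited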

 

Obviously, this is implausible because there aren't nearly that many people alive, and recruitment efforts would certainly be nonlinear. And there are other problems.  But it's a simple way to prove the point.  Hopefully this shows the benefit of compounding via the haste consideration to recruit new people who wouldn't have donated their money otherwise.

Even if Option #1 were considered to recruit people who then saved all their money and recruited other people, by year #140 it would only have 102,400 people recruited, whereas Option #2 would have roughly 2^70, or about 10^21, people recruited.

While the growth rate probably isn't 100% every two years, it's considerably higher than 5%, I think.  Thus since growth rate > interest rate, I advocate donating now.  At least until someone finds the inevitable unobvious flaw in my thinking.

Of course, we'll probably hit diminishing marginal returns on GWWC recruiting soon, and thus should reconsider our donation target, re-evaluate the growth rate, and maybe shift back to investing rather than donating in the future.  But, right now, the prospects are still high and there are still many bright and active individuals who have not yet been recruited.  Right now, we can use our money to make sure they use their money better.

Thus my answer to Robin Hanson: More (of our money) now, more (of their money) later.  At least, while it lasts.

 

Other Complications

Now it's time to explore some additional factors that may modify the conclusion.  The main argument has been made and this is basically appendix material.  Feel free to stop reading at this point, if you wish.

US Tax Law

This is an obvious factor to consider: US law says that if you donate instead of invest, you can claim a deduction.  This deduction would appear annually, thus allowing you to donate additional money.  If you're earning the $50K I was talking about earlier, you'd be taxed at an effective rate of 16.9%.  This would mean you could get back $845 on a $5K donation, assuming I understand tax law correctly.

If I donate $5K annually without investing, I'd actually be able to donate $409,150 over my seventy years rather than the original $350,000.  If I had invested and then donated it all, I'd get $3,741,715.14 instead of $3,200,782.84.
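The arithmetic behind those two figures, as a quick Python sketch (assuming the effective 16.9% rate above):

    rate = 0.169                       # effective tax rate on a $50K salary
    donation = 5_000
    refund = donation * rate           # $845 back per year
    print(70 * (donation + refund))    # $409,150 donated over 70 years
    print(3_200_782.84 * (1 + rate))   # ≈ $3,741,715 for the invest-then-donate lump sum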

The Changing Nature of Non-Profits

There's a good chance that if Giving What We Can stopped receiving donations, it would simply collapse, and the opportunity to donate to it in seventy years would be gone (though I suppose it could be refounded with the new cash).  Additionally, GWWC wouldn't have been able to learn from seventy years of operating, spending money, and trying things.

And GWWC seems to be in a prime place right now to expand a lot and really grow its outreach.  Thus it's at a critical time to fund, when the marginal cost of recruiting a new member is probably among the lowest it will ever be.  Now, not in seventy years, is a great time to get in.

Furthermore, I speculate that the marginal value of a donation will grow at a rate higher than the interest rate, such that $5K/year for seventy years would buy more members for GWWC than a flat $3.7 million in seventy years would.

You Might Be Smarter In The Future

There's a good chance there might be better giving opportunities in seventy years, identified through our collective greater intelligence, your years of added experience, and the changing world (though GiveWell disagrees, as does GWWC).  This could mean it is better to give later, because you could be donating to an overall higher impact opportunity.

You Might Change Your Values

Some people are concerned that if they stored their donation money for seventy years, their future self might not care to donate it anymore.  Personally, given how I view personal identity, I'd rather do what my future self wants me to do: if my future self seventy years from now would wish I had been less altruistic, I'd want to be less altruistic now. So I don't see this as a concern.  But if you do, you should donate now, before your future self has a chance to redirect the money.

One way to protect from this might be to use a donor advised fund, which would also get the benefit of more charity deductions (since you can deduct each contribution to your fund) and your future wisdom while forcing your future self to donate.

Better Advocacy

It seems much easier to convince someone to give a little each year than to follow a solid savings plan -- as Scott Alexander states, there are some strange psychological effects that make giving now and giving regularly feel better, and we can secure the warm fuzzy feelings and social status of being considered a regular donor.

Moreover, if you convinced a friend to donate, you'd know much sooner whether they actually stick to it if you can see a donation they make in December, rather than having to wait until they die.  (Though I suppose this too could be solved by having them contribute to your donor advised fund, if they trust you a lot.  But I imagine the average person wouldn't.)

Thus even if privately you should save, I think the best strategy to adopt publicly (when not speaking from an analytical point-of-view) is to encourage others to donate now.

-

Also cross-posted on my blog, on Felicifia, and on the Effective Altruist blog.

Effective Altruism Through Advertising Vegetarianism?

18 peter_hurford 12 June 2013 06:50PM

Abstract: If you value the welfare of nonhuman animals from a consequentialist perspective, there is a lot of potential for reducing suffering by funding the persuasion of people to go vegetarian through online ads or pamphlets.  In this essay, I develop a calculator for people to come up with their own estimates, and I personally arrive at a cost-effectiveness estimate of $0.02 to $65.92 to avert a year of suffering in a factory farm.  I then discuss the methodological criticisms that merit skepticism of this estimate, and conclude by suggesting (1) a guarded approach of putting in just enough money to help the organizations learn, and (2) the need for more studies, with decent control groups, exploring the advertising of vegetarianism in a wide variety of media in a wide variety of ways.

-

Introduction

I start with the claim that it's good for people to eat less meat, whether they become vegetarian or, better yet, vegan, because this means fewer nonhuman animals are being painfully factory farmed.  I've defended this claim previously in my essay "Why Eat Less Meat?".  I recognize that some people, even those who consider themselves effective altruists, do not value the well-being of nonhuman animals.  For them, I hope this essay is interesting, but I admit it will be a lot less relevant.

The second idea is that it shouldn't matter who is eating less meat.  As long as less meat is being eaten, fewer animals will be farmed, and this is a good thing.  Therefore, we should try to get other people to eat less meat too.

The third idea is that it also doesn't matter who is doing the convincing.  Therefore, instead of convincing our own friends and family, we can pay other people to convince people to eat less meat.  And this is exactly what organizations like Vegan Outreach and The Humane League are doing.  With a certain amount of money, one can hire someone to distribute pamphlets to other people or put advertisements on the internet, and some percentage of people who receive the pamphlets or see the ads will go on to eat less meat.  This idea and the previous one should be uncontroversial for consequentialists.

But the fourth idea is the complication.  I want my philanthropic dollars to go as far as possible, so as to help as much as possible.  Therefore, it becomes very important to try and figure out how much money it takes to get people to eat less meat, so I can compare this to other estimations and see what gets me the best "bang for my buck".


Other Estimations

I have seen other estimates floating around the internet that try to estimate the cost of distributing pamphlets, how many conversions each pamphlet produces, and how much less meat is eaten via each conversion.  Brian Tomasik calculates $0.02 to $3.65 [PDF] per year of nonhuman animal suffering prevented, later $2.97 per year, and then later $0.55 to $3.65 per year.

Jess Whittlestone provides statistics that reveal an estimate of less than a penny per year[1]. 

Effective Animal Activism, a non-profit evaluator for animal welfare charities, came up with an estimate [Excel Document] of $0.04 to $16.60 per year of suffering averted, that also takes into account a variety of additional variables, like product elasticity.

Jeff Kaufman uses a different line of reasoning: by estimating how many vegetarians there are and guessing how many of them came via pamphlets, he estimates it would take $4.29 to $536 to make someone vegetarian for one year.  Extrapolating from that using a rate of 255 animals saved per year and a weighted average of 329.6 days lived per animal (see below for justification of both assumptions) would give $0.02 to $1.90 per year of suffering averted[2].

A third line of reasoning, also from Jeff Kaufman, was to measure the number of comments on the pro-vegetarian websites advertised in these campaigns; he found that 2-22% of them mentioned an intended behavior change (eating less meat, going vegetarian, or going vegan), depending on the website.  I don't think we can draw any conclusions from this, but it's interesting.

To make my calculations, I decided to make a calculator.  Unfortunately, I can't embed it here, so you'd have to open it in a new tab as a companion piece.

I'm going to start by using the following formula: Years of Suffering Averted per Dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Days lived / animal) / 365 (the final division converts days into years)

Now, to get estimations for these variables.


Pamphlets Per Dollar

How much does it cost to place the advertisement, whether it be the paper pamphlet or a Facebook advertisement?  Nick Cooney, head of the Humane League, says the cost-per-click of Facebook ads is 20 cents.

But what about the cost per pamphlet?  This is more of a guess, but I'm going to go with Vegan Outreach's suggested donation of $0.13 per "Compassionate Choices" booklet.

However, it's important to note that this cost must also include opportunity cost -- leafleters forgo the ability to use that time to work a job.  This means I must include an opportunity cost of, say, $8/hr on top of that, making the actual cost about $0.27 per pamphlet ($0.13 + $8/60, assuming a pamphlet is given out each minute of volunteer time), meaning 3.7 people are reached per dollar from pamphlets.  For Facebook advertisements, the opportunity cost is trivial.


Conversions Per Pamphlet

This is the estimate with the biggest target on its head, so to speak.  How many people actually change their behavior because of a simple pamphlet or Facebook advertisement?  Right now, we have two lines of evidence:

Facebook Study

The Humane League ran a $5,000 Facebook advertisement campaign, buying ads that sent people to websites (like this one or this one) with auto-playing videos showing the horrors of factory farming.

Afterward, another advertisement was run targeting people who "liked" the video page, offering a 1 in 10 chance of winning a free movie ticket for taking a survey.  Everyone who emailed in asking for a free vegetarian starter kit was also emailed a survey.  104 people took the survey: 32 reported being vegetarian[3], and 45 reported, for example, that their chicken consumption decreased "slightly" or "significantly".

7% of visitors liked the page and 1.5% of visitors ordered a starter kit.  Assuming everyone else walked away from the video without changing their consumption, this survey would (very tenuously) suggest that about 2.6% of people who see the video will become vegetarian[4].

(Here's the results of the survey in PDF.)
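The post doesn't show how the 2.6% figure was derived, but here is one reconstruction that matches it -- this is my guess at the arithmetic, not the authors':

    # Guessed reconstruction of the 2.6% figure -- not from the original post.
    liked, ordered_kit = 0.07, 0.015   # fractions of video viewers
    veg_rate = 32 / 104                # vegetarians among survey respondents
    print((liked + ordered_kit) * veg_rate)   # ≈ 0.026, i.e. about 2.6%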

Pamphlet Study

A second study discussed in "The Powerful Impact of College Leafleting (Part 1)" and "The Powerful Impact of College Leafleting: Additional Findings and Details (Part 2)" looked specifically at pamphlets.

Here, Humane League staff visited two large East Coast state schools and distributed leaflets.  They then returned two months later and surveyed people walking by, counting those who remembered receiving a leaflet.  They found that about 2% of those who received a pamphlet went vegetarian.

Vegetarian Years Per Conversion

But once a pamphlet or Facebook advertisement captures someone, how long will they stay vegetarian?  One survey showed vegetarians refrain from eating meat for an average of 6 years or more.  Another study I found says 93% of vegetarians stay vegetarian for at least three years.

 

Animals Saved Per Vegetarian Year

And once you have a vegetarian, how many animals do they save per year?  CountingAnimals says 406 animals saved per year.

The Humane League suggests 28 chickens, 2 egg industry hens, 1/8 beef cow, 1/2 pig, 1 turkey, and 1/30 dairy cow per year (total = 31.66 animals), and does not provide statistics on fish.  This agrees with CountingAnimals on non-fish totals.

Days Lived Per Animal

One problem, however, is that saving a cow that could suffer for years is different from saving a chicken that suffers for only about a month.  Using data from Farm Sanctuary plus World Society for the Protection of Animals data on fish [PDF], I get this table:

Animal           Number   Days Alive
Chicken (Meat)   28       42
Chicken (Egg)    2        365
Cow (Beef)       0.125    365
Cow (Milk)       0.033    1460
Fish             225      365
This makes the weighted average 329.6 days[5].
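As a check, here's that weighted average computed from the table above (my transcription):

    # (animals saved per veg-year, factory-farm days lived per animal)
    animals = [(28, 42), (2, 365), (0.125, 365), (0.033, 1460), (225, 365)]
    total_saved = sum(n for n, _ in animals)   # ≈ 255.16 animals per veg-year
    avg_days = sum(n * d for n, d in animals) / total_saved
    print(total_saved, avg_days)               # ≈ 255.16, ≈ 329.7 days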

 

Accounting For Biases

As I said before, our formula was: Years of Suffering Averted per Dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Days lived / animal) / 365.

Let's plug these values in... Years of Suffering Averted per Dollar = 5 * 0.02 * 3 * 255.16 * 329.6/365 = 69.12.

Or, assuming all this is right (and that's a big assumption), it would cost less than 2 cents to prevent a year of suffering on a factory farm by buying vegetarians.
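For concreteness, here's the simple calculation as a Python function -- my transcription of the formula above, not the actual calculator:

    def years_averted_per_dollar(pamphlets_per_dollar, conversions_per_pamphlet,
                                 veg_years_per_conversion, animals_per_veg_year,
                                 days_lived_per_animal):
        """Years of factory-farm suffering averted per dollar donated."""
        return (pamphlets_per_dollar * conversions_per_pamphlet
                * veg_years_per_conversion * animals_per_veg_year
                * days_lived_per_animal / 365)

    years = years_averted_per_dollar(5, 0.02, 3, 255.16, 329.6)
    print(years, 1 / years)   # ≈ 69.1 years per dollar, i.e. ≈ $0.014 per year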

I don't want to make it sound like I'm beholden to this cost estimate or that this estimate is the "end all, be all" of vegan outreach.  Indeed, I share many of the skepticisms that have been expressed by others.  The simple calculation is... well... simple, and it needs some "beefing up", no pun intended.  Therefore, I also built a "complex calculator" that works on a much more complex formula[6] that is hopefully correct[7] and will provide a more accurate estimation.

 

The big, big deal for the surveys is concern about bias.  The most frequently mentioned is social desirability bias: people saying they reduced their meat consumption just because they want to please the surveyor or look like a good person, which happens a lot more in surveys than we'd like.

To account for this, we'll have to figure out how inflated answers are because of this bias and then scale the answers down by that amount.  Nick Cooney says that he's been reading studies suggesting about 25% to 50% of people who say they are vegetarian actually are, though I don't yet have the citations.  Thus, if we find that an advertisement creates two meat reducers, we'd scale that down to one reducer if we're expecting a 50% desirability bias.

 

The second bias that will be a problem for us is non-response bias, as those who don't reduce their diet are less likely to take the survey and therefore less likely to be counted.  This is especially true in the Facebook study, which only measures people who "liked" or requested a starter kit, showing some pro-vegetarian affiliation.

We can balance this out by assuming everyone who didn't take the survey went on to have no behavior change whatsoever.  Nick Cooney's Facebook ad survey covers the 7% of people who liked the page (and then responded to the survey), and obviously those who liked the page are more likely to reduce their consumption.  I chose an optimistic value of 90% to consider the survey completely representative of the 7% who liked the page, and then a bit more for those who reduced their consumption but did not like the page.  My pessimistic value was 95%, assuming everyone who did not like the page went unchanged and assuming a small response bias among those who liked the page but chose not to take the survey.

For the pamphlets, however, there should be little response bias, since those surveyed were sampled randomly from the whole college population and no one was reported to have refused the survey.

 

Additional People Are Being Reached

In the Facebook survey, those who said they reduced their meat consumption were also asked whether they influenced any friends or family to also reduce their meat consumption; on average, each reported producing 0.86 additional reducers.

This figure seems very high, but I do strongly expect the figure to be positive -- people who reduce eating meat will talk about it sometimes, essentially becoming free advertisements.  I'd be very surprised if they ended up being a net negative.

 

Accounting for Product Elasticity

Another way to improve the accuracy of the estimate is to be more precise about what happens when someone stops eating meat.  The change comes not from the actual refusal to eat, but from the reduced demand for meat, which leads to reduced supply.  Following the laws of economics, however, this reduction won't necessarily be one-for-one; rather, it depends on the elasticities of demand and supply.  With this number, we can find out how much meat production falls for every unit of meat not demanded.
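The calculator handles this with per-product elasticities; as a simplified illustration of the idea, here's one common partial-equilibrium approximation from this literature -- not the calculator's exact formula, and the elasticity values are made up:

    # When consumers forgo one unit of meat, production falls by roughly
    # supply_elasticity / (supply_elasticity + |demand_elasticity|) units.
    def production_drop_per_unit(supply_elasticity, demand_elasticity):
        return supply_elasticity / (supply_elasticity + abs(demand_elasticity))

    print(production_drop_per_unit(0.3, -0.7))   # 0.3 -- illustrative values only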

My guesses in the calculator come from the following sources, some of which are PDFs: Beef #1, Beef #2, Dairy #1, Dairy #2, Pork #1, Pork #2, Egg #1, Egg #2, Poultry, Salmon, and for all fish.

 

Putting It All Together

Implementing the formula in the calculator, we end up with an estimate of $0.03 to $36.52 to prevent one year of suffering on a factory farm based on the Facebook ad data, and $0.02 to $65.92 based on the pamphlet data.

Of course, many people are skeptical of these figures.  Perhaps surprisingly, so am I.  I'm trying to strike a balance between advocating vegan outreach as a very promising path to making the world a better place and not losing sight of the methodological hurdles that have not yet been cleared, while staying open to the possibility that I'm wrong about this.

The big methodological elephant in the room is that my entire cost estimate depends on having a plausible guess for how likely someone is to change their behavior based on seeing an advertisement.

I feel slightly reassured because:

  1. There are two surveys for two different media, and they both provide estimates of impact that agree with each other.
  2. These estimates also match anecdotes from leafleters about approximately how many people come back and say they went vegetarian because of a pamphlet.
  3. Even if we were to take the simple calculator and drop the "2% chance of getting four years of vegetarianism" assumption down to, say, a pessimistic "0.1% chance of getting one year" conversion rate, the estimate is still not too bad -- $0.91 to avert a year of suffering.
  4. More studies are on the way.  Nick Cooney is going to run several more leaflet studies, and Xio Kikauka and Joey Savoie have publicly published some survey methodology [Google Docs].

That said, the possibility of desirability bias in the surveys remains a large concern as long as they continue to come from overt animal welfare groups and continue to clearly state that they're looking for reductions in meat consumption.

Also, so long as surveys are given only to people who remember the leaflet or advertisement, there will be a strong possibility of response bias, since those who remember the ad are more likely to be the ones who changed their behavior.  We can attempt to compensate for these things, but we can only do so much.

Furthermore, and more worryingly, there's a concern that the surveys are just measuring normal drift in vegetarianism, with no changes attributable to the ads themselves.  For example, imagine that every year 2% of people become vegetarians and 2% quit.  Surveying these people at random, without capturing those who quit, will find a 2% "conversion" rate even if the ads did nothing.

How can we address these?  I think all three problems can be solved with a decent control group, whether a group that receives a leaflet about something other than vegetarianism or one that receives no leaflet at all.  Luckily, Kikauka and Savoie's survey intends to do just that.
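A toy illustration of why the control group fixes the drift problem (all numbers hypothetical):

```python
# Suppose 2% of people become vegetarian each year regardless of ads,
# and suppose the ad actually does nothing.
baseline_drift = 0.02
true_ad_effect = 0.00

treated_rate = baseline_drift + true_ad_effect  # what the survey measures
control_rate = baseline_drift                   # what a control group shows

naive_estimate = treated_rate                      # 0.02 -- looks like success
controlled_estimate = treated_rate - control_rate  # 0.00 -- the truth

print(naive_estimate, controlled_estimate)
```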

Jeff Kaufman has a good proposal for a survey design I'd like to see implemented in this area.

 

Market Saturation and Diminishing Marginal Returns?

Another concern is that these ads face diminishing marginal returns.  As the critique goes, there are only so many people who will be easily swayed by the advertisements, and once all of them have been reached by Facebook ads and pamphlets, things will dry up.

Unlike the others, I don't think this criticism works well.  Even if it were true, it would still be worthwhile to take the market as far as it will go, and we can keep monitoring for saturation and find the point where the ads are no longer cost-effective.

However, I don't think the market has been tapped out yet at all.  According to Nick Cooney [PDF], there are still many opportunities in foreign markets and outside the young, college kid demographic.

 

The Conjunction Fallacy?

The conjunction fallacy reminds us that, no matter what, the chance of event A happening alone can never be smaller than the chance of event A and event B both happening.  For example, the probability that Linda is a bank teller will always be larger than (or equal to) the probability that Linda is a bank teller and a feminist.

What does this mean for vegetarian outreach?  Well, in the simple calculator we're estimating five factors; in the complex calculator, we're estimating 50.  Even if each factor is 99% likely to be correct, the chance that all five are right is 95%, and the chance that all 50 are right is only 60%.  If each factor is only 90% likely to be correct, the complex calculator will be right with a probability of 0.5%!
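The arithmetic, for anyone who wants to check it:

```python
# Chance that every independently estimated factor is correct,
# assuming (unrealistically) independent errors:
print(0.99 ** 5)   # ~0.951 -- five factors at 99% each
print(0.99 ** 50)  # ~0.605 -- fifty factors at 99% each
print(0.90 ** 50)  # ~0.005 -- fifty factors at 90% each
```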

This is a cause for concern, but I don't think there's any way around it; it's just an inherent problem with estimation.  Hopefully we'll be saved by (1) using optimistic and pessimistic bounds and (2) the tendency of underestimates and overestimates to cancel each other out.

 

Conversion and The 100 Yard Line

Something we should take into account that helps the case for this outreach rather than hurting it: conversions aren't binary.  An ad can push someone to be more likely to reduce their meat intake without fully converting them.  As Brian Tomasik puts it:

Yes, some of the people we convince were already on the border, but there might be lots of other people who get pushed further along and don’t get all the way to vegism by our influence. If we picture the path to vegism as a 100-yard line, then maybe we push everyone along by 20 yards. 1/5 of people cross the line, and this is what we see, but the other 4/5 get pushed closer too. (Obviously an overly simplistic model, but it illustrates the idea.)

This would be very difficult or outright impossible to capture in a survey, but it is something to take into account.
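Tomasik's toy model is easy to make concrete.  Under the (strong, purely illustrative) assumption that people start out uniformly distributed along the line, a 20-yard push converts exactly the fifth who were already closest while silently moving everyone else forward:

```python
import random

random.seed(0)
# Positions uniform on [0, 100); everyone gets pushed 20 yards.
people = [random.uniform(0, 100) for _ in range(100_000)]
pushed = [p + 20 for p in people]

converted = sum(p >= 100 for p in pushed) / len(pushed)
print(converted)  # ~0.2 -- the 1/5 a survey can see

# The other ~4/5 moved 20 yards closer to converting, but are
# invisible to any survey that only counts full conversions.
```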

 

Three Places I Might Donate Before Donating to Vegan Outreach

When all is said and done, I like the case for funding this outreach.  However, there are three other possibilities along these lines that I find more promising:

Funding research on vegan outreach: There need to be more and higher-quality studies before one can feel confident in the cost-effectiveness of this outreach.  Initial results are very promising, though, so the value of information from more studies is very high.  Studies can also find ways to advertise more effectively, increasing the impact of each dollar spent.  Right now it looks like all ongoing studies are fully funded, but if there were opportunities to fund more, I would jump on them.

Funding Effective Animal Activism: EAA is an organization pushing for more cost-effectiveness in the domain of nonhuman animal welfare and working to evaluate which opportunities are best, GiveWell-style.  Giving them money can attract more attention to this outreach, and more scrutiny, research, and funding down the line.

Funding the Centre for Effective Altruism: Overall, it might just be better to get more people involved in the idea of giving effectively, and then get them interested in vegan outreach, among other things.

 

Conclusion

Vegan outreach is a promising, though not fully studied, method of outreach that deserves both excitement and skepticism.  Should one put money into it?  Overall, I'd take the guarded approach of putting in just enough money to help the organizations learn, develop better cost-effectiveness measurements and transparency, and become more effective.  It shouldn't be too long before this area is studied well enough for us to have good confidence in how things are doing.

More studies should be developed that explore advertising vegetarianism in a wide variety of media in a wide variety of ways, with decent control groups.

I look forward to seeing how this develops.  Don't forget to play around with my calculator.

-

 

Footnotes

[1]: Cost effectiveness in years of suffering prevented per dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Years lived / animal).

Plugging in 80K's values... Cost effectiveness = (Pamphlets / dollar) * 0.01 to 0.03 * 25 * 100 * (Years lived / animal)

Filling in the gaps with my best guesses... Cost effectiveness = 5 * 0.01 to 0.03 * 25 * 100 * 0.90 = 112.5 to 337.5 years of suffering averted per dollar
I personally think 25 veg-years per conversion on average is possible but too high; I'd err toward 4 to 7.
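For convenience, here's the simple calculator's formula as runnable Python, reproducing the 112.5 to 337.5 range above:

```python
def suffering_years_per_dollar(pamphlets_per_dollar,
                               conversions_per_pamphlet,
                               veg_years_per_conversion,
                               animals_per_veg_year,
                               years_lived_per_animal):
    # Footnote [1]'s chain of factors, multiplied straight through.
    return (pamphlets_per_dollar * conversions_per_pamphlet *
            veg_years_per_conversion * animals_per_veg_year *
            years_lived_per_animal)

low = suffering_years_per_dollar(5, 0.01, 25, 100, 0.90)   # 112.5
high = suffering_years_per_dollar(5, 0.03, 25, 100, 0.90)  # 337.5
print(low, high)
```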
[2]: I feel like there's an error in this calculation, or that Kaufman might disagree with my assumptions about the number of animals or days per animal, because I've been told that estimates made with this method are supposed to be about an order of magnitude higher than other estimates.  However, I emailed Kaufman and he didn't find any fault with the calculation, though he does think the methodology is bad and that the calculation should not be taken at face value.
[3]: I calculated the number of vegetarians by eyeballing about how many people said they no longer eat fish, which I'd guess only a vegetarian would be willing to give up.
[4]: 32 vegetarians / 104 people = 30.7%.  That population is 8.5% (7% for likes + 1.5% for the starter kit) of the overall population, leading to 2.61% (30.7% * 8.5%).
[5]: Formula is [(Number Meat Chickens)(Days Alive) + (Number Egg Chickens)(Days Alive) + (Number Beef Cows)(Days Alive) + (Number Milk Cows)(Days Alive) + (Number Fish)(Days Alive)] / (Total Number Animals).  Plugging things in: [(28)(42) + (2)(365) + (0.125)(365) + (0.033)(1460) + (225)(365)] / 255.16 = 329.6 days
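Or, as a quick script:

```python
# Counts per veg-year and days alive per animal, from the formula above.
animals = {
    "meat chickens": (28, 42),
    "egg chickens":  (2, 365),
    "beef cows":     (0.125, 365),
    "milk cows":     (0.033, 1460),
    "fish":          (225, 365),
}
total_days = sum(count * days for count, days in animals.values())
total_animals = sum(count for count, _ in animals.values())
print(total_days / total_animals)  # ~329.7 days (the 329.6 above, modulo rounding)
```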

[6]:
Cost effectiveness in days of suffering prevented per dollar = (People Reached / Dollar + (People Reached / Dollar * Additional People Reached / Direct Reach * Response Bias * Desirability Bias)) * Years Spent Reducing * [the sum, over the eight products -- beef, dairy, pig, broiler chicken, egg, turkey, farmed fish, and sea fish -- of ((Percent Increasing * Increase Value) + (Percent Staying Same * Staying Same Value) + (Percent Decreasing Slightly * Decrease Slightly Value) + (Percent Decreasing Significantly * Decrease Significantly Value) + (Percent Eliminating * Elimination Value) + (Percent Never Ate * Never Ate Value)) * Normal Consumption * Elasticity * (Average Lifespan + Days of Suffering from Slaughter)] * Response Bias * Desirability Bias

Each product takes its own percentages, consumption, elasticity, lifespan, and slaughter-suffering values; the sea fish term has no Average Lifespan component, only Days of Suffering from Sea Fish Slaughter.
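Since that formula repeats the same block once per product, here's a sketch of how one might implement it compactly.  The structure is my reading of the formula, and the change-value weights are placeholders, not the calculator's actual inputs:

```python
# Placeholder weights standing in for "Increase Value",
# "Elimination Value", etc.; the real calculator supplies these.
CHANGE_VALUES = {
    "increase": -0.2, "same": 0.0, "decrease_slightly": 0.2,
    "decrease_significantly": 0.5, "eliminate": 1.0, "never_ate": 0.0,
}

def days_prevented_per_dollar(reach_per_dollar, extra_per_direct,
                              years_reducing, response_keep,
                              desirability_keep, products):
    """Direct reach plus bias-discounted word-of-mouth reach, times the
    per-product sum of (weighted change value * consumption * elasticity
    * suffering days), times the bias discounts again.  response_keep
    and desirability_keep are the fractions of the reported effect
    retained after each bias correction."""
    people = reach_per_dollar * (
        1 + extra_per_direct * response_keep * desirability_keep)
    per_product_days = sum(
        sum(p["shares"][k] * CHANGE_VALUES[k] for k in CHANGE_VALUES)
        * p["consumption"] * p["elasticity"]
        * (p["lifespan_days"] + p["slaughter_days"])  # lifespan_days = 0 for sea fish
        for p in products)
    return (people * years_reducing * per_product_days
            * response_keep * desirability_keep)
```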
[7]: Feel free to check the formula for accuracy and also check to make sure the calculator implements the formula correctly.  I worry that the added accuracy from the complex calculator is outweighed by the risk that the formula is wrong.

-

Edited 18 June to correct two typos and update footnote #2.

Also cross-posted on my blog.

Why Don't People Help Others More?

34 peter_hurford 13 August 2012 11:34PM

As Peter Singer writes in his book The Life You Can Save: "[t]he world would be a much simpler place if one could bring about social change merely by making a logically consistent moral argument". Many people one encounters might agree that a social change movement is noble yet not want to do anything to promote it, or want to give more money to a charity yet refrain from doing so. Additional moralizing doesn't seem to do the trick. ...So what does?

Motivating people toward altruism is relevant to the optimal philanthropy movement.  For a start on the answer, as with many things, I turn to psychology -- specifically, the psychology Peter Singer catalogues in his book.

 

A Single, Identifiable Victim

One of the most well-known motivators for helping others is a personal connection, which triggers empathy. When psychologists researching generosity paid participants to join an experiment and later gave them the opportunity to donate to Save the Children, a global poverty-fighting organization, different random groups were given different kinds of information.

One random group of participants was told that "Food shortages in Malawi are affecting more than three million children", along with additional information about how strong the need for donations was and how those donations could help stop the food shortages.

Another random group of participants was instead shown the photo of Rokia, a seven-year-old Malawian girl who is desperately poor. These participants were told that "her life will be changed for the better by your gift".

A third random group of participants was shown the photo of Rokia, told who she is and that "her life will be changed for the better", but ALSO given the general information about the famine, including that "food shortages [...] are affecting more than three million" -- a combination of the two previous groups.

Lastly, a fourth random group was shown the photo of Rokia and informed about her just as the other groups were, then given information about another child, identified by name, and told that their donation would change this child's life for the better as well.


It's All About the Person

Interestingly, the group told ONLY about Rokia gave the most money. The group told about both children reported feeling less overall emotion than those who only saw Rokia, and gave less money. The group told about both Rokia and the general famine gave even less than that, followed by the group that got only the general famine information.1,2  It turns out that information about a single person was the most salient for creating an empathetic response and a willingness to donate.

This pattern continues through additional studies. In another generosity experiment, one group of people was told that a single child needed a lifesaving medical treatment costing $300K and was given the opportunity to contribute toward this fund. A second random group was told that eight children would all die unless $300K could be provided for their lifesaving treatment, and was given the same opportunity. More people opted to donate toward the single child.3,4

This is the basis for why we're so willing to chase after lost miners or Baby Jessica no matter the monetary cost, but turn a blind eye to the anonymous masses starving in the developing world.  Indeed, the person doesn't even need to be particularly identified, though it helps. In another experiment, people asked by researchers to make a donation to Habitat for Humanity were more likely to do so if told that the family "has been selected" rather than that it "will be selected" -- even though all other parts of the pitch were the same, and the participants got no information about who the families actually were.5


The Deliberative and The Affective

Why is this the case? Researcher Paul Slovic thinks that humans have two different processes for deciding what to do. The first is an affective system that responds to emotion, rapidly processing images and stories and generating an intuitive feeling that leads to immediate action. The second is a deliberative system that draws on reasoning, and operates on words, numbers, and abstractions, which is much slower to generate action.6

To follow up, the Rokia experiment was run again with yet another twist. There were two groups: one told only about Rokia exactly as before, and one told only the generic famine information exactly as before. Half of each group took a survey designed to arouse their emotions by asking things like "When you hear the word 'baby', how do you feel?" The other half of each group was given emotionally neutral questions, like math puzzles.

This time, the Rokia group again gave far more, and within it, those whose emotions had been randomly aroused gave even more than those who had done the math problems. Those who heard the generic famine information, by contrast, showed no increase in donations regardless of how heightened their emotions were.1

 

Futility and Making a Difference

Imagine you're told that there are 3000 refugees at risk in a camp in Rwanda, and you could donate towards aid that would save 1500 of them. Would you do it? And how much would you donate?

Now imagine that you can still save 1500 refugees with the same amount of money, but the camp has 10000 refugees. In an experiment where these two scenarios were presented not as thought experiments but as realities to two separate random groups, the group that heard of only 3000 refugees was more likely to donate, and donated larger amounts.7,8

Enter another quirk of our giving psychology, right or wrong: futility thinking. We think that if we're not making a sizable difference, it's not worth making any difference at all -- it will be only a drop in the ocean while the problem keeps raging on.

 

Am I Responsible?

People are also far less likely to help when they're with other people. In one experiment, students were invited to participate in a market research survey. After the researcher gave the students their questionnaires to fill out, she went into a back room separated from the office only by a curtain. A few minutes later, noises strongly suggested that she had climbed on a chair to get something from a high shelf and then fallen off it, and she loudly complained that she couldn't feel or move her foot.

With only one student taking the survey, 70% stopped what they were doing and offered assistance. With two students taking the survey, this number dropped dramatically. Most strikingly, when the group was two students but one was a stooge who was in on the experiment and never responded, the response rate of the real participant was only 7%.9

This is known as diffusion of responsibility, better known as the bystander effect: we help more often when we think it is our responsibility to do so, and -- again for right or for wrong -- we naturally look to others to see whether they're helping before doing so ourselves.

 

What's Fair In Help?

It's clear that people value fairness, even to their own detriment. In a game called the "Ultimatum Game", one participant is given a sum of money by the researcher, say $10, and told they can split it with an anonymous second player in any proportion they choose -- give them $10, give them $7, give them $5, give them nothing; everything is fair game. The catch is that the second player, after hearing of the split anonymously, gets to vote to accept or reject it. Should the split be accepted, both players walk away with the agreed amounts. Should it be rejected, both players walk away with nothing.


A Fair Split

The economist, expecting ideally rational and perfectly self-interested players, predicts that the second player will accept any split that gets them money, since anything is better than nothing. And the first player, understanding this, will naturally offer $1 and keep $9 for himself. At no point are identities revealed, so reputation and retribution are not at issue.

But the results turn out quite different: the vast majority offer an equal split, yet when an offer of $2 or less comes around, it is almost always rejected, even though $2 is better than nothing.10  This effect persists even when the game is played for thousands of dollars, and it persists across nearly all cultures.


Splitting and Anchoring in Charity

This sense of fairness carries over into helping as well. People generally have a strong tendency not to help more than the people around them, and if they find themselves the only ones helping on a frequent basis, they start to feel like a "sucker". On the flipside, if others are doing more, they will follow suit.11,12,13

Those told the average donation to a charity nearly always tend to give that amount, even if the average they're told is a lie that has secretly been increased or decreased. The effect can be replicated without lying -- those told about an above-average gift were far more likely to donate more, even attempting to match that gift.14,15  Overall, we tend to match the behavior of our reference class -- the people we identify with -- and this includes how much we help. We donate more when we believe others are donating more, and donate less when we believe others are donating less.

 

Challenging the Self-Interest Norm

But there's a way to break this cycle of futility, diffused responsibility, and fairness: challenge the norm by openly communicating about helping others. While many religious and secular values insist that the best giving is anonymous giving, this turns out not always to be the case. There may be other reasons to give anonymously, but don't forget the benefits of giving openly -- being open about helping inspires others to help and can challenge the norms of the culture.

Indeed, many organizations now exist to challenge the norms around donations and to create a culture where people give more. Giving What We Can is a community of 230 people (including me!) who have all pledged to donate at least 10% of their income to organizations working on ending extreme poverty, and who submit statements proving it. Bolder Giving has a bunch of inspiring stories of over 100 people who all give at least 20% of their income, with a dozen giving over 90%! And these aren't all rich people; some of them are ordinary students.


Who's Willing to Be Altruistic?

While people are not saints, experiments have shown that people tend to grossly overestimate how self-interested other people are. For one example, people estimated that males would overwhelmingly favor a piece of legislation to "slash research funding to a disease that affects only women", even though the male respondents themselves did not support such legislation.16

This also manifests itself as an expectation that people be "self-interested" in their philanthropic causes: participants expressed much stronger support for volunteers in Students Against Drunk Driving who themselves knew people killed in drunk driving accidents than for volunteers who had no such personal experience but simply thought it "a very important cause".17

Alexis de Tocqueville, echoing the economists who expected $9/$1 splits in the Ultimatum Game, wrote in 1835 that "Americans enjoy explaining almost every act of their lives on the principle of self-interest".18  But this isn't always the case, and by challenging the norm, people make it more acceptable to be altruistic. It's not just for "goody two-shoes", and it's praiseworthy to be "too charitable".

 

A Bit of a Nudge

A somewhat pressing problem in getting people to help was organ donation -- surely no one is inconvenienced by having their organs donated after they have died. So why would people not sign up?  And how could we get more people to sign up?

In Germany, only 12% of the population are registered organ donors. In nearby Austria, that number is 99.98%. Are people in Austria just less worried about what will happen to them after they die, or just that much more altruistic? It turns out the answer is far simpler: in Germany you must put yourself on the register to become a potential donor (opt-in), whereas in Austria you are a potential donor unless you object (opt-out). While people may be, for right or for wrong, worried about the fate of their body after death, they appear less likely to act on these reservations in opt-out systems.19

Richard Thaler and Cass Sunstein argue in their book Nudge: Improving Decisions About Health, Wealth, and Happiness that we sometimes suck at making decisions in our own interest and could all do better with more favorable "defaults"; such defaults matter just as much for getting people to help others.

While opt-out organ donation is a huge deal, there's another similar idea: opt-out philanthropy. Before 2008, when the investment bank Bear Stearns still existed, it listed philanthropy as a guiding principle, fostering good citizenship and well-rounded individuals. To this end, it required its 1,000 highest-paid employees to donate 4% of their salaries and bonuses to non-profits, and to prove it with their tax returns. This resulted in more than $45 million in donations during 2006. Many employees described the requirement as "getting themselves to do what they wanted to do anyway".

 

Conclusions

So, according to this bit of psychology, what could we do to get other people to help more, besides moralize? We have five key takeaways:

(1) present these people with a single and highly identifiable victim that they can help
(2) nudge them with a default of opt-out philanthropy
(3) be more open about our willingness to be altruistic and encourage other people to help
(4) make sure people understand the average level of helping around them, and
(5) instill a responsibility to help and an understanding that doing so is not futile.

Hopefully, with these tips and more, helping people more can become just one of those things we do.

 

References

(Note: Links are to PDF files.)

1: D. A. Small, G. Loewenstein, and P. Slovic. 2007. "Sympathy and Callousness: The Impact of Deliberative Thought on Donations to Identifiable and Statistical Victims". Organizational Behavior and Human Decision Processes 102: p143-53

2: Paul Slovic. 2007. "If I Look at the Mass I Will Never Act: Psychic Numbing and Genocide". Judgment and Decision Making 2(2): p79-95.

3: T. Kogut and I. Ritov. 2005. "The 'Identified Victim' Effect: An Identified Group, or Just a Single Individual?". Journal of Behavioral Decision Making 18: p157-67.

4: T. Kogut and I. Ritov. 2005. "The Singularity of Identified Victims in Separate and Joint Evaluations". Organizational Behavior and Human Decision Processes 97: p106-116.

5: D. A. Small and G. Loewenstein. 2003. "Helping the Victim or Helping a Victim: Altruism and Identifiability". Journal of Risk and Uncertainty 26(1): p5-16.

6: Singer cites this from Paul Slovic, who in turn cites it from: Seymour Epstein. 1994. "Integration of the Cognitive and the Psychodynamic Unconscious". American Psychologist 49: p709-24.  Slovic refers to the affective system as "experiential" and the deliberative system as "analytic".  This is also related to Daniel Kahneman's popular book Thinking Fast and Slow.

7: D. Fetherstonhaugh, P. Slovic, S. M. Johnson, and J. Friedrich. 1997. "Insensitivity to the Value of Human Life: A Study of Psychophysical Numbing".  Journal of Risk and Uncertainty 14: p283-300.

8: Daniel Kahneman and Amos Tversky. 1979. "Prospect Theory: An Analysis of Decision Under Risk." Econometrica 47: p263-91.

9: Bibb Latané and John Darley. 1970. The Unresponsive Bystander: Why Doesn't He Help?. New York: Appleton-Century-Crofts, p58.

10: Martin Nowak, Karen Page, and Karl Sigmund. 2000. "Fairness Versus Reason in the Ultimatum Game". Science 289: p1773-75.

11: Lee Ross and Richard E. Nisbett. 1991. The Person and the Situation: Perspectives of Social Psychology. Philadelphia: Temple University Press, p27-46.

12: Robert Cialdini. 2001. Influence: Science and Practice, 4th Edition. Boston: Allyn and Bacon.

13: Judith Lichtenberg. 2004. "Absence and the Unfond Heart: Why People Are Less Giving Than They Might Be". in Deen Chatterjee, ed. The Ethics of Assistance: Morality and the Distant Needy. Cambridge, UK: Cambridge University Press.

14: Jen Shang and Rachel Croson. Forthcoming. "Field Experiments in Charitable Contribution: The Impact of Social Influence on the Voluntary Provision of Public Goods". The Economic Journal.

15: Rachel Croson and Jen Shang. 2008. "The Impact of Downward Social Information on Contribution Decision". Experimental Economics 11: p221-33.

16: Dale Miller. 1999. "The Norm of Self-Interest". American Psychologist 54: p1053-60.

17: Rebecca Ratner and Jennifer Clarke. Unpublished. "Negativity Conveyed to Social Actors Who Lack a Personal Connection to the Cause".

18: Alexis de Tocqueville in J.P. Mayer ed., G. Lawrence, trans. 1969. Democracy in America. Garden City, N.Y.: Anchor, p546.

19: Eric Johnson and Daniel Goldstein. 2003. "Do Defaults Save Lives?". Science 302: p1338-39.

 

(This is an updated version of an earlier draft from my blog.)

The Gift I Give Tomorrow

26 Raemon 11 January 2012 04:02AM

 

This is the final post in my Ritual Mini-Sequence. Previous posts include the Introduction, a discussion on the Value (and Danger) of Ritual, and How to Design Ritual Ceremonies that reflect your values.

 

I wrote this as a concluding essay in the Solstice ritual book. It was intended to be at least comprehensible to people who weren’t already familiar with our memes, and to communicate why I thought this was important. It builds upon themes from the ritual book, and in particular, the readings of Beyond the Reach of God and The Gift We Give to Tomorrow. Working on this essay was transformative for me - it allowed me to finally bypass my scope insensitivity and other biases, so that I could evaluate organizations like the Singularity Institute with fairness. I haven’t yet decided what to do with my charitable dollars - it’s a complex problem. But I’ve overcome my emotional resistance to the idea of fighting X-Risk.

 

I don’t know if that was due to the words themselves, or to the process I had to go through to write them, but I hope others may benefit from this.

 


 

I thought ‘The Gift We Give to Tomorrow’ was incredibly beautiful when I first read it. I actually cried. I wanted to share it with friends and family, except that the work ONLY has meaning in the context of the Sequences. Practically every line is a hyperlink to an important, earlier point, and without many hours of previous reading, it just won’t have the impact. But to me, it felt like the perfect endcap to everything the Sequences covered, taking all of the facts and ideas and weaving them into a coherent, poetic narrative that left me feeling satisfied with my place in the world.


Except that... I wasn’t sure that it actually said anything.

continue reading »

Help Fund Lukeprog at SIAI

40 Eliezer_Yudkowsky 24 August 2011 07:16AM

Singularity Institute desperately needs someone who is not me who can write cognitive-science-based material. Someone smart, energetic, able to speak to popular audiences, and with an excellent command of the science. If you’ve been reading Less Wrong for the last few months, you probably just thought the same thing I did: “SIAI should hire Lukeprog!” To support Luke Muehlhauser becoming a full-time Singularity Institute employee, please donate and mention Luke (e.g. “Yay for Luke!”) in the check memo or the comment field of your donation - or if you donate by a method that doesn’t allow you to leave a comment, tell Louie Helm (louie@intelligence.org) your donation was to help fund Luke.

Note that the Summer Challenge that doubles all donations will run until August 31st. (We're currently at $31,000 of $125,000.)

continue reading »

Optimal Philanthropy for Human Beings

36 lukeprog 25 July 2011 07:27AM

Summary: The psychology of charitable giving offers three pieces of advice to those who want to give charity and those who want to receive it: Enjoy the happiness that giving brings, commit future income, and realize that requesting time increases the odds of getting money.

One Saturday morning in 2009, an unknown couple walked into a diner, ate their breakfast, and paid their tab. They also paid the tab for some strangers at another table. 

And for the next five hours, dozens of customers got into the joy of giving and paid the favor forward.

This may sound like a movie, but it really happened.

But was it a fluke? Is the much-discussed link between happiness and charity real, or is it one of the 50 Great Myths of Popular Psychology invented to sell books that compete with The Secret?

Several studies suggest that giving does bring happiness. One study found that asking people to commit random acts of kindness can increase their happiness for weeks.1 And at the neurological level, giving money to charity activates the reward centers of the brain, the same ones activated by everything from cocaine to great art to an attractive face.2

Another study randomly assigned participants to spend money either on themselves or on others. As predicted, those who spent money helping others were happier at the end of the day.3

Other studies confirm that just as giving brings happiness, happiness brings giving. A 1972 study showed that people are more likely to help others if they have recently been put in a good mood by receiving a cookie or finding a dime left in a payphone.4 People are also more likely to help after they read something pleasant,5 or when they are made to feel competent at something.6

In fact, deriving happiness from giving may be a human universal.7 Data from 136 countries shows that spending money to help others is correlated with happiness.8

But correlation does not imply causation. To test for causation, researchers randomly assigned participants from two very different cultures (Canada and Uganda) to write about a time when they had spent money on themselves (personal spending) or others (prosocial spending). Participants were asked to report their happiness levels before and after the writing exercise. As predicted, those who wrote (and thought) about a time when they had engaged in prosocial spending saw greater increases in happiness than those who wrote about a time when they spent money on themselves.

So does happiness run in a circular motion?

This, too, has been tested. In one study,9 researchers asked each subject to describe the last time they spent either $20 or $100 on themselves or on someone else. Next, researchers had each participant report their level of happiness, and then predict which future spending behavior ($5 or $20, on themselves or others) would make them happiest.

Subjects assigned to recall prosocial spending reported being happier than those assigned to recall personal spending. Moreover, this reported happiness predicted the future spending choice, but neither the purchase amount nor the purchasing target (oneself or others) did. So happiness and giving do seem to reinforce each other.

So, should charities remind people that donating will make them happy?

This, alas, has not been tested. But for now we might guess that just as people generally do things they believe will make them happier, they will probably give more if persuaded by the (ample) evidence that generosity brings happiness.

Lessons for optimal philanthropists: Read the studies showing that giving brings happiness. (Check the footnotes below.) Pick out an optimal charity in advance, notice when you're happy, and decide to give them money right then.

Lessons for optimal charities: Teach your donors how to be happy. Remind them that generosity begets happiness.

continue reading »

Safety Culture and the Marginal Effect of a Dollar

23 jimrandomh 09 June 2011 03:59AM

We spent an evening at last week's Rationality Minicamp brainstorming strategies for reducing existential risk from Unfriendly AI, and for estimating their marginal benefit-per-dollar. To summarize the issue briefly, there is a lot of research into artificial general intelligence (AGI) going on, but very few AI researchers take safety seriously; if someone succeeds in making an AGI, but they don't take safety seriously or they aren't careful enough, then it might become very powerful very quickly and be a threat to humanity. The best way to prevent this from happening is to promote a safety culture - that is, to convince as many artificial intelligence researchers as possible to think about safety so that if they make a breakthrough, they won't do something stupid.

We came up with a concrete (albeit greatly oversimplified) model which suggests that the marginal reduction in existential risk per dollar, when pursuing this strategy, is extremely high. The model is this: assume that if an AI is created, it's because one researcher, chosen at random from the pool of all researchers, has the key insight; and humanity survives if and only if that researcher is careful and takes safety seriously. In this model, the goal is to convince as many researchers as possible to take safety seriously. So the question is: how many researchers can we convince, per dollar? Some people are very easy to convince - some blog posts are enough. Those people are convinced already. Some people are very hard to convince - they won't take safety seriously unless someone who really cares about it will be their friend for years. In between, there are a lot of people who are currently unconvinced, but would be convinced if there were lots of good research papers about safety in machine learning and computer science journals, by lots of different authors.

Right now, those articles don't exist; we need to write them. And it turns out that neither the Singularity Institute nor any other organization has the resources - staff, expertise, and money to hire grad students - to produce very much research or to substantially alter the research culture. We are very far from the realm of diminishing returns. Let's make this model quantitative.

Let A be the probability that an AI will be created; let R be the fraction of researchers that would be convinced to take safety seriously if there were 100 good papers about it in the right journals; and let C be the cost of one really good research paper. Then the marginal reduction in existential risk per dollar is A*R/(100*C). The total cost of a grad-student year (including recruiting, management, and other expenses) is about $100k. Estimate a 10% current AI risk, and estimate that 30% of researchers currently don't take safety seriously but would be convinced. That gives us a marginal existential risk reduction per dollar of 0.1*0.3/(100*100k) = 3*10^-9. Counting only the ~7 billion people alive today, and none of the people who will be born in the future, this comes to roughly twenty expected lives saved per dollar.
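Here's the same arithmetic as a script, using the post's own estimates:

```python
A = 0.10          # probability an AGI gets created
R = 0.30          # fraction of researchers 100 good papers would convince
C = 100_000       # dollars per really good paper (one grad-student-year)
population = 7e9  # people alive today

risk_reduction_per_dollar = A * R / (100 * C)
lives_per_dollar = risk_reduction_per_dollar * population

print(risk_reduction_per_dollar)  # 3e-09
print(lives_per_dollar)           # ~21
```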

That's huge. Enormous. So enormous that I'm instantly suspicious of the model, actually, so let's take note of some of the things it leaves out. First, the "one researcher at random determines the fate of humanity" part glosses over the fact that research is done in groups; but it's not clear whether adding in this detail should make us adjust the estimate up or down. It ignores all the time we have between now and the creation of the first AI, during which a safety culture might arise without intervention; but it's also easier to influence the culture now, while the field is still young, rather than later. In order for promoting AI research safety to not be an extraordinarily good deal for philanthropists, there would have to be at least an additional 10^3 penalty somewhere, and I can't find one.

As a result of this calculation, I will be thinking and writing about AI safety, attempting to convince others of its importance, and, in the moderately probable event that I become very rich, donating money to the SIAI so that they can pay others to do the same.

Singularity Institute featured on Philanthroper

12 Louie 01 April 2011 05:46AM

Singularity Institute is today's featured charity on Philanthroper.com

Philanthroper is a micro-giving site that profiles small charities hand-selected by their editors. Their site encourages donors to “give every day” by only requesting $1 contributions.

A group of Singularity Institute donors has stepped forward to match all donations given through Philanthroper today so I'd encourage each of you to give $1 now if you support Singularity Institute and have a US-based bank account (Philanthroper requirement). We'd like to have a healthy total raised by the end of the day. The fundraiser has already been featured on Gizmodo but please submit it to other news sites if you can.

I signed up and gave my $1.

View more: Next