(Cross-posted in The Life You Can Save blog, the Intentional Insights blog, and the EA Forum)

Maximizing Donations to Effective Charities

Image credit: https://www.flickr.com/photos/61423903@N06/7382239368

Don’t you want your charitable efforts to do the utmost good they can in the world? Imagine how great it feels to know that you’re making the most difference with your gift. Yet how do you figure out how to bring about this outcome? Maximizing the impact of your money requires being intentional and strategic with your giving.

Let me share my personal perspective on giving intentionally. I am really passionate about using an evidence-based approach to do the most good with my donations. I take the time to research causes so that my money and time go to the best charities possible. Moreover, I have taken the Giving What We Can and The Life You Can Save pledges to dedicate a sizable chunk of my income to effective charities. It felt great to take those pledges, and to commit publicly to effective giving throughout my life.

I am proud to identify as an effective altruist: a member of a movement dedicated to combining the heart and the head, using research and science to channel my emotional desire to make the world a better place. I pay close attention to data-driven charity evaluators such as GiveWell, whose guidance the large majority of effective altruists closely follow. GiveWell focuses on charities that work directly to improve human life and alleviate suffering and that have a clearly measurable impact. One example is the Against Malaria Foundation (AMF), one of The Life You Can Save's recommended charities and one of four of GiveWell’s top choice charities for 2015.

Yet while I give to AMF, it and other highly effective charities represent only a small fraction of my donated time and money. This might sound surprising coming from an effective altruist. Why don’t I conform to the standard practice of most effective altruists and donate all of my money and time to these effective, research-based, well-proven charities?

First, let me say that I agree with most effective altruists that reducing poverty via highly effective charities that work directly on poverty alleviation is very worthwhile, and I do make donations to highly effective charities. In fact, this morning I donated enough money for AMF to buy two mosquito bed nets to protect families from malaria-carrying mosquitoes. I certainly got positive feelings from knowing that my gift will go directly toward saving lives and have a very clear and measurable impact on the world.

Yet when I make large or systematic contributions of money and time, effective charity is not where I give. I don't think donating to these direct-action charities is the best use of my own money and time. After all, my goal is to save lives and maximize cash flow to effective charities, whether or not I’m personally giving money to effective organizations. To evaluate the impulses coming from my heart to ensure that my actions match my actual goals, I take the time to sit down and consult my head.

(Image credit: http://stephenpoff.com/blog/wp-content/uploads/2010/01/Hamlet-Skull-profile-lo-res.jpg)

I use rational decision-making strategies such as a multi-attribute utility theory (MAUT) analysis to evaluate where my giving would make the most difference in getting resources to effective charities. As a result, I have spent the majority of my money and time on higher-level, strategic giving that channels other people’s donations towards more effective charities, facilitating many more donations to them than I alone could provide.
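For readers unfamiliar with MAUT, the core of such an analysis is just a weighted sum of scores. Here is a minimal sketch; all of the options, attribute weights, and scores below are hypothetical placeholders for illustration, not my actual figures.

```python
# Minimal weighted-sum MAUT sketch. Every option, attribute, weight,
# and score here is a hypothetical placeholder, not real data.

# Attribute weights (summing to 1.0): how much each criterion matters to you.
weights = {"expected_impact": 0.5, "evidence_strength": 0.3, "leverage": 0.2}

# Score each giving option on every attribute, normalized to a 0-1 scale.
options = {
    "direct charity": {"expected_impact": 0.7, "evidence_strength": 0.9, "leverage": 0.3},
    "meta-charity":   {"expected_impact": 0.8, "evidence_strength": 0.5, "leverage": 0.9},
}

def utility(scores, weights):
    """Weighted sum of attribute scores: the core of a simple MAUT model."""
    return sum(weights[attr] * scores[attr] for attr in weights)

# Rank the options by overall utility.
for name, scores in sorted(options.items(), key=lambda kv: -utility(kv[1], weights)):
    print(f"{name}: {utility(scores, weights):.2f}")
```

The point of writing the numbers down is not false precision; it is to make explicit which criteria you are weighing and how heavily, so that your final choice matches your stated goals rather than an unexamined impulse.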

This is why I am passionate about contributing to the kind of projects that spread the message of effective giving widely. Doing so doesn’t necessarily involve getting other people to become part of the effective altruist movement. Instead, it prompts them to adopt effective giving strategies such as taking the time to list their goals for giving, consider their giving budget, research the best charities, and use data-based charity evaluators to choose the charity that best matches their giving goals.

What does spreading these messages involve? Since there are few organizations devoted to spreading effective giving strategies to a broad audience, I decided to practice charity entrepreneurship. Together with my wife, I co-founded a 501(c)(3) non-profit organization devoted to spreading effective giving and rational thinking strategies, Intentional Insights (InIn). InIn creates content for this purpose, for example this article about how I became an effective altruist in the first place. Such articles, published on the organization’s own blog and in places such as The Huffington Post and Lifehack, are shared widely online and reach hundreds of thousands of readers. For example, a recent Lifehack article I published was shared over 2,500 times on social media. A widely-used general estimate is that for every 1 person who shares an article, 100 people read an article thoroughly, while many more skim it. Plenty of people follow the links back to the organizations mentioned in the articles I publish, including organizations devoted to effective giving when the article deals with that topic.

For some, a single article that makes a strong enough case is sufficient to sway their thinking. For example, I published an article in The Huffington Post that combines an engaging narrative, emotions, and a rational argument to promote giving to effective charities as opposed to ineffective ones. This article explicitly highlighted the benefits of the Against Malaria Foundation, GiveWell's top choice for 2015. On a higher, meta level, it encouraged giving to effective charities and using GiveWell and The Life You Can Save, including its Impact Calculator, to make decisions about giving. I also want to thank Elo and others who gave suggestions for improving this specific article and my writing in the future.

Despite these opportunities for improvement, as you'll see from this Facebook comment on my personal page, it helped convince someone to decide to donate to an effective charity. Furthermore, this comment is from someone who leads a large secular group in Houston, and he thus has an impact on a number of other people. Since people rarely make actual comments, and far from all readers are fans of my Facebook page, we can estimate that many more made similar decisions but chose not to comment about it. In fact, the article was shared quite widely on social media, so it made quite an impact, and it is still spreading: the StumbleUpon clicks went from around 50 to over 1,000 in the last couple of days, for example.

However, others need more than a single article. I place myself in that number: I generally need significant exposure to ideas and shift my mind gradually. Or perhaps the initial articles I read did not make a strong enough case. In any event, like many others, I first discovered the idea of effective giving through an article, and followed the breadcrumbs in the links to GiveWell, Giving What We Can, The Life You Can Save, and other similar organizations. I was then intrigued enough to go to a presentation about it given by Max Harms. While already oriented toward effective giving by my previous reading, the presentation sold me on effective altruism as a movement. Presentations give people a direct opportunity to engage with and consider in depth the big questions surrounding effective giving. This is why I devote my time and money not only to writing articles, but also to promoting effective altruist-themed presentations.

For example, I am collaborating with Jon Behar from The Life You Can Save to spread Giving Games. This participatory presentation educates the audience about effective giving by providing all participants with a pool of actual money, $10 per attendee, and has them discuss fundamental questions about where to give that money. In the course of the Giving Game, participants explore their values and motivations for donations, what kind of evidence they should use to evaluate charities, and how to avoid thinking errors in their giving. After the discussion, the group votes on which charity should get the money. The Life You Can Save then donates that money on behalf of the group.

InIn has strong connections with reason-oriented organizations due to our focus on spreading rational thinking, and is partnering with The Life You Can Save to bring the Giving Game to these organizations, starting with the Secular Student Alliance (SSA). The SSA is an international organization uniting hundreds of reason-oriented student clubs around the world, but mainly in the United States. I proposed the idea to August Brunsman, the Executive Director of the SSA and a member of the InIn Advisory Board. He himself is passionate about promoting social justice, but had little familiarity with Effective Altruism. I told him more about Effective Altruism and the Giving Game model, and he and other SSA staff members decided to approve the event. Together, InIn and The Life You Can Save created a Giving Game event specifically targeted to SSA clubs, and the SSA is now actively promoting the Giving Game to its members.

I am delighted with this outcome. As a former President of an SSA club, I can attest that my past self would have been very eager to host this type of event. Looking back, I would have greatly benefited from taking the time to sit down, discuss, and reflect seriously on my giving in a context where my decision had real-world consequences. This is the type of activity that would have strongly impacted my thinking and behavior around donations going forward. The Life You Can Save has dedicated $10,000 to its initial pilot program for SSA members and has promised to fundraise more if it works out; at least 1,000 students will participate in these games as a result of the collaboration between InIn and The Life You Can Save.

How much impact will this have on the world? I cannot say for sure. I do not have the kind of carefully defined measures of impact that GiveWell can provide for direct-action charities. Indeed, it is really difficult to measure the actual impact of any marketing effort. The best we can do is build chains of evidence. For example, this article suggests a powerful long-term impact for donations that support Giving Games. Such estimates apply more broadly to contributions that promote effective giving to the public.

Sure, it is hard to know for sure the exact effects that my efforts to spread the message of effective giving have on the world. Yet when I sit down and think about it, and make my decisions rationally, I am very happy to dedicate my large donations, my monthly giving, and my systematic volunteer efforts to publicizing the message of effective giving. While it does not give me the same warm feelings as giving to direct-action charities, when I use my head to direct my heart I realize that sponsoring such activities makes the most difference in maximizing donations to effective charities.

29 comments
Elo:

Don’t you want your charitable efforts to do the utmost good they can in the world?

I suspect this is going to put people on the back foot (as in, a defensive position, making it hard to take in the rest of the article), especially as it's a rhetorical question coming from the negative.
Try:
"So you want your charitable efforts to do the most good they can in the world?"

imagine

I know plenty of people for whom this word is a turn-off if not used carefully. Knowing as well that some people don't imagine the way other people do means you need to be quite careful with the placement of the word "imagine".

my personal perspective

sounds like it's purposely weakening the quality of the source. "it's just me saying this"
remove "personal"

really passionate

remove "really"

I have taken the... pledges

there's an implicit boast in this statement; and an accusation towards the reader as to why they have not done it yet. That's not a good attitude to be spreading, and probably won't motivate people.

identify as...

keep your identity small ... identifying is something we shouldn't recommend.

four of GiveWell’s top choice charities for 2015.

Top for what? Include the relevant information. Without it, this is more fluff than value sharing.

A widely-used general estimate is that for every 1 person who shares an article, 100 people read an article thoroughly, while many more skim it.

source?

This article could do with a number of improvements...

Thank you, this is really great stuff! I will make improvements to the article in the venue I have control over, Intentional Insights. I will also aim to improve my writing more broadly: being more positively oriented, giving clearer sourcing, and orienting more clearly toward motivating people effectively. Thanks!

I don't have a lot of time so this comment will be rather short and largely insufficient at fully addressing your post. That said, I tend to side with the idea presented in this article: http://www.nickbostrom.com/astronomical/waste.html

Essentially, I fail to see how anything other than advancing technology at the present could be the most effective route. How would you defend your claims of effective charity against the idea that advancing technology and minimizing existential risks instead of giving to those currently in need are ultimately the most effective ways for humans to raise utility long-term?

EDIT: I suppose it would be worth noting here that I have a fairly specific value set in place already. Basically, I favor a specific view of utilitarianism that has three component values I've decided (and would argue) are each important: intelligence, happiness, and security. In my thinking these three form a sort of triangle, with intelligence [and knowledge] leading to "higher happiness" and allowing for "higher security" (intentionally adapting to threats), while also being intrinsically valuable. Security in a general sense basically means the ability to resist threats and exist for an extended time, bolstering happiness and knowledge by preserving them for extended periods. Happiness, of course, is the typical utilitarian ideal: it is inherently good. And as previously mentioned, knowledge allows higher-level happiness, and security allows prolonged happiness.

Given this model, or a more standard model as I don't have time to fully articulate my idea, the charities you listed seem to be somewhat ineffective compared to other more direct attempts at increasing security and knowledge, which I would argue are the two values which we should currently be focused on increasing even at the cost of present-day happiness.

Not to diminish what you're doing, as it is still much better than not giving anything at all or giving to less effective charities, given your goal. It's more that I'd need to be convinced to donate to these charities instead of otherwise using my money.

This depends on what your ideas are regarding effective charities. For example, you can consider MIRI getting money to be the optimal outcome. In that case, is it better for you to give to MIRI directly, or for you to give to a meta-charity that persuades others to give to MIRI? My point in the article is that meta-charities are a better return on investment for rational donors than direct-action charities, such as MIRI, which directly does the research itself. On the other hand, one aspect of the work of Intentional Insights is to encourage people to give money to MIRI and other organizations mitigating existential risk.

I agree to some extent, depending on how efficient advertising for a specific charity through a meta-charity is. I see what you're saying now after re-reading it; to be honest, I had only very briefly skimmed it last night/morning. Curious, do you have any stats on how effective Intentional Insights is at gathering more money for these other charities than is given to them directly?

Also, how does InIn decide whether something is mitigating existential risk? I'm not overly familiar with the topic, but donations to the Against Malaria Foundation and the others mentioned don't sound like the specific sort of charity I'm most interested in.

Yup, both good questions.

For the answer to the first, about effectiveness, see the two paragraphs starting with "For some." It's pretty hard to measure the exact impact of marketing dollars, so the best equivalent is combining how widely read an article is with specific evidence of its impact on individuals, a mix of quantitative and qualitative approaches. Thus, we can see that this article was widely shared, over 1,000 times, which means it was likely read by over 100,000 people. Moreover, the article is clearly impactful, as we can see from the specific comment of the person who was impacted, and from his sway with others in his role as group leader. We can't see the large numbers of people who were impacted but chose not to respond, of course.

For the answer to the second: donations to AMF don't do that much to mitigate existential risk. However, getting people turned on to Effective Altruism does, since they then become familiar with the topic of existential risk, which occupies a lot of attention among effective altruists, including attention to MIRI.

The problem with selling existential risk to the broad audience is that honestly, they generally don't buy it. It's hard for them to connect emotionally to AI and other existential risk issues. Much easier to connect emotionally to GiveWell, etc. However, once they get into Effective Altruism, they learn about existential risk, and are more oriented toward donating to MIRI, etc.

This is the benefit of being strategic and long-term oriented - rational - about donating to InIn. Getting more people engaged with these issues will result in more good than one's own direct donations to MIRI, I think. But obviously, that's my perspective; otherwise I wouldn't have started InIn and would have just donated directly to MIRI and other causes that I held important. It's up to you to evaluate the evidence. One path that many donors who give to InIn choose is to spread their donations, giving some to InIn and some to MIRI. It's up to you.

I'm in the middle of writing an essay due tomorrow morning, so pardon the slightly off-topic and short reply (I'll get back to you on the other matters later), but I am particularly curious about one topic that comes up here a lot, as far as I can tell, in discussions of existential risk: AI and its relation to existential risk. By the sounds of it I may hold an extremely unpopular opinion: while I acknowledge that AI could pose an existential risk, my personal idea (which I don't have the time to discuss here, or the points required to make a full post on) is that an AI is probably our best bet at mitigating existential risk and maximizing the utility, security, and knowledge I previously mentioned. Does that put me at odds with the general consensus on the issue here?

I wouldn't say your opinion is at odds with many here. Many hold unfriendly AI to be the biggest existential risk, and friendly AI to be the best bet at mitigating existential risk. I think so as well. My personal opinion, based on my knowledge of the situation, is that real AI is at least 50 years off, and more likely on the scale of a century or more. We are facing much bigger short and medium-term existential risks, such as nuclear war, environmental disaster, etc. Helping people become more rational, which is the point of Intentional Insights, mitigates short, medium, and long-term existential risks alike :-)

We are facing much bigger short and medium-term existential risks, such as nuclear war, environmental disaster, etc.

Do we, now? Tell me about the short-term existential risk of an environmental disaster.

I stated short and medium-term risks in that sentence. I have 98% confidence that you are more than smart enough to understand that short-term risk applies to things like nuclear war more than environmental catastrophe, and are just trying to be annoying with your comment.

You're badly calibrated :-P

OK, tell me about the medium-term existential risk of an environmental disaster.

Lol, thanks for the calibration warning.

Not interested in discussing environmental disasters. I've been reading way too much about this with the new climate accord to want to have an LW-style discussion about it. I think we can both agree that there is significant likelihood of problems, such as major flooding of low-lying areas, in the next 20-30 years.

I think we can both agree that there is significant likelihood of problems, such as major flooding of low-lying areas, in the next 20-30 years.

There were floods in the past that produced damage, and there will likely be some in the future, but why do you believe it's an X-risk?

I think floods would only be one type of problem from climate change. Others would be extreme weather, such as hurricanes, tornadoes, etc. These would be quite destabilizing for a number of governments and would contribute to social unrest, which has unanticipated consequences. Even worse, at some point we could face abrupt climate change.

Now, this is all probabilistic, and I'm not saying it will necessarily happen, but this is a super-short version of why I consider climate change an X-risk.

We are facing much bigger short and medium-term existential risks

...magically transforms into...

there is significant likelihood of problems

Heh. So, "the sky is falling!" means "a chance of rain on Monday"?

I just gave one example of the kind of environmental problem quite likely to occur within the medium-term. There are many others. Like I said, not interested in discussing these :-)

Millions of Bangladeshis having to relocate (or build dykes) would indeed be a problem, but hardly an existential risk in the LWian sense of the term.

I replied to this point here

I think we can both agree that there is significant likelihood of problems, such as major flooding of low-lying areas, in the next 20-30 years.

This is so nostalgic, this was what the GW alarmists were saying 20 years ago.

You still haven't taken up the bet that you said you would

Do we, now? Tell me about the short-term existential risk of an environmental disaster.

Yellowstone, strong solar flares, and asteroid impacts?

Solar flares may be pretty bad now that we are so reliant on the power grid, but they are hardly an existential risk; Yellowstone erupts about once every 800,000 years on average, which is hardly short-term; and asteroid impacts large enough to worry about are even rarer than that.

Rarity of events doesn't mean they can't happen in the short term.

It doesn't mean that they can't happen as in "probability equals zero", but it does mean that the probability that they happen in any given decade is pretty much negligible.

Whether a probability is negligible depends on the impact of an event and not only on its probability.

Well, for that matter it also depends on what you can do about it, and I have no idea how we would go about preventing Yellowstone from erupting.

gjm:

We might be able to reduce the harm it did, even if we couldn't stop it erupting.

Well, for that matter it also depends on what you can do about it, and I have no idea how we would go about preventing Yellowstone from erupting.

I remember a proposal about cooling down Yellowstone by putting a lake on top of it.

If you spend more money, you can ram carbon nanofiber rods deep into the ground. If the rods are thick enough, the lava shouldn't do much damage to them, and they can very effectively transport heat to the surface. Maybe you even get electricity as a bonus for cooling down Yellowstone, so the project would pay for itself.

Sigh. Don't we have a bragging thread for so much self-congratulation and marvelling at one's awesomeness?