Edit: Carl Shulman made some remarks that have caused me to seriously question the soundness of the final section of this post. More on this at the end of the post.

Consider the following two approaches to philanthropy:

The “local” approach (associated with "satisficing") is to consider those philanthropic opportunities that are "close to oneself" in some sense (immediately salient, within one's own communities, in one's domains of expertise). The “global” approach (associated with "maximizing") is to survey the philanthropic landscape, search for the best philanthropic opportunities in absolute terms, and devote oneself to those.

In practice nobody's approach to philanthropy is entirely global: one is necessarily limited both by the fact that the range of possibilities salient to oneself is smaller than the total range of possibilities and by the fact that one has limited computational power. But there is nevertheless substantial variation in the degree to which individuals' approaches to philanthropy are global or local.

Here I'll compare the pros and cons of the local approach and the global approach and attempt to arrive at some sort of synthesis of the two.

Disclosure: I volunteer for GiveWell and may work for them in the future.

In favor of the local approach to philanthropy:

1. One problem with the global approach to philanthropy is that it often leads to reasoning about and/or designing approaches to problems in domains that one knows little about. According to a June 2009 GiveWell blog entry:

One of the consistent refrains we’ve seen in aid literature is the importance of local participation/enthusiasm/ownership for aid projects. Many programs have been criticized for being too “top-down” (i.e., imposing outsiders’ designs on local communities), with the implication that more “bottom-up” programs (i.e., getting local people to participate in the design and execution of programs) would be more likely to create real and lasting change.

All else being equal, one has a better chance of succeeding at a philanthropic effort in a domain where one has expertise and familiarity with the relevant issues than in a domain where this is not the case. Moreover, the prospect of doing more harm than good is heightened in the latter case.

2. The search time involved in the local approach is much smaller than that involved in the global approach. In the local approach one simply makes a choice from among the salient options. In the global approach one is led to sort through the millions of philanthropic efforts that one could engage in and attempt to gauge their efficacy against one another. Whatever time one spends searching could instead be spent directly pursuing a philanthropic effort, so the global approach carries an opportunity cost.

3. As referenced in the introduction, humans lack the knowledge and computational resources necessary to do a good job of maximizing globally in a domain as complicated as philanthropy. In the limit, as one's understanding of the world as a system goes to zero, one's philanthropic efforts tend to become randomly distributed in the space of expected utilities. By way of contrast, it seems likely that, assuming a Solomonoff prior, if everybody did what looked like a good idea locally, then even in the absence of an understanding of the world as a whole, the collective expected utility of their actions would be positive (see the sketch below).
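To make this concrete, here is a minimal simulation sketch. It is not a model of Solomonoff priors; it only illustrates the narrower claim that choosing on an informative local signal yields positive expected utility, while choosing with no understanding of the options yields roughly zero. All parameters are illustrative assumptions.

```python
# A minimal sketch: agents choose among actions whose true utilities are
# zero-mean. Picking at random gives ~0 expected utility per agent;
# picking on a noisy-but-informative local signal gives positive utility.
import random

random.seed(0)
N_AGENTS, N_ACTIONS, NOISE = 10_000, 20, 0.5  # illustrative assumptions

random_total, local_total = 0.0, 0.0
for _ in range(N_AGENTS):
    utils = [random.gauss(0, 1) for _ in range(N_ACTIONS)]
    # "Global approach with no understanding": a uniformly random pick.
    random_total += random.choice(utils)
    # "Local approach": pick the action whose noisy local signal looks best.
    signals = [u + random.gauss(0, NOISE) for u in utils]
    local_total += utils[max(range(N_ACTIONS), key=signals.__getitem__)]

print(f"random picks: {random_total / N_AGENTS:+.3f} utils/agent")  # ~0
print(f"local signal: {local_total / N_AGENTS:+.3f} utils/agent")   # clearly > 0
```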

4. Under certain assumptions the local approach solves coordination problems that the global approach leaves unsolved. If everybody works in their respective corner to make the world a better place, then they are likely to notice the other people who are working on similar efforts and communicate with them, and so are better able to coordinate and reduce duplicated labor. If one adopts the global approach to philanthropy, then one is more likely to be working on problems where one doesn't know the other parties working on them, and this can lead to duplicated labor. For an extreme example, suppose that there are a million people and that for each i, the i'th person can either generate 1 util by completing task A_i or attempt task B, which generates 100 utils but only needs to be done once. If each person tries to complete task B, then the total utility of their actions is 10^2 rather than the 10^6 it could have been had they each been optimizing locally.

In the for-profit world the coordination problem is solved by the market mechanism; one should not forget that there is no natural mechanism for solving the coordination problem in the non-profit world.
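As a sanity check, the arithmetic of the task A_i / task B example can be written out directly (a minimal sketch using the numbers from the example):

```python
# A million agents; each can earn 1 util from a personal task A_i, or
# attempt the shared task B, which pays 100 utils but only once in total.
N = 10**6

everyone_tries_B = 100           # B done once, every A_i undone: 10**2 utils
everyone_local = N * 1           # each agent does their own A_i: 10**6 utils
best_mix = (N - 1) * 1 + 100     # one agent does B, the rest do A_i

print(everyone_tries_B, everyone_local, best_mix)  # 100 1000000 1000099
```

(The true optimum has exactly one person doing task B, but the point stands: uncoordinated global maximizers lose almost all of the available value.)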

5. One tends to get more personal satisfaction from working on a cause that one knows well and feels good about working on than from a cause that is far removed from one's domains of expertise. This comes partially from greater certainty about the value of one's work and partially from that other kind of status: the confidence that comes from feeling knowledgeable. Deriving personal satisfaction from working on a philanthropic cause allows one to devote more to it than one otherwise would be able to; I would guess that in some cases one can put more than an order of magnitude more into the cause. This holds in some measure even for those who are donating funds rather than being directly involved in a given project.

In favor of the global approach to philanthropy:

The argument in favor of the global approach is that philanthropic efforts differ in cost-effectiveness by many orders of magnitude. For example, in "Your dollar goes further overseas" the GiveWell staff wrote:

We understand the sentiment that "charity starts at home," and we used to agree with it, until we learned just how different U.S. charity is from charity aimed at the poorest people in the world.

and give some discussion together with a rough comparison of the cost-effectiveness of international health aid with that of certain domestic efforts. Though I have (marginally) more familiarity with underprivileged inner-city youth than with life in sub-Saharan Africa, it seems that despite my ignorance of the latter, and despite the problems with the global approach to philanthropy that I mentioned above, it's still more cost-effective to donate to international health aid efforts than to donate to improve the lives of underprivileged inner-city youth: the poor in the US are rich relatively speaking, so at the margin sub-Saharan Africans can benefit far more from philanthropy than the poor in the US can.
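To illustrate the marginal-benefit point, here is a toy calculation under the standard (and contestable) assumption of logarithmic utility of income; the income figures are illustrative assumptions, not data:

```python
# Under log utility, the benefit of an extra dollar is roughly inversely
# proportional to income, so the same dollar does far more good for
# someone living on $500/year than for someone living on $15,000/year.
import math

US_POOR_INCOME = 15_000   # illustrative annual income, not real data
SSA_POOR_INCOME = 500     # illustrative annual income, not real data

def marginal_benefit(income, dollars=1.0):
    """Utility gain from `dollars` of extra income under log utility."""
    return math.log(income + dollars) - math.log(income)

ratio = marginal_benefit(SSA_POOR_INCOME) / marginal_benefit(US_POOR_INCOME)
print(f"$1 at the lower income does ~{ratio:.0f}x the good")  # ~30x here
```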

A domain where the potential upside is many orders of magnitude greater than in the domain of international health aid is that of existential risk reduction. Here I have much less familiarity with the issues than I do with international health aid efforts, because there has been less research into existential risk reduction than there has been into international aid and because the causal web influencing existential risk is very complex. Nevertheless, despite the problems with a global approach to philanthropy that I mentioned above, I feel that the potential upside of averting existential risk is sufficiently great that existential risk reduction is the cause that deserves my utilon-oriented philanthropic efforts.

A local approach within a global approach to philanthropy

In a recent comment Carl Shulman wrote:

Fermi calculations like the one in your post can easily show the strong total utilitarian case (that is, the case within the particular framework of total utilitarianism) for focus on existential risk, but showing that a highly specific "course of action" addressing a specific risk is better than alternative ways to reduce existential risk is not robust to shifts of many orders of magnitude in probability.
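For concreteness, a toy version of the kind of Fermi calculation Carl refers to might look like the following; every input is an assumption invented for illustration, not a claim of this post:

```python
# The total utilitarian case survives orders-of-magnitude errors in these
# inputs, but a comparison between two specific risk-reduction efforts
# would not: their inputs are each uncertain by similar amounts.
FUTURE_LIVES = 1e16        # assumed number of future lives at stake
P_EXTINCTION = 0.1         # assumed baseline probability of extinction
RELATIVE_RISK_CUT = 1e-5   # assumed fractional risk reduction per $1M
COST_DOLLARS = 1e6

expected_lives = FUTURE_LIVES * P_EXTINCTION * RELATIVE_RISK_CUT
print(expected_lives / COST_DOLLARS, "expected lives saved per dollar")  # 10000.0
```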

My intuition here is the same as Carl's. Combining the points in the above two sections of this post, I would propose as a heuristic that, out of the philanthropic courses of action that plausibly reduce existential risk, one should pursue those that one is most knowledgeable about, interested in, and competent to address.

This is only a heuristic; one can easily imagine that the cost-effectiveness of one existential risk reduction effort decisively swamps that of another by enough orders of magnitude that the considerations mentioned in the In favor of the local approach to philanthropy section are rendered moot. If this is the case, then it would be natural to apply the heuristic to the collection of interventions that are within a few orders of magnitude of one another rather than to existential risk reduction efforts in general.

Putting the considerations under the heading In favor of the local approach to philanthropy together with the idea that there seem to be a number of existential risk reduction opportunities within a few orders of magnitude of the one with the highest expected value pushes in the direction of a local approach to philanthropy within a cluster of existential risk reduction efforts.

I presently believe that the cluster of existential risk reduction efforts whose well-considered expected value is within a few orders of magnitude of the highest is larger than it may initially seem. There are often large error bars around the expected value of a given intervention, because model uncertainty makes the assessment of an intervention's expected value vary heavily with new data. Averaging over such variations can place interventions of initially quite different apparent cost-effectiveness in a comparable range, as in the sketch below. I will have more to say about this in a future post.
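A minimal sketch of what I mean by averaging over such variations; the credences and estimates below are invented for illustration:

```python
# Under each intervention's most-credible model, B looks 100x better than
# A (1e2 vs 1e0), but averaging expected value over model uncertainty
# puts the two within a factor of ~1.4 of each other.
def expected(estimates_and_credences):
    return sum(value * credence for value, credence in estimates_and_credences)

# (cost-effectiveness under a model, credence in that model)
A = [(1e4, 0.05), (1e0, 0.95)]   # probably weak, small chance of being huge
B = [(1e3, 0.30), (1e2, 0.70)]   # solid under either model

print(expected(A))  # ~501
print(expected(B))  # 370
```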


Edit: Carl suggests that the factors mentioned in In favor of the local approach to philanthropy are not sufficiently strong to shift one's philanthropic focus from one existential risk reduction method to another on altruistic grounds when the apparent expected values of the two courses of action differ by more than an order of magnitude. Certainly the largest order of magnitude of existential risk reduction that one is able to effect swamps all the others, so one should try to get the order of magnitude right if possible.

So the question at stake is really whether the "averaging over such variations" mentioned above has a sufficiently strong dampening effect to place distinct existential risk efforts within an order of magnitude of one another in cost-effectiveness.

Here a relevant point is that many of the factors influencing the cost-effectiveness of existential risk reduction efforts are shared across risks. Some examples of such factors are the rate of response of technology to an influx of money and talent, international governmental relationships, the likely response of politicians to existential risks arising from new technology, and the quality of one's ability to gather information. The correlation between the uncertainties across risks cuts down on the uncertainty involved when one compares one cost-effectiveness estimate to another (the uncertainty involved in assessing the relative magnitudes of cost-effectiveness estimates is lower than the uncertainty involved in assessing their absolute magnitudes). This pushes in the direction of one being able to accurately assess relative orders of magnitude, and correspondingly to discern which efforts are cost-effective with enough confidence that the dampening effect mentioned above is small.
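A minimal sketch of this correlation point, with invented numbers: when most of the uncertainty in two log cost-effectiveness estimates comes from shared factors, the relative estimate is far tighter than either absolute estimate.

```python
# Shared factors contribute 2 orders of magnitude of uncertainty to each
# estimate but cancel in the comparison between the two efforts.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
shared = rng.normal(0, 2.0, n)                   # common factors (in log10 units)
log_ce_1 = 5 + shared + rng.normal(0, 0.5, n)    # effort 1, log10(utils/$)
log_ce_2 = 4 + shared + rng.normal(0, 0.5, n)    # effort 2, log10(utils/$)

print(log_ce_1.std())                # ~2.06: absolute estimate very uncertain
print((log_ce_1 - log_ce_2).std())   # ~0.71: relative estimate much tighter
```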

I have yet to fully digest these points, but if they're sufficiently strong, they would have the effect of shifting the conclusion of my post to "despite the advantages of the local approach to philanthropy over the global approach, in the domain of existential risk reduction the global approach wins out."

Beyond that, the very strong desirability of correctly pinning down the orders of magnitude of cost-effectiveness estimates points toward the high value of information. Quoting Nick Bostrom's Existential Risk FAQ:

Research into existential risk and analysis of potential countermeasures is a very strong candidate for being the currently most cost-effective way to reduce existential risk.  This includes research into some methodological problems and into certain strategic questions that pertain to existential risk.  Similarly, actions that contribute indirectly to producing more high-quality analysis on existential risk and a capacity later to act on the result of such analysis could also be extremely cost-effective.  This includes, for example, donating money to existential risk research, supporting organizations and networks that engage in fundraising for existential risks work, and promoting wider awareness of the topic and its importance.

Comments

Here I have much less familiarity with the issues than I do with international health aid efforts because there has been less research into international aid than there has been in existential risk reduction

Did you get this backward? - isn't there more research into int'l aid than x-risk?

My concern with donating to SIAI in particular has been that I don't have a clear understanding of how to measure potential success. I agree with the general cause, and that if my dollars can realistically and provably help create Friendly AI then that'd be great, but it's not clear to me that Friendly AI is actually possible, or that the SIAI project is the best way to go about it. And frankly, the math and other facts involved are beyond my ability to really understand.

This concern was cemented when I read the recent interview between SIAI (Eliezer in particular? Not sure) and Holden, in which Holden asked "what do you have to say to people who are deciding whether to donate? How much effect would their dollars actually have, and isn't this just Pascal's Mugging?" and the answer was essentially "Right now we aren't necessarily looking to expand and it doesn't necessarily make sense to donate here if you aren't involved already. Others may invoke Pascal's Mugging on our behalf, but we don't advocate that argument."

I'm also biased in favor of human-centric (as opposed to trans/posthuman) solutions to current global problems. This is mostly because, well, I'm human and I like it that way. I feel like if humanity can't solve its own problems with its current level of intelligence, it's because we were too lazy, not because we weren't smart enough. This may change over the next decades as transhuman ideals become more concrete and less weird and scary to me. I don't really defend this position - it's based entirely on irrational pride - but I haven't quite found enough reasoning to abandon it.

For now, the question I'm left with is - what OTHER existential risks are out there, how cost effective are they to fix, and do we have an adequate metric to judge our success?

For now, the question I'm left with is - what OTHER existential risks are out there, how cost effective are they to fix, and do we have an adequate metric to judge our success?

The edited volume Global Catastrophic Risks addresses this question. It's far more extensive than Nick Bostrom's initial Existential Risks paper and provides a list of further reading after each chapter.

Here are some of the covered risks:

  • Astrophysical processes such as the stellar lifecycle
  • Human evolution
  • Super-volcanism
  • Comets and asteroids
  • Supernovae, gamma-ray bursts, solar flares, and cosmic rays
  • Climate change
  • Plagues and pandemics
  • Artificial Intelligence
  • Physics disasters
  • Social collapse
  • Nuclear war
  • Biotechnology
  • Nanotechnology
  • Totalitarianism

The book also has many chapters discussing the analysis of risk, risks and insurance, prophesies of doom in popular narratives, cognitive biases relating to risk, selection effects, and public policy.

Physics disasters

What are physics disasters?

What are physics disasters?

Breakdown of the vacuum state, conversion of matter into strangelets, mini black holes, and other things that people fear from a particle accelerator like the LHC. It boils down to, "Physics is weird, and we might find some way of killing ourselves by messing with it."

What is the risk from Human Evolution? Maybe I should just buy the book...

It's well-written, though depressing, if you take "only black holes will remain in 10^45 years" as depressing news.

Evolution is not a forward-looking algorithm, so humans could evolve in dangerous, retrograde ways, and thus extinguish what we currently consider valuable about ourselves, or even the species itself, should it become too dependent on current conditions.

I'm also biased in favor of human-centric (as opposed to trans/posthuman) solutions to current global problems. This is mostly because, well, I'm human and I like it that way. I feel like if humanity can't solve its own problems with its current level intelligence, it's because we were too lazy, not because we weren't smart enough.

I'm curious to unpack this a bit. I have a couple of conflicting interpretations of what you might be getting at here; could you clarify?

At first, it sounded to me as if you were saying that you consider intelligence increase to be "transhuman", but laziness reduction (diligence increase?) to be not "transhuman". Which made me wonder, why the distinction?

Then, I thought you might be saying that laziness/diligence is morally significant to you, while intelligence increase is not morally significant. In other words, if humanity fails because we are lazy, we deserved to fail.

Am I totally misreading you? I suspect I am, at least in one of the above interpretations.

I haven't unpacked the value/bias for myself yet, and I'm pretty sure at least part of it is inconsistent with my other values.

I'm not necessarily morally opposed to artificial (i.e. drugs or cybernetic) intelligence OR diligence enhancements. But I would be disappointed if it turned out that humanity NEEDED such enhancements in order to fix its own problems.

I believe that diligence is something that can be taught, without changing anything fundamental about human nature.

Why not a combination of approaches where you do the global approach for a while in order to figure out which domains are the most important and then become a domain expert in those?

If your global exploration suggested all domains were approximately equivalent, you could just switch to whatever you already have local expertise in, as you suggest.

I suppose, then, that the most utility could be gained by donating time to local projects in your area of interest (tutoring kids, the Red Cross, what have you) and money to global projects. It doesn't take a lot of expertise to send $10 to starving kids in Africa. You decide that sub-Saharan Africa is a place where your money will probably go a lot farther than most other places, you google around for some charities, do a little background research, and send in your money.

But with donating time, where you're doing something physically, you're going to generate a lot more utility by doing something you already know something about than by something you'd have to learn from scratch.

Research into existential risk and analysis of potential countermeasures is a very strong candidate for being the currently most cost-effective way to reduce existential risk.

Do I understand this correctly to mean that we should first try to scrutinize our models and methods before jumping to a conclusion about what we ought to do, e.g. solve friendly AI? I did agree, but since then I have been told that the case for risks from AI is clear-cut and the methodologies outlined in the Sequences sound.

Do I understand this correctly to mean that we should first try to scrutinize our models and methods before jumping to a conclusion about what we ought to do, e.g. solve friendly AI?

This is a little bit tricky in the case of Friendly AI, because Friendly AI is like the ultimate researcher of existential risks and potential countermeasures. But basically, currently there are three major options for those folk worried about x-risk and who want to help out with donations, at least as I see it. The first option is to donate to SIAI, perhaps earmarking it for Friendliness research. This option is for those who are familiar with all the arguments and either don't think it's Pascalian or don't mind it. The second option is to donate to FHI. They're actively researching possible existential risks, they're at Oxford, they're high status, you can explain their purpose to your friends and family, and they've proven they're pretty good at doing interesting research and publicizing it. Bostrom is effin' prolific. The third option is to save your cash and wait awhile. A better option might come up, you might get important info, et cetera. All three options seem reasonable to me. A fourth option might be to invest the money in your ability to wisely donate in the future; take a university course on probabilistic modeling or something.