The logic that you should donate only to a single top charity is very strong. But when faced with two ways of making the world better there's this urge to deny the choice and do both. Is this urge irrational or is there something there?

At the low end splitting up your giving can definitely be a problem. If you give $5 here and $10 there it's depressing how much of your donations will be eaten up by processing costs:

The most extreme case I've seen, from my days working at a nonprofit, was an elderly man who sent $3 checks to 75 charities. Since it costs more than that to process a donation, this poor guy was spending $225 to take money away from his favorite organizations.

By contrast, at the high end you definitely need to divide your giving. If someone decided to give $1B to the AMF it would certainly do a lot of good. Because charities have limited room for more funding, however, after the first $20M or so there are probably other anti-malaria organizations that could do more with the money. And at some point we beat malaria, and other interventions start having a greater impact for your money.

Most of us, however, are giving enough that our donations are well above the processing-cost level but not enough to satisfy an organization's room for more funding. So what do you do?

If one option is much better than another then you really do need to make the choice. The best ones are enough better than the average ones that you need to buckle down and pick the best.

But what about when you're not sure? Even after going through all the evidence you can find, you just can't decide whether it's more effective to take the sure thing and help people now, or to support the extremely hard-to-evaluate but potentially crucial work of reducing the risk that our species wipes itself out. The strength of the economic argument for giving only to your top charity is proportional to the difference between it and your next choice. If the difference is small enough and you find it painful to pick only one it's just not worth it: give to both.
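To make that proportionality concrete, here's a minimal sketch with invented effectiveness numbers (nothing below comes from the post itself):

```python
# A rough model with made-up numbers: the impact given up by splitting is
# proportional to the effectiveness gap between your top two charities.
def forgone_impact(donation, eff_top, eff_second, fraction_to_second):
    """Impact forgone by sending `fraction_to_second` of `donation` to the
    runner-up instead of the top charity (effectiveness in arbitrary units per dollar)."""
    return donation * fraction_to_second * (eff_top - eff_second)

# Wide gap: splitting is expensive.
print(forgone_impact(1000, eff_top=2.0, eff_second=1.0, fraction_to_second=0.5))   # 500.0
# Near-tie: splitting costs almost nothing.
print(forgone_impact(1000, eff_top=2.0, eff_second=1.95, fraction_to_second=0.5))  # about 25
```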

(It can also be worth it to give to multiple organizations because of what it indicates to other people. I help fund 80,000 Hours because I think spreading the idea of effective altruism is the most important thing I can do. But it looks kind of sketchy to only give to metacharities, so I divide my giving between them and GiveWell's top pick.)

I also posted this on my blog

44 comments

Here's another reason one might want to diversify their altruistic spending, which may or may not be distinct from the reasons in Jeff's post: one's budget may be too small to see significantly diminishing marginal returns, but one may belong to a population of people which as a whole sees diminishing marginal returns. Then one might want to diversify one's spending so that other members of the population diversify their spending. GiveWell used such reasoning when it suggested splitting one's donation among GiveWell's top-rated charities so that other GiveWell donors, following the same principle, would collectively cause GiveWell's money-moved metric to reflect this diversity, which would in turn give GiveWell better access to more charities.

Agreed – the arguments against diversification don't take into account the fact that individuals' choices will be more highly correlated if they use similar decision processes. In order to make an optimal allocation decision you would need to estimate the total amount of money being donated by people who are using an algorithm similar to yours and decide as though you were allocating the entire amount. Sounds familiar, eh?

  • Agree with purchasing non-sketchiness signalling and utilons separately. This is especially important if, like jkaufman, a lot of your value comes from being an effective altruist role model.

  • Agree that if diversification is the only way to get the elephant to part with its money then it might make sense.

  • Similarly, if you give all your donations to a single risky organization and they turn out to be incompetent then it might demotivate your future self. So you should hedge against this, which again can be done separately from purchasing the highest-expected-value thing.

  • Confused about what to do if we know we're in a situation where we're behaving far from how rational agents would, but aren't sure exactly how. I think this is the case with purchasing x-risk reduction, and with failures to reach Aumann agreement between aspiring effective altruists. To what extent do the rules still apply?

  • Lots of valid reasons for diversification can also serve as handy rationalizations. Diversification feels like the right thing to do - and hey, here are the reasons why! I feel like diversification should feel like the wrong thing to do, and then possibly we should do it anyway but sort of grudgingly.

It can also be worth it to give to multiple organizations because of what it indicates to other people.

Here it is. Signalling has effects on the real world, and if I get other people to contribute to charities, the total effect can be much greater than my personal contribution alone. Optimizing for the total effect is better than merely optimizing the effect of my own money.

Let's assume there are two important charities, X and Y. I have two friends, A and B. Friend A would be willing to donate to charity X, assuming that someone else in his social sphere does too, but he would never donate to Y. On the other hand, friend B would be willing to donate to charity Y, assuming that someone else in her social sphere does too, but she would never donate to X.

If I donate to both charities and give the relevant piece of information to each of my friends, I can get A to donate to X and B to donate to Y, which could be more useful than if both A and I donate all our money to X but B does not donate anything.

Mathematically speaking, assume that A, B, and I are each willing to donate $100; charity X creates 2 utilons per dollar, and charity Y creates 1 utilon per dollar. If I donate all $100 to X, my personal contribution to the world is 200 utilons, and together with A we create 200 + 200 = 400 utilons. If I donate $50 to X and $50 to Y, my personal contribution to the world is 150 utilons, and together with A and B we create 150 + 200 + 100 = 450 utilons. With these specific numbers, the second option is better.

But even if the difference among the charities is greater, I can improve the result by donating more to X and less to Y. Ideally, I should donate to Y only as much as B needs to be socially convinced to donate too.
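Here's a minimal sketch of that calculation. The 2:1 utilon rates and $100 budgets are the ones from the comment; the $10 threshold for B in the last line is a hypothetical stand-in for "only as much as B needs to be socially convinced":

```python
# Toy model from the comment: X yields 2 utilons/$, Y yields 1 utilon/$;
# friend A gives $100 to X only if I give something to X, and friend B gives
# $100 to Y only if I give something to Y.
UTILONS_PER_DOLLAR = {"X": 2, "Y": 1}

def total_utilons(my_split):
    mine = sum(UTILONS_PER_DOLLAR[c] * amount for c, amount in my_split.items())
    a_follows = 100 * UTILONS_PER_DOLLAR["X"] if my_split.get("X", 0) > 0 else 0
    b_follows = 100 * UTILONS_PER_DOLLAR["Y"] if my_split.get("Y", 0) > 0 else 0
    return mine + a_follows + b_follows

print(total_utilons({"X": 100}))          # 400: all to X, so B gives nothing
print(total_utilons({"X": 50, "Y": 50}))  # 450: the 50/50 split from the comment
print(total_utilons({"X": 90, "Y": 10}))  # 490: give Y only B's (hypothetical) $10 threshold
```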

Or you could donate in secret and lie to your friends, for 200+200+100 = 500 utilons, assuming you have no negative effects from lying.

Ideally, I should donate to Y only as much as B needs to be socially convinced to donate too.

Within the constraints of the model, B may have a donation threshold above which they'll be convinced to donate, with the size of their donation held constant; but in reality, the size of people's donations may be heavily influenced by how much they see their peers donating.

Needs experimenting on real humans. Imagine the following situations:

a) I donate $100 to charities, all of them B considers useful;
b) I donate $50 to charities, all of them B considers useful;
c) I donate $100 to charities, but only $50 to charities that B considers useful.

How much of a "social pressure" does situation c) make on a typical person B? As much as a), as much as b), or somewhere in between?

How much does the response depend on how exactly I present the data to them? For example: "I donated $100 to charities, for example this one."

The strength of the economic argument for giving only to your top charity is proportional to the difference between it and your next choice. If the difference is small enough and you find it painful to pick only one it's just not worth it: give to both.

According to Brian Tomasik's estimates, a dollar donated to the most cost-effective animal charity is expected to prevent between 100 days and 51 years of suffering on a factory farm. Even if you think this charity is only 5% more effective than your next choice, donating to this charity would alleviate between 5 days and 2.55 years of suffering more than would donating to the second best charity. On a very modest donation of, say, $200 per year, the difference amounts to between ~3 and ~500 years of suffering. In light of these figures, it doesn't seem that the fact that "you find it painful to pick only one" charity is, in itself, a good reason to pick both.
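For what it's worth, a quick sketch reproducing that arithmetic, using only the figures quoted in the comment:

```python
# Reproduce the back-of-the-envelope numbers above: $1 prevents 100 days to
# 51 years of suffering, the top charity is taken to be 5% more effective
# than the runner-up, and the donation is $200 per year.
DAYS_PER_YEAR = 365.25

for years_per_dollar in (100 / DAYS_PER_YEAR, 51.0):  # low and high ends of the estimate
    gap_per_dollar = 0.05 * years_per_dollar          # 5% effectiveness gap
    print(round(200 * gap_per_dollar, 1), "extra years of suffering averted")
# Prints roughly 2.7 and 510.0, i.e. the "~3 to ~500 years" in the comment.
```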

On a very modest donation of, say, $200 per year, the difference amounts to between ~3 and ~500 years of suffering. It doesn't seem that the fact that "you find it painful to pick only one" charity is, in itself, a good reason to pick both

If I'm giving $200/year there are lots of options I could take to improve my impact:

  • I could spend less on myself so I can give more.
  • I could earn more so I could give more.
  • I could put more time into choosing the most effective charity.
  • I could limit my donations to only my top charity, even when I think other charities are almost as good.

All of these are painful to myself but have benefits to others, so to maximize my positive impact I should prioritize them based on the ratio of self-pain to other-benefit. What I'm claiming here is that the last option has a poor ratio, for charities that are close enough together in impact.
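A minimal sketch of that prioritization rule; the pain and benefit numbers are invented purely to show the mechanism, not estimates of anything:

```python
# Rank the options above by other-benefit per unit of self-pain; the numbers
# are placeholders, not estimates of anything real.
options = [
    ("spend less on myself",         50, 10),
    ("earn more",                    80, 20),
    ("research charities more",      30,  5),
    ("give only to my top charity",   2,  8),  # near-tied charities: tiny extra benefit
]

for name, benefit, pain in sorted(options, key=lambda o: o[1] / o[2], reverse=True):
    print(f"{name}: benefit/pain = {benefit / pain:.2f}")
# With near-tied charities, the last option lands at the bottom of the list.
```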

Not directly relevant, but is there a LW post or other resource of similar or greater caliber defending the idea that we should be assigning significant moral weight to non-human animals?

There is a very simple meta-argument: whatever your argument is for giving value to humans, it will also be strong enough to show that some non-humans are also valuable, due to a partial overlap between humans and non-humans in all the properties you might credibly regard as morally relevant.

In any case, I was using animal charities only because I'm more familiar with the relevant estimates of cost-effectiveness. On the plausible assumption that such charities are not many orders of magnitude more cost-effective than the most cost-effective human charity, the argument should work for human charities, too.

There is a very simple meta-argument: whatever your argument is for giving value to humans, it will also be strong enough to show that some non-humans are also valuable, due to a partial overlap between humans and non-humans in all the properties you might credibly regard as morally relevant.

How about in-group affiliation with members of your own species?

Do you really believe that, when a creature suffers intensely, your reasons for relieving this creature's suffering derive from the fact that you share a particular genotype with this creature? If you were later told that a being whom you thought belonged to your species actually belongs to a different species, or to no species at all (a sim), would you suddenly lose all reason to help her?

I don't, but I don't dismiss the possibility that other people may; I've certainly known people who asserted such.

whatever your argument is for giving value to humans, it will also be strong enough to show that some non-humans are also valuable

My argument for valuing humans is that they are human.

Really? Is it because you find the sequence of genes that define the human genome aesthetically pleasing? If science uncovered that some of the folks we now think are humans actually belonged to a different species, would you start eating them? If you became persuaded that you are living in a simulation, would you feel it's okay to kill these seemingly human sims?

Edit (2014-07-16): Upon re-reading this and some of my other comments in this thread, I realize my tone was unnecessarily combative and my interpretation of Qiaochu's arguments somewhat uncharitable. I apologize for this.

I disagree with the general attitude that my moral values need to cover all possible edge cases to be applicable in practice. I don't know about you, but in practice, I'm pretty good at distinguishing humans from nonhumans, and I observe that my brain seems to care substantially more about the suffering of the former than the latter.

And my brain's classification of things into humans and nonhumans isn't based on nucleotide sequences or anything that brains like mine wouldn't have had access to in the ancestral environment. When I dereference the pointer "humans" I get "y'know, like your mother, your father, your friends, hobos, starving children in Africa..." It points to the same things whether or not I later learn that half of those people are actually Homo neanderthalensis or we all live in a simulation. If I learn that I've always lived in a simulation, then the things I refer to as humans have always been things living in a simulation, so those are the things I value.

So I think either this constitutes a counterexample to your meta-argument or we should both taboo "human."

I don't understand your reply. In your earlier message you said that you value humans just because they are members of a certain species. But in your most recent message you say that your "brain's classification of things into humans and nonhumans isn't based on nucleotide sequences or anything that brains like mine wouldn't have had access to in the ancestral environment." Yet in order to know whether a being is a member of the species Homo sapiens, and hence whether he or she is a being that you morally value, you need to know whether his or her DNA contains a particular nucleotide sequence.

Earlier you also ask whether there are LW posts defending the idea that we should be assigning significant moral weight to non-human animals. The reaction to my earlier comment might provide a partial explanation why there are no such posts. When one makes a serious attempt to raise a few questions intended to highlight the absurd implications of a position defended by many members of this community, but universally rejected by the community of moral philosophers, folks here react by downvoting the corresponding comment and abstaining from actually answering those questions.

Still, I appreciate your effort to at least try to clarify what your moral views are.

This strikes me as an "is XYZ water" thing.

Like, man, I sure do like drinking water! Does what I like about it have anything to do with its being H2O? Well, not really, so it wouldn't be fair to say that the intension of "water" in my claim to like it is H2O.

What you like is the taste of water; if the liquid that you believe is water turns out to have a different molecular structure, you'd still like it as much. This example is illustrative, because it suggests that Qiaochu and others, contrary to what they claim, do not really care whether a creature belongs to a certain species, but only that it has certain characteristics that they associate with that species (sentience, intelligence, or what have you). But if this is what these people believe, (1) they should say so explicitly, and, more importantly, (2) they face the meta-argument I presented above.

I agree with Oligopsony that Qiaochu is not using "human" as a rigid designator. Furthermore, I don't think it's safe to assume that their concept of "human" is a simple conjunction or disjunction of simple features. Semantic categories tend to not work like that.

This is not to say that a moral theory can't judge some features like sentience to be "morally relevant". But Qiaochu's moral theory might not, which would explain why your argument was not effective.

If Qiaochu is not using "human" as a rigid designator, then what he cares for is not beings with a certain genome, but beings having certain other properties, such as intelligence, sentience, or those constitutive of the intensions he is relying upon to pick out the object of his moral concern. This was, in fact, what I said in my previous comment. As far as I can see, the original "meta-argument" would then apply to his views, so understood.

(And if he is picking out the reference of 'human' in some other, more complex way, as you suggest, then I'd say he should just tell us what he really means, so that we can proceed to consider his actual position instead of speculating about what he might have meant.)

Indeed, they are almost certainly picking out the reference of 'human' in a more complex way. Their brain is capable of outputting judgments of 'human' or 'not human', as well as 'kinda human' and 'maybe human'. The set of all things judged 'human' by this brain is an extensional definition for their concept of 'human'. The prototype theory of semantic categories tells us that this extension is unlikely to correspond to an intelligible, simple intension.

he should just tell us what he really means

Well, they could say that the property they care about is "beings which are judged by Qiaochu's brain to be human". (Here we need 'Qiaochu's brain' to be a rigid designator.) But the information content of this formula is huge.

You could demand that your interlocutor approximate their concept of 'human' with an intelligible intensional definition. But they have explicitly denied that they are obligated to do this.

So Qiaochu is not using 'human' in the standard, scientific definition of that term; is implying that his moral views do not face the argument from marginal cases; is not clearly saying what he means by 'human'; and is denying that he is under an obligation to provide an explicit definition. Is there any way one could have a profitable argument with such a person?

I guess so; I guess so; I guess so; and I guess so.

You are trying through argument to cause a person to care about something they do not currently care about. This seems difficult in general.


You are trying through argument to cause a person to care about something they do not currently care about.

It was Qiaochu who initially asked for arguments for caring about non-human animals.

[This comment is no longer endorsed by its author]

I don't know of a good argument for that position, but there's good evidence that some of the universal emotions discussed in CFAR's "Emotional API" unit (namely SEEKING, RAGE, FEAR, LUST, CARE, PANIC/GRIEF and PLAY) are experienced by nonhuman mammals. That fact might cause one to care more about animals.

Good question, how would one do this consistently? If you value agency/intelligence, you have to develop a metric which does not lead to stupid results, like having your utility function overwhelmed by insects and bacteria, due to their sheer numbers. Of course, one can always go by cuteness.

Is that necessarily stupid? Obviously it is if you only value agency/intelligence, and it's an empirical question whether insects and bacteria have the other characteristics you may care about, but given that it is an empirical question the only acceptable response seems to be to shut up and calculate.

Also not directly relevant, but is there any argument opposed to prioritizing animals of higher intelligence and "capacity for suffering", such as primates and cetaceans?

Personally, while I assign negative utility to animals suffering in factory farms, I adjust for the mental capacity of the animals in question (in broad terms "how much do I care about this animal's suffering relative to a human's?") and in many cases this is the controlling factor of the calculation. If I were deciding between charities which prevented human suffering on that order, clearly the difference between top charities would outweigh the magnitude of my suffering, but when the animals in question are mostly chickens, it's not clear to me that this is still the case. I discount tremendously on the suffering of a creature capable of this relative to humans.

I wasn't arguing that you should donate to non-human animal charities. I was arguing that if you do donate to non-human animal charities, you should donate solely to the most cost-effective such charity, even if you would get more fuzzies by splitting your donation between two or more charities. I was also implicitly suggesting that if you believe that non-human animal charities and human charities are comparably cost-effective, the argument generalizes to human charities, too. Discounting the suffering of non-human animals only serves to strengthen my argument, since it decreases the cost-effectiveness of non-human animal charities relative to that of human charities.

Discounting the suffering of non-human animals only serves to strengthen my argument, since it decreases the cost-effectiveness of non-human animal charities relative to that of human charities.

In that case, I'm not sure what your original argument was.

In that case, I'm not sure what your original argument was.

The argument was explained in the sentences immediately preceding the one you quoted.

The painfulness of the decision is also a form of disutility that has to be balanced against the difference between the charities though, which was the point of my original comment. If the difference between the values of the donations, when adjusted for the species involved, is less utility than the amount you personally lose from agonizing over how to apportion your donation, splitting it may result in higher utility overall.
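A minimal sketch of that comparison; every number below is a placeholder, since the species discount and the disutility of agonizing are personal judgment calls, and all quantities are assumed to be in the same arbitrary utility units:

```python
# Split only if the (species-discounted) impact gap between the top two
# charities is smaller than the personal disutility of agonizing over the
# choice. All quantities are in the same made-up utility units.
def should_split(gap_per_dollar, donation, species_discount, cost_of_agonizing):
    discounted_gap = gap_per_dollar * donation * species_discount
    return discounted_gap < cost_of_agonizing

# Heavy discount on chicken suffering: the gap shrinks and splitting can win.
print(should_split(gap_per_dollar=0.05, donation=200, species_discount=0.01,
                   cost_of_agonizing=0.5))  # True
# No discount: the gap dominates, so pick the single best charity.
print(should_split(gap_per_dollar=0.05, donation=200, species_discount=1.0,
                   cost_of_agonizing=0.5))  # False
```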

Obviously, this is heavily dependent on how large the utility differences between the top charities are; if it weren't, my comment about discounting the suffering of less intelligent species wouldn't have been relevant.

You didn't mention my sole reason for donating to multiple charities: I consider some of them obligations. For example, if I benefit from a service or have in the past, I consider that I owe it to the ongoing enterprise/community to support it in turn. But once those are satisfied, then it's Givewell's top pick for the serious money.

Even a dyed-in-the-wool consequentialist could reason similarly, given Viliam_Bur's point about encouraging others to donate. Considering certain things "obligatory" can have good consequences like that. But for me, no such indirect route is needed.

I know people at 2 of the 3 charities I support. It seems likely that part of my donation is the emotional support / social proof to those people that I consider their work important enough to fund. It's not clear to me how much value that adds, though, relative to the difference in expected return from those charities.


It does seem like there are significant side effects to donating to multiple charities, in that, if you've never donated to a given charity before, I think that charity would generally prefer that you become a donor at least once.

(I.e., if a charity had a choice between raising $1000 after processing costs from one new donor, or $1000 after processing costs from 100 new donors, I think most charities would choose the second for several potential reasons: a larger donor list, more publicity, more potential warm-call targets, more people who can be referred to allied charities, more people who can be asked to volunteer, and a more reliable source of funding in general. Single donors provide sporadic funding, whereas a large number of donors can provide a more regular stream of income; I think this is also why charities like people who have promised monthly donations.)

However, if you are absolutely certain you are NEVER going to give to those charities again, or that you gave money to charity based solely on calculations and are utterly immune to charity advertising, then you don't want to add yourself to the lists, because the charity will waste time, money, and effort trying to contact you for additional donations that are not forthcoming. In that case, it might actually be net beneficial to give anonymously or to give to as few charities as possible.

This leads me to what feels like a somewhat unusual conclusion: if you're sure that donating all your money to one charity is beneficial, then it is. If you aren't sure, and are sort of thinking "Well, maybe I should and maybe I shouldn't, they're all good...", then you are probably more susceptible to advertising from both and would likely donate more money overall by getting exposure to as much charity advertising as possible, so you should probably do that. If you simply feel you don't know enough about the charities' work to make a firm decision, then spreading your donations out is also an easy way to get charities happily sending you more information about themselves.

I'm not sure if this is really relevant, but there's the possibility of some diversification effect playing a role. It could be relevant if the spending of the different charities is somewhat correlated. This is pure speculation on my part, of course, since I have no idea how to effectively compute such a quantity.

even if you were risk-averse in lives saved, which I do not think you should be, you should give all your donation to the charity that most aids the global diversification program. Splitting your donations implies being risk-averse in what you personally achieve, which is perverse.

Being risk-averse with respect to wealth is reasonable, and is empirically verified to be the case for most people. Utility of wealth is a special case of the more general concept of utility over outcomes. Risk aversion is reasonable for wealth because the risk is personal. The risk that the donation with the highest expected number of lives saved in fact saves fewer lives than another donation is not a personal risk. So I agree that, assuming accurate information about the probabilities, you should donate to get the maximum expected bang for the buck.
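A small sketch of why the two kinds of risk aversion come apart. The charities, payoffs, and square-root utility below are all invented for illustration: risk aversion over the lives you personally save favors splitting, while risk aversion over the global total, of which your donation is a tiny marginal part, gives the same allocation as plain expected-value maximization.

```python
import math

# Hypothetical setup: charity A saves 1 life per $100 with certainty; charity B
# saves 1 life per $40 but only works half the time (so B has the higher
# expected value). Utility is the square root of lives saved, taken either over
# the lives *you* caused or over the global total.
BUDGET = 1000
BASELINE = 10_000  # lives saved globally by everyone else (assumed)

def expected_utility(dollars_to_a, scope):
    eu = 0.0
    for b_works, p in ((True, 0.5), (False, 0.5)):
        my_lives = dollars_to_a / 100 + ((BUDGET - dollars_to_a) / 40 if b_works else 0)
        outcome = my_lives if scope == "personal" else BASELINE + my_lives
        eu += p * math.sqrt(outcome)
    return eu

for scope in ("personal", "global"):
    best = max(range(0, BUDGET + 1, 100), key=lambda a: expected_utility(a, scope))
    print(f"{scope}: ${best} to the safe charity A, ${BUDGET - best} to the risky charity B")
# The personal scope picks a split; the global scope sends everything to the
# higher-EV charity B, matching plain expected-value maximization.
```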

you should give all your donation to the charity that most aids the global diversification program. Splitting your donations implies being risk-averse in what you personally achieve, which is perverse.

Well, you have to have a very bizarre utility function, for sure. ;)

even if you were risk-averse in lives saved, which I do not think you should be

I'm not sure about this point. I can imagine having a preference for saving at least X lives, versus an outcome with equal mean but a more spread-out probability distribution.

I can imagine having a preference for saving at least X lives

I feel like you've got a point here but I'm not quite getting it. Our preferences are defined over outcomes, and I struggle to see how "saving X lives" can be seen as an outcome - I see outcomes more along the lines of "X number of people are born and then die at age 5, Y number of people are born and then die at age 70". You can't necessarily point to any individual and say whether or not they were "saved".

I generally think of "the utility of saving 6 lives" as a shorthand for something like "the difference in utility between (X people die at age 5, Y people die at age 70) and (X-6 people die at age 5, Y+6 people die at age 70)".

We'd have to use more precise language if that utility varies a lot for different choices of X and Y, of course.