It is a common belief within the Effective Altruism movement that you should not diversify charity donations when your donation is small compared to the size of the charity. This is counter-intuitive, and most people disagree with it. A Mathematical Explanation of Why Charity Donations Shouldn't Be Diversified has already been written, but it uses a simplistic model. Perhaps you're uncertain about which charity is best; perhaps charities are not continuous, let alone differentiable, and a donation is worthless unless it gives the charity enough money to finally afford another project; perhaps your utility function is nonlinear; and, to top it all off, perhaps rather than accepting the standard idea of expected utility, you are risk-averse.

Standard Explanation:

If you are too lazy to follow the link, or you just want to see me rehash the same argument, here's a summary.

The utility of a donation is differentiable. That is to say, if donating one dollar gives you one utilon, donating another dollar will give you close to one utilon. Not exactly the same, but close. This means that, for small donations, utility can be approximated as a linear function of the amount donated. In that case, the best way to donate is to find the charity with the highest slope and donate everything you can to it. Since the amount you donate is small compared to the size of the charity, a first-order approximation will be fairly accurate: the amount of good you actually do with that strategy is close to what the approximation predicts, which is at least as much as it predicts for any other strategy, which in turn is close to what those strategies would actually accomplish. So even if this strategy is sub-optimal, it's at least very close.
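To make that concrete, here's a minimal sketch with invented numbers (the "lives saved" curves are made up for illustration; nothing here is data about real charities):

```python
import numpy as np

# Hypothetical concave "lives saved" curves for two charities as functions of
# their total budgets. All numbers are invented for illustration.
def lives_a(budget):
    return 2000 * np.log1p(budget / 1e6)   # diminishing returns

def lives_b(budget):
    return 1000 * np.log1p(budget / 1e6)

BUDGET_A, BUDGET_B, DONATION = 10_000_000, 5_000_000, 100

def extra_lives(dollars_to_a):
    """Extra lives saved (vs. not donating) if dollars_to_a goes to A
    and the rest of the $100 goes to B."""
    before = lives_a(BUDGET_A) + lives_b(BUDGET_B)
    after = (lives_a(BUDGET_A + dollars_to_a)
             + lives_b(BUDGET_B + DONATION - dollars_to_a))
    return after - before

for split in (0, 25, 50, 75, 100):
    print(f"${split:>3} to A, ${DONATION - split:>3} to B: "
          f"{extra_lives(split):.6f} extra lives")

# Because $100 is tiny next to the budgets, each curve is locally linear, so
# the best strategy is the corner: everything to the charity with the higher
# slope (here A, since 2000/11e6 > 1000/6e6 lives per dollar).
```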

Corrections to Account for Reality:

Uncertainty:

Uncertainty is simple enough. Just replace utility with expected utility. Everything will still be continuous, and the reasoning works pretty much the same.

Nonlinear Utility Function:

If your utility function is nonlinear, that's fine as long as it's differentiable. Perhaps saving a million lives isn't a million times better than saving one, but saving the millionth life is about as good as saving the one after it, right? Maybe each additional person counts for a little less, but it's not as though the first million all matter equally and then you stop caring about anyone after that.

In this case, the effect of the charity is differentiable with respect to the donation, and the utility is differentiable with respect to the effect of the charity, so the utility is differentiable with respect to the donation.
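Spelled out, that last step is just the chain rule:

```latex
% E(d): the charity's effect as a function of your donation d
% U(E): your utility as a function of that effect
\frac{dU}{dd} \;=\; \frac{dU}{dE}\cdot\frac{dE}{dd}
\qquad\Longrightarrow\qquad
U(d) \approx U(0) + \left.\frac{dU}{dd}\right|_{d=0} d \quad\text{for small } d.
```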

Risk-Aversion:

If you're risk-averse, it gets a little more complicated.

In this case, you don't use expected utility. You use something else, which I will call meta-utility. Perhaps it's expected utility minus the standard deviation of utility. Perhaps it's expected utility that largely ignores the extreme tails. Whatever it is, it's a function from a random variable, representing all the possibilities of what could happen, to the real numbers. Strictly speaking, you only need an ordering, but that's not good enough here, since it needs to be differentiable.

Differentiability is more confusing in this case. It depends on the metric you're using. The one we'll use here is that a sufficiently small probability of a given change, or a given probability of a sufficiently small change, counts as a small change. For example, if you only care about the median utility, your meta-utility isn't differentiable. If I flip a coin and you win a million dollars if it lands on heads, then you will count that as worth a million dollars if the coin is slightly weighted towards heads, and nothing if it's slightly weighted towards tails, no matter how close it is to being fair. But that's not realistic. You can't track probabilities that precisely. You might care less about the tails, so that only things in the 40%–60% range matter much, but you're going to pick something continuous. In fact, I think we can safely say that you're going to pick something differentiable. If I add a 0.1% chance of saving a life given some condition, it will make about the same difference as adding another 0.1% chance given the same condition. If you're risk-averse, you'd care more about a 0.1% chance of saving a life if it takes effect during the worst-case scenario than during the best case, but you'd still care about as much for a 0.1% chance of saving a life during the worst case as for upgrading it to saving two lives in that case.

Once you accept that it's differentiable, the same reasoning follows as with expected utility. A differentiable function of a differentiable function is differentiable, so the meta-utility of a donation is differentiable with respect to the amount donated.
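For concreteness, one meta-utility of the "expected utility minus the standard deviation" kind mentioned above could look like this (just an illustrative choice, not a recommendation):

```latex
% X: the random outcome, U: your utility function, k >= 0: how risk-averse you are
M(X) \;=\; \mathbb{E}\!\left[U(X)\right] \;-\; k\sqrt{\operatorname{Var}\!\left(U(X)\right)}
```

Both terms change smoothly as a small donation nudges the distribution of X, so this M is differentiable with respect to the amount donated.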

To make the reasoning clearer, here's an example:

Charity A saves one life per grand. Charity B saves 0.9 lives per grand. Charity A has ten million dollars, and Charity B has five million. One or more of these charities may be fraudulent, and not actually doing any good. You have $100, and you can decide where to donate it.

The naive view is to split the $100, since you don't want to risk spending it all on something fraudulent. That makes sense if you care about how many lives you personally save, but not if you care about how many people die overall. Those sound like the same thing, but they're not.

If you donate everything to Charity A, it has $10,000,100 and Charity B has $5,000,000. If you donate half and half, Charity A has $10,000,050 and Charity B has $5,000,050. It's a little more diversified. Not much more, but you're only donating $100. Maybe the diversification outweighs the good, maybe not. But if you decide that it is diversifying enough to matter more, why not donate everything to Charity B? That way, Charity A has $10,000,000, and Charity B has $5,000,100. If you were controlling all the money, you'd probably move a million or so from Charity A to Charity B, until it's well and truly diversified. Or maybe it's already pretty close to the ideal and you'd just move a few grand. You'd definitely move more than $100. There's no way it's that close to the optimum. But you only control the $100, so you just do as much as you can with that to make it more diversified, and send it all to Charity B. Maybe it turns out that Charity B is a fraud, but all is not lost, because other people donated ten million dollars to Charity A, and lots of lives were saved, just not by you.
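Here's a rough numerical check of that example. The fraud probabilities and the risk weight k are made up for illustration; the point is just that the best allocation sits at a corner, never at the 50/50 split:

```python
import math

# Assumed probabilities that each charity is genuine, and the $-to-lives rates
# from the example above. "Utility" here is simply total lives saved, counting
# everyone's donations, not just ours.
P_A_LEGIT, P_B_LEGIT = 0.9, 0.8
LIVES_PER_DOLLAR_A, LIVES_PER_DOLLAR_B = 1 / 1000, 0.9 / 1000
BUDGET_A, BUDGET_B, DONATION = 10_000_000, 5_000_000, 100

def meta_utility(dollars_to_a, k):
    """Expected lives saved minus k standard deviations (one possible
    risk-averse meta-utility)."""
    lives_a = LIVES_PER_DOLLAR_A * (BUDGET_A + dollars_to_a)
    lives_b = LIVES_PER_DOLLAR_B * (BUDGET_B + DONATION - dollars_to_a)
    mean = P_A_LEGIT * lives_a + P_B_LEGIT * lives_b
    var = (P_A_LEGIT * (1 - P_A_LEGIT) * lives_a ** 2
           + P_B_LEGIT * (1 - P_B_LEGIT) * lives_b ** 2)
    return mean - k * math.sqrt(var)

for k in (0.0, 1.0, 3.0):
    scores = {x: meta_utility(x, k) for x in (0, 50, 100)}
    best = max(scores, key=scores.get)
    margin = max(scores[0], scores[100]) - scores[50]
    print(f"k={k}: best to send ${best} of the $100 to A "
          f"(best corner beats the 50/50 split by {margin:.4f})")
```

Raising k eventually flips which corner wins, but over a $100 range the meta-utility is essentially linear in the split, so one corner or the other comes out ahead of splitting.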

Discontinuity:

The final problem to look at is that the effects of donations aren't continuous. The place I've seen this come up most is in discussions of vegetarianism. If you stop eating meat, it's probably not going to make enough of a difference to keep a store from ordering another crate of meat, which means exactly the same number of animals are slaughtered.

Unless, of course, you were the straw that broke the camel's back, and you did keep a store from ordering a crate of meat, and you made a huge difference.

There are times when you might be able to figure that out beforehand. If you're deciding whether or not to vote, and you're not in a battleground state, you know you're not going to cast the deciding vote, because you have a fair idea of who will win and by how much. But you have no idea at what point a store will order another crate of meat, or when a charity will be able to send another crate of mosquito nets to Africa, or something like that. If you make a graph of the number of crates a charity sends by percentile, you'll get a step function: a certain chance of sending 500 crates, a certain chance of sending 501, and so on. Your donation just shifts the whole thing by epsilon, so each shipment becomes a little more likely to be made. What actually happens isn't continuous with respect to your donation, but you're uncertain, and taking what happens as a random variable, it varies continuously with your donation.
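Here's a small sketch of that, with an invented crate cost and an invented guess at the charity's balance:

```python
import numpy as np

CRATE_COST = 5_000        # assumed cost of one crate of mosquito nets
BASE_FUNDS = 2_503_000    # assumed funds the charity has before your donation
rng = np.random.default_rng(0)

def crates_shipped(total_funds):
    """Realized outcome: a step function of the money available."""
    return np.floor(total_funds / CRATE_COST)

# You don't know the charity's exact balance, so model it with some spread.
uncertain_base = BASE_FUNDS + rng.uniform(-20_000, 20_000, size=200_000)

for donation in (0, 25, 50, 75, 100):
    expected = crates_shipped(uncertain_base + donation).mean()
    print(f"donate ${donation:>3}: expected crates ≈ {expected:.4f}")

# The realized number of crates jumps in whole units, but the *expected*
# number rises smoothly by about donation / CRATE_COST, which is all the
# continuity the argument above needs.
```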

A few other notes:

Small Charities:

In the case of a sufficiently small charity or large donation, the argument is invalid. It's not that it takes more finesse like those other things I listed. The conclusion is false. If you're paying a good portion of the budget, and the marginal effects change significantly due to your donations, you should probably donate to more than one charity even if you're not risk-averse and your utility function is linear.

I would expect that the next best charity you manage to find would be worse by more than a few percent, so I really doubt it would be worth diversifying unless you personally are responsible for more than a third of the donations.

An example of this is keeping money for yourself. The hundredth dollar you spend on yourself has about ten times the effect the thousandth does, and the entire budget is donated by you. The only time you shouldn't split between yourself and charity is when the marginal benefit of the last dollar you spend on yourself is still higher than what that dollar could do at a charity.
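Written out as a rule (assuming a concave, roughly logarithmic utility for personal spending, which is an assumption rather than anything exact):

```latex
% u(x): utility of spending x on yourself (assumed concave, e.g. u(x) = a \log x)
% m:    utilons per marginal dollar at the best charity
u'(x^{\ast}) = m
% spend x^* on yourself and donate everything above it;
% for u(x) = a \log x this gives x^* = a / m.
```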

Another example is avoiding animal products. Avoiding steak is much more cost-effective than avoiding milk, but once you've stopped eating meat, you're stuck with things like avoiding milk.

Timeless Decision Theory:

If other people are going to make decisions similar to yours, your effective donation is larger, so the caveats about small charities apply. That being said, I don't think this is really much of an issue.

If everyone chooses independently, even if most of their choices are correlated, the end result will be that the charities get just enough funding that some people donate to one and others donate to another. If this happens, chances are it would be worthwhile for a few people to actually split their donations, but it won't make a big difference. They might as well just donate it all to one.

I think this will only become a problem if you're just donating to the top charity on GiveWell, regardless of how closely they rated second place, or you're just donating based purely on theory, and you have no idea if that charity is capable of using more money.


Comments:

Moral pluralism or uncertainty might give a reason to construct a charity portfolio which serves multiple values, as might emerge from something like the parliamentary model.

DanielLC:
I still don't think it would be a good idea to diversify. If the parliament doesn't know the budget of each charity beforehand, they would be able to improve on the normal decisions by betting on it. For example, if Alice wants to donate to Charity A, and Bob wants to donate to Charity B, they could agree to donate half to each, but they'd be better off donating to the one that gets less money.
Nisan:
A parliament that can make indefinitely binding contracts turns into a VNM-rational agent. But a parliament that can't make binding contracts might always diversify.
Squark:
If the parliament consists of UDT agents then effectively it can make binding contracts.

This post has a number of misconceptions that I would like to correct.

It is a truism within the Effective Altruism movement that you should not diversify charity donations.

Not really. Timeless decision theory considerations suggest that you actually should be splitting your donations, because globally we should be splitting our options. I think many other effective altruists take this stance as well. (See below for explanation.)

Nonlinear Utility Function:

If your utility function is nonlinear, this is fine as long as it's differentiable.

Not necessar...

Squark:
The TDT argument is not solid (see the OP's reply in the post)

If you're risk-averse, it gets a little more complicated. In this case, you don't use expected utility

As long as you're a rational agent, you have to use expected utility. See VNM theorem.

solipsist:
To be clear: your VNM utility function does not have to correspond directly to utilitarian utility if you are not a strict utilitarian. Even if you are a strict utilitarian, diversifying donations can still, in theory, be VNM rational. E.g.: A trustworthy Omega appears. He informs you that if you are not personally responsible for saving 1,000 QALYs, he will destroy the earth. If you succeed, he will leave the earth alone. Under these contrived conditions, the amount of good you are responsible for is important, and you should be very risk-averse with that quantity. If there's even a 1 in a million risk that the 7 effective charities you donated to were all, by coincidence, frauds, you would be well advised to donate to an eighth (even though the eighth charity will not be as effective as the other seven).
Squark:
Diversifying donations is not rational as long as the marginal utility per dollar generated by a charity is affected negligibly by the small sum you are donating. This assumption seems correct for a large class of utility functions under realistic conditions.
DanielLC:
There are serious problems with not using expected utility, but even if you still decide to be risk-averse, this doesn't change the conclusion that you should only donate to one charity.
Lumifer:
That seems to be a rather narrow and not very useful definition of a "rational agent" as applied to humans.
Squark:
I think it is the correct definition in the sense that you should behave like one.
Lumifer:
Why should I behave as if my values satisfy the VNM axioms? Rationality here is typically defined as either epistemic (make sure your mental models match reality well) or instrumental (make sure the steps you take actually lead to your goals). Defining rationality as "you MUST have a single utility function which MUST follow VNM" doesn't strike me as a good idea.
Squark:
Because the VNM axioms seem so intuitively obvious that violating them strongly feels like making an error. Of course I cannot prove them without introducing another set of axioms which can be questioned in turn etc. You always need to start with some assumptions. Which VNM axiom would you reject?
asr:
I would reject the completeness axiom. I often face choices where I don't know which option I prefer, but where I would not agree that I am indifferent. And I'm okay with this fact. I also reject the transitivity axiom -- intransitive preference is an observed fact for real humans in a wide variety of settings. And you might say this is irrational, but my preference are what they are.
Squark:
Can you give an example of situations A, B, C for which your preferences are A > B, B > C, C > A? What would you do if you need to choose between A, B, C?
asr:
Sure. I'll go to the grocery store and have three kinds of tomato sauce, and I'll look at A and B, and pick B, then B and C, pick C, and C and A, and pick A. And I'll stare at them indecisively until my preferences shift. It's sort of ridiculous -- it can take something like a minute to decide. This is NOT the same as feeling indifferent, in which case I would just pick one and go. I have similar experiences when choosing between entertainment options, transport, etc. My impression is that this is an experience that many people have. If you google "intransitive preference" you get a bunch of references -- this one has cites to the original experiments: http://www.stanford.edu/class/symbsys170/Preference.pdf
Squark:
It seems to me that what you're describing are not preferences but spur of the moment decisions. A preference should be thought of as in CEV: the thing you would prefer if you thought about it long enough, knew enough, were more the person you want to be etc. The mere fact you somehow decide between the sauces in the end suggests you're not describing a preference. Also I doubt that you have terminal values related to tomato sauce. More likely, your terminal values involve something like "experiencing pleasure" and your problem here is epistemic rather than "moral": you're not sure which sauce would give you more pleasure.
asr:
You are using preference to mean something other than I thought you were. I'm not convinced that the CEV definition of preference is useful. No actual human ever has infinite time or information; we are always making decisions while we are limited computationally and informationally. You can't just define away those limits. And I'm not at all convinced that our preferences would converge even given infinite time. That's an assumption, not a theorem. When buying pasta sauce, I have multiple incommensurable values: money, health, and taste. And in general, when you have multiple criteria, there's no non-paradoxical way to do rankings. (This is basically Arrow's theorem). And I suspect that's the cause for my lack of preference ordering.
Squark:
Of course. But rationality means your decisions should be as close as possible to the decisions you would make if you had infinite time and information. Money is not a terminal value for most people. I suspect you want money because of the things it can buy you, not as a value in itself. I think health is also instrumental. We value health because illness is unpleasant, might lead to death and generally interferes with taking actions to optimize our values. The unpleasant sensations of illness might well be commensurable with the pleasant sensations of taste. For example you would probably pass up a gourmet meal if eating it implies getting cancer.
Lumifer:
However you can not know what decisions you would make if you had infinite time and information. You can make guesses based on your ideas of convergence, but that's about it.
Squark:
A Bayesian never "knows" anything. She can only compute probabilities and expectation values.
Lumifer:
Can she compute probabilities and expectation values with respect to decisions she would make if she had infinite time and information?
Squark:
I think it should be possible to compute probabilities and expectation values of absolutely anything. However to put it on a sound mathematical basis we need a theory of logical uncertainty.
Lumifer:
On the basis of what do you think so? And what entity will be doing the computing?
Squark:
I think so because conceptually a Bayesian expectation value is your "best effort" to estimate something. Since you can always do your "best effort" you can always compute the expectation value. Of course, for this to fully make sense we must take computing resource limits into account. So we need a theory of probability with limited computing resources aka a theory of logical uncertainty.
Lumifer:
Not quite. Conceptually a Bayesian expectation is your attempt to rationally quantify your beliefs which may or may not involve best efforts. That requires these beliefs to exist. I don't see why it isn't possible to have no beliefs with regard to some topic. That's not very meaningful. You can always output some number, but so what? If you have no information you have no information and your number is going to be bogus.
Squark:
If you don't believe that the process of thought asymptotically converges to some point called "truth" (at least approximately), what does it mean to have a correct answer to any question? Meta-remark: Whoever is downvoting all of my comments in this thread, do you really think I'm not arguing in good faith? Or are you downvoting just because you disagree? If it's the latter, do you think it's good practice or you just haven't given it thought?
Lumifer:
There is that thing called reality. Reality determines what constitutes a correct answer to a question (for that subset of questions which actually have "correct" answers). I see no reason to believe that the process of thought converges at all, never mind asymptotically to 'some point called "truth"'.
Squark:
How do you know anything about reality if not through your own thought process?
Lumifer:
Through interaction with reality. Are you arguing from a brain-in-the-vat position?
Squark:
Interaction with reality only gives you raw sensory experiences. It doesn't allow you to deduce anything. When you compute 12 x 12 and it turns out to be 144, you believe 144 is the one correct answer. Therefore you implicitly assume that tomorrow you won't somehow realize the answer is 356.
Lumifer:
And what does that have to do with knowing anything about reality? Your thought process is not a criterion of whether anything is true.
Squark:
But it is the only criterion you are able to apply.
Lumifer:
Not quite. I can test whether a rock is hard by kicking it. But this byte-sized back-and-forth doesn't look particularly useful. I don't understand where you are coming from -- to me it seems that you consider your thought processes primary and reality secondary. Truth, say you, is whatever the thought processes converge to, regardless of reality. That doesn't make sense to me.
Squark:
When you kick the rock, all you get is a sensory experience (a quale, if you like). You interpret this experience as a sensation arising from your foot. You assume this sensation is the result of your leg undergoing something called "collision" with something called "rock". You deduce that the rock probably has a property called "hard". All of those are deductions you do using your model of reality. This model is generated from memories of previous experiences by a process of thought based on something like Occam's razor.
Lumifer:
OK, and how do we get from that to 'the process of thought asymptotically converges to some point called "truth"'?
Squark:
Since the only access to truth we might have is through our own thought, if the latter doesn't converge to truth (at least approximately) then truth is completely inaccessible.
Lumifer:
Why not? Granted that we have access to reality only through mental constructs and so any approximations to "the truth" are our own thoughts, but I don't see any problems with stating that sometimes these mental constructs adequately reflect reality (=truth) and sometimes they don't. I don't see where this whole idea of asymptotic convergence is coming from. There is no guarantee that more thinking will get you closer to the truth, but on the other hand sometimes the truth is right there, easily accessible.
Squark:
I apologize but this discussion seems to be going nowhere.
Lumifer:
Agreed.
Eugine_Nier:
So you care more about following the VNM axioms than about which utility function you are maximizing? That behavior is itself not VNM rational.
Squark:
If you don't follow the VNM axioms you are not maximizing any utility function.
Eugine_Nier:
So why do you care about maximizing any utility function?
Squark:
What would constitute a valid answer to that question, from your point of view?
Eugine_Nier:
I can't think of one. You're the one arguing for what appears to be an inconsistent position.
Squark:
What is the inconsistency?
Eugine_Nier:
Saying one should maximize a utility function, but not caring which utility function is maximized.
Squark:
Who said I don't care which utility function is maximized?
Lumifer:
"feels like" is a notoriously bad criterion :-) Before we even get to VNM axioms I would like to point out that humans do not operate in a VNM setting where a single consequentialist entity is faced with a sequence of lotteries and is able to express his preferences as one-dimensional rankings. Haven't there been a lot of discussion about the applicability of VNM to human ethical systems? It looks like a well-trodden ground to me.
DanielLC:
He doesn't express the entire ranking, but he does still have to choose the best option.
Squark:
What would be a good criterion? You cannot pull yourself up by your bootstraps. You need to start from something. How would you want to operate? You mentioned instrumental rationality. I don't know how to define instrumental rationality without the VNM setting (or something similar).
Lumifer:
Mismatch with reality. Well, the locally canonical definition is this: I see nothing about VNM there.
Squark:
I'm not following. This is a nice motto, but how do you make a mathematical model out of it?
Lumifer:
Well, you originally said " violating them strongly feels like making an error. " I said that "feels like" is a weak point. You asked for an alternative. I suggested mismatch with reality. As in "violating X leads to results which do not agree with what we know of reality". We were talking about how would a human qualify as a "rational agent". I see no need to make mathematical models here.
Squark:
This only makes sense in an epistemic context, not an instrumental one. How can a way of making decisions "not agree with what we know of reality"? Note that I'm making a normative statement (what one should do), not a descriptive statement ("people usually behave in such-and-such a way"). There is always a need to make mathematical models, since before you have a mathematical model your understanding is imprecise. For example, a mathematical model allows you to prove that under certain assumptions diversifying donations is irrational.
Lumifer:
Ever heard of someone praying for a miracle? Bollocks! I guess next you'll be telling me I can not properly understand anything which is not expressed in numbers... :-P
Squark:
There is nothing intrinsic to the action of "praying for a miracle" which "disagrees with reality". It's only when we view this action in the context of a decision theory which says e.g. "choose the action which leads to maximal expected utility under the Solomonoff prior" can we say the action is "irrational" because, in fact, it does not lead to maximal expected utility. But in order to make this argument you need to assume a decision theory.
Lumifer:
Given the definition of a miracle, I think there is, but anyway -- I'm willing to go out on a limb, take the shortcut, and pronounce praying for a miracle to fail instrumental rationality. Without first constructing a rigorous mathematical model of the expected utility under the Solomonoff prior. YMMV, of course.
Bobertron:
Ergo, if you're risk-averse, you aren't a rational agent. Is that correct?
Squark:
Depends how you define "risk averse". When utility is computed in terms of another parameter, diminishing returns result in what appears like "risk averseness". For example, suppose that you assign utility 1u to having $1000, utility 3u to having $4000, and utility 4u to having $10000. Then, if you currently have $4000 and someone offers you a lottery in which you have a 50% chance of losing $3000 and a 50% chance of gaining $6000, you will reject it (in spite of an expected gain of $1500), since your expected utility for not participating is 3u whereas your expected utility for participating is 2.5u.
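(Re-checking that arithmetic:)

```python
# Same numbers as in the example above.
utility = {1_000: 1, 4_000: 3, 10_000: 4}       # utilons at each wealth level

decline = utility[4_000]                         # keep the $4,000 for certain
accept = 0.5 * utility[1_000] + 0.5 * utility[10_000]   # 50/50 lose $3k / gain $6k

print(decline, accept)   # 3 vs 2.5: rejecting the bet maximizes expected utility
```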

I worry that pushing on this will make smaller donors not want to donate. All eggs in one basket makes people nervous. And they are in a situation where we don't want any additional negative affect; they are already painfully giving money away.

Risk aversion is not unselfish. It implies a willingness to trade away expected good for greater assurance that you were responsible for good. I wouldn't fault you for that choice, but it's not effective altruism in the egoless sense.

solipsist:
I'd like to reemphasize that if you donate to multiple effective charities you are doing awesome stuff. Switching from average charities to a diversified portfolio of effective charities can make you hugely more effective -- it's like turning yourself into 10 people. Switching from a diversified portfolio of effective charities to the single most effective charity might make you maybe a few percentage points more effective. That's not nearly as important as doing whatever makes your brain enthusiastic about effective altruism. *The point I made in the parent comment is not of practical concern. *I am making up these numbers -- don't quote me on this.
DanielLC:
Not generally. It's usually just there to counteract overconfidence bias. You want something that will never fail instead of something that will fail 1% of the time, because something that you think will never fail will only fail about 1% of the time, and something that you think will fail 1% of the time will fail around 10% of the time. It's much more than the apparent 1% advantage. If you donate all your money to Deworm the World because you want a lot of good to still get done if SCI turns out to be a fraud, you're not being selfish. If you donate half your money because you personally want to be doing the good even if one of them is a fraud, then you're selfish. I alluded to this with the sentence:

There are timeless decision theory and coordination-without-communication issues that make diversifying your charitable contributions worthwhile.

In short, you're not just allocating your money when you make a contribution, but you're also choosing which strategy to use for everyone who's thinking sufficiently like you are. If the optimal overall distribution is a mix of funding different charities (say, because any specific charity has only so much low-hanging fruit that it can access), then the optimal personal donation can be mixed.

You can model this by ...

Squark:
This is already addressed in the post (a late addition maybe?)
ThrustVectoring:
Yeah, it wasn't there when I posted the above. The "donate to the top charity on GiveWell" plan is a very good example of what I was talking about.
Squark:
This plan can work if GiveWell adjusts its top charity as a function of incoming donations sufficiently fast. For example, if GiveWell has precomputed the marginal utility per dollar of each charity as a function of its budget, and it has access to a continuously updated budget figure for each charity, it can create an automatically updated "top charities" page.
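A sketch of what that could look like, with invented marginal-utility curves and budgets (this is not GiveWell's actual methodology):

```python
# Invented diminishing-returns curves; "base_mu" and budgets are made up.
def marginal_utility_per_dollar(charity):
    return charity["base_mu"] / (1 + charity["budget"] / 1e7)

charities = [
    {"name": "A", "base_mu": 1.0, "budget": 10_000_000},
    {"name": "B", "base_mu": 0.9, "budget": 5_000_000},
]

def top_charity():
    """Recomputed whenever a budget figure updates."""
    return max(charities, key=marginal_utility_per_dollar)

def donate(amount):
    best = top_charity()        # each donor gives the whole amount to the current top pick
    best["budget"] += amount
    return best["name"]

print([donate(1_000_000) for _ in range(10)])   # the top pick shifts as budgets fill up
```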

I think that if the "a few other notes" section and the comments on moral parliaments are integrated more cleanly into the post, it will well deserve to be in main. Don't know why it only got +2.

Most people set aside an amount of money they spend on charity, and an amount they spend on their own enjoyment. It seems to me that whatever reasoning is behind splitting money between charity and yourself can also support splitting money between multiple charities.

DanielLC:
There is a reason for this that I probably should have made clearer in the article. I'll go back and fix it. The reasoning assumes that your donation is small compared to the size of the charity -- for example, donating $1,000 a year to a charity that spends $10,000,000 a year.

Keeping money for yourself can be thought of as a charity. Even if you're partially selfish and you value yourself as much as a thousand strangers, the basic reasoning still works the same. The reason you keep some for yourself is that it's a small charity: the amount you donate makes up 100% of its budget. As a result, it cannot be approximated as a linear function. A log function seems to work better.

I should add that there is still something about this that's often overlooked. If you're spending money on yourself because you value your happiness more than others', the proper way to donate is to work out how much money you have to have before the marginal benefit to your happiness is less than the amount of happiness that would be created by donating to others, and donate everything after that.

There are other reasons to keep money for yourself. Keeping yourself happy can improve your ability to work and, by extension, make money. The thought of having more money can be an incentive to work. Nonetheless, I don't think you should be donating anywhere near a fixed fraction of your income. I mean, it's not going to hurt much if you decide to only donate 90% no matter how rich you get, but if you don't feel like you can spare more than 10% now, and you become as rich as Bill Gates, you shouldn't be spending 90% of your money on yourself.
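A toy version of that cut-off rule, with made-up numbers for the self-weighting and the charity's effectiveness:

```python
# Made-up numbers: you weigh your own happiness as much as SELF_WEIGHT strangers',
# your happiness from annual spending x is roughly SELF_WEIGHT * log(x), and a
# donated dollar buys CHARITY_MU "stranger-utilons" at the best charity.
SELF_WEIGHT = 1000
CHARITY_MU = 0.01

def marginal_self_benefit(spending):
    return SELF_WEIGHT / spending        # derivative of SELF_WEIGHT * log(spending)

# Donate everything past the point where the two marginal benefits cross.
threshold = SELF_WEIGHT / CHARITY_MU
print(f"spend up to ${threshold:,.0f} on yourself, donate everything above that")
print(f"marginal benefit there: {marginal_self_benefit(threshold):.4f}, matching CHARITY_MU")

# Note the rule gives a fixed spending level, not a fixed fraction of income.
```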
Bobertron:
Oh, interesting. I assumed the reason I keep anything beyond the bare minimum to myself is that I'm irrationally keeping my own happiness and the well-being of strangers as two separate, incomparable things. I probably prefer to see myself as irrational compared to seeing myself as selfish. The concept I was thinking of (but didn't quite remember) when I wrote the comment was Purchase Fuzzies and Utilons Separately.