There is a standard argument against diversification of donations, popularly explained by Steven Landsburg in the essay Giving Your All. This post is an attempt to communicate a narrow special case of that argument in a form that better resists misinterpretation, for the benefit of people with a bit of mathematical training. Understanding this special case in detail might be a useful stepping stone toward understanding the more general argument. (If you already agree that one should donate only to the charity that provides the greatest marginal value, and that it makes sense to compare the marginal value of different charities, there is probably no point in reading this post.)1
Suppose you are considering two charities, one that saves antelopes and one that saves babies. Depending on how much funding these charities secure, they are able to save respectively A antelopes and B babies, so the outcome can be described by a point (A,B) that specifies both pieces of data.
Let's say you have a complete transitive preference over possible values of (A,B): that is, you can compare any two points, and if you prefer (A1,B1) over (A2,B2) and also (A2,B2) over (A3,B3), then you prefer (A1,B1) over (A3,B3). Let's further suppose that this preference can be represented by a sufficiently smooth real-valued function U(A,B), such that U(A1,B1)>U(A2,B2) precisely when you prefer (A1,B1) to (A2,B2). U doesn't need to be a utility function in the standard sense, since we won't be considering uncertainty; it only needs to represent an ordering over individual points, so let's call it "preference level".
Let A(Ma) be the number of antelopes saved by the Antelopes charity when it attains funding level Ma, and B(Mb) the corresponding function for the Babies charity. (For simplicity, let's work with U, A, B, Ma and Mb as variables that depend on each other in the specified ways.)
You are considering a decision to donate. At the moment, the charities have already secured amounts of money Ma and Mb, sufficient to save A antelopes and B babies, resulting in your preference level U. You have a relatively small amount of money dM that you want to distribute between these charities: dM is small compared to Ma and Mb, and if donated to either charity, it results in changes of A and B that are small compared to A and B, and in a change of U that is small compared to U.
Let's say you split the sum of money dM by giving one part, dMa=s·dM (0≤s≤1), to A and the remaining part, dMb=(1−s)·dM, to B. The question is then what value of s you should choose. Donating everything to A corresponds to s=1 and donating everything to B corresponds to s=0, with values in between corresponding to splitting the donation.
Donating s·dM to A raises its funding level to Ma+dMa, a differential funding level of dMa, so that A+dA = A+(∂A/∂Ma)·dMa = A+(∂A/∂Ma)·s·dM antelopes get saved. The differential number of antelopes saved is thus (∂A/∂Ma)·s·dM, and correspondingly the differential number of babies saved is (∂B/∂Mb)·(1−s)·dM. This results in the change of preference level dU = (∂U/∂A)·dA+(∂U/∂B)·dB = (∂U/∂A)·(∂A/∂Ma)·s·dM+(∂U/∂B)·(∂B/∂Mb)·(1−s)·dM. What you want is to maximize the value of U+dU, and since U is fixed, you want to maximize the value of dU.
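As a sanity check, here is a minimal numerical sketch of the dU formula, with made-up values standing in for the four partial derivatives (all numbers are purely illustrative, not from the post):

```python
def dU(s, dM, dUdA, dAdMa, dUdB, dBdMb):
    """Change in preference level from giving s*dM to charity A
    and (1-s)*dM to charity B, per the formula above."""
    return dUdA * dAdMa * s * dM + dUdB * dBdMb * (1 - s) * dM

# Suppose a unit of money to A yields 3 units of preference level
# (dUdA * dAdMa = 1.0 * 3.0) and a unit to B yields 2 (2.0 * 1.0).
for s in (0.0, 0.5, 1.0):
    print(s, dU(s, dM=100, dUdA=1.0, dAdMa=3.0, dUdB=2.0, dBdMb=1.0))
```

Note that dU is linear in s, which is why, as derived below, the maximum always sits at an endpoint rather than at an interior split.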
Let's interpret some of the terms in that formula to make better sense of it. (∂U/∂A) is the current marginal value of more antelopes getting saved, according to your preference U; correspondingly, (∂U/∂B) is the marginal value of more babies getting saved. (∂A/∂Ma) is the current marginal efficiency of the Antelopes charity at getting antelopes saved for a given unit of money, and (∂B/∂Mb) is the corresponding value for the Babies charity. Together, (∂U/∂A)·(∂A/∂Ma) is the value you get out of donating a unit of money to charity A, and (∂U/∂B)·(∂B/∂Mb) is the same for charity B. These partial derivatives depend on the current values of Ma and Mb, so they reflect only the current situation and its response to relatively small changes.
The parameter you control is s, and dM is fixed (it's all the money you are willing to donate to both charities together) so let's rearrange the terms in dU a bit: dU = (∂U/∂A)·(∂A/∂Ma)·s·dM+(∂U/∂B)·(∂B/∂Mb)·(1−s)·dM = (s·((∂U/∂A)·(∂A/∂Ma)−(∂U/∂B)·(∂B/∂Mb))+(∂U/∂B)·(∂B/∂Mb))·dM = (s·K+L)·dM, where K and L are not controllable by your actions (K = (∂U/∂A)·(∂A/∂Ma)−(∂U/∂B)·(∂B/∂Mb), L = (∂U/∂B)·(∂B/∂Mb)).
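A quick numerical check that the rearranged form (s·K+L)·dM agrees with the original expression for dU (the specific numbers are arbitrary placeholders for the partial derivatives):

```python
# Placeholder values for (dU/dA), (dA/dMa), (dU/dB), (dB/dMb).
dUdA, dAdMa, dUdB, dBdMb = 1.0, 3.0, 2.0, 1.0
dM = 100.0

K = dUdA * dAdMa - dUdB * dBdMb   # difference in marginal value per unit of money
L = dUdB * dBdMb                  # marginal value of charity B per unit of money

for s in (0.0, 0.25, 0.5, 1.0):
    direct = dUdA * dAdMa * s * dM + dUdB * dBdMb * (1 - s) * dM
    rearranged = (s * K + L) * dM
    assert abs(direct - rearranged) < 1e-9
```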
Since dM is positive and s is between 0 and 1, we have two relevant cases in the maximization of dU=(s·K+L)·dM: when K is positive, and when it's negative. If it's positive, then dU is maximized by boosting K's influence as much as possible by setting s=1, that is, donating all of dM to charity A. If it's negative, then dU is maximized by reducing K's influence as much as possible by setting s=0, that is, donating all of dM to charity B.
What does the value of K mean? It's the difference between (∂U/∂A)·(∂A/∂Ma) and (∂U/∂B)·(∂B/∂Mb), that is between the marginal value you get out of donating a unit of money to A and the marginal value of donating to B. The result is that if the marginal value of charity A is greater than the marginal value of charity B, you donate everything to A, otherwise you donate everything to B.
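The whole decision rule then collapses to one comparison; a minimal sketch (the example numbers are invented for illustration):

```python
def best_split(dUdA, dAdMa, dUdB, dBdMb):
    """Return the s maximizing dU = (s*K + L)*dM: the corner solution.
    Arguments stand for the four partial derivatives in the post."""
    K = dUdA * dAdMa - dUdB * dBdMb
    return 1.0 if K > 0 else 0.0  # when K == 0 either corner is equally good

# A's marginal value per unit of money is 1.0*3.0 = 3, B's is 2.0*1.0 = 2,
# so everything goes to A:
print(best_split(1.0, 3.0, 2.0, 1.0))
# Flip A's efficiency down to 1.0 and B wins, so everything goes to B:
print(best_split(1.0, 1.0, 2.0, 1.0))
```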
1: This started as a reply to Anatoly Vorobey, but grew into an explanation that I thought might be useful to others in the future, so I turned it into a post.
There's an additional issue concerning imperfect evaluation.
Suppose we made a charity evaluator based on Statistical Prediction Rules (SPRs), which perform pretty well. There is an issue, though: charities will try to fake the signals that the SPR evaluates, and an SPR is too crude to resist deliberate cheating. Diversification then decreases the payoff for such cheating; sufficient diversification can make it economically non-viable for selfish parties to fake the signals. The same goes for any imperfect evaluation scheme, especially for elaborate processing of the information (statements, explanations, suggestions on how to perform the evaluation, et cetera) originating from the donation recipient.
You just cannot abstract the imperfect evaluation as 'uncertainty' any more than you can abstract a backdoor in a server application as noise on the wire.
Diversification reduces the payoff for appearing better, and therefore reduces the payoff of investing in fake signals of being better. But it also reduces the payoff of investments in actually being better! If a new project that would increase humanitarian impact increases donations enough, then charities can afford to expand such efforts. If donations are insensitive to improvement, then the new project doesn't pay for itself.