Charitable giving decisions are not made simultaneously, so to a certain degree you can look at which charities other people are giving to and assess room for more funding accordingly.
Holden Karnofsky used similar reasoning in his charity recommendations for 2012:
For donors who think of themselves as giving not only to help the charity in question but to help GiveWell, we encourage allocating your dollars in the same way that you would ideally like to see the broader GiveWell community allocate its dollars. If every GiveWell follower follows this principle, we’ll end up with an overall allocation that reflects a weighted average of followers’ opinions of the appropriate allocation. (By contrast, if every GiveWell follower reasons “My personal donation won’t hit diminishing returns, so I’ll just give exclusively to my top choice,” the overall allocation is more likely to end up “distorted.”)
Effective altruists' decisions don't correlate with most philanthropists', but they do correlate with each other. So this may have been an appropriate time to use timeless reasoning.
I think in that case it's less that we're effective altruists and more that we're accepting GiveWell's recommendations.
Randomizing works just as well as diversifying. Since the correlation between people's decisions is far from perfect, it's effectively randomized. No need to do anything yourself.
Alternatively, if you do find yourself in a group with similar preferences to you, you can collude. Failing that, you can assign your entire donation to an individual charity, choosing randomly weighted by your priority, and thus reduce the impact of transaction fees and other per-donor costs on your donations (such as marketing materials to encourage people to donate again).
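For concreteness, here's a minimal sketch of what that weighted randomization could look like; the charity names, weights, and budget are invented for illustration:

```python
import random

# Hypothetical example: your preferred allocation of a $1,000 budget.
# Charity names and weights are invented for illustration.
allocation = {"Charity A": 0.6, "Charity B": 0.3, "Charity C": 0.1}
budget = 1000

# Pick one charity at random, weighted by your preferred split.
# In expectation each charity receives the same amount as under splitting,
# but transaction fees and per-donor costs are incurred only once.
charities = list(allocation)
weights = [allocation[c] for c in charities]
recipient = random.choices(charities, weights=weights, k=1)[0]

print(f"Donate the full ${budget} to {recipient}")
```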
Since the correlation between people's decisions is far from perfect, it's effectively randomized.
I don't follow, can you elaborate?
My decision will correlate with other people's, but there will still be enough variation that it's not likely to result in a charity getting more money than it knows what to do with.
Like DanielLC, I never really understood this application of Timeless Decision Theory. In practice, people's decisions rarely correlate with mine in any meaningful way such that, if I chose to change my action, so would they.
Once EA is a popular enough movement that this begins to become an issue, I expect communication and coordination will be a better answer than treating this like a one-shot problem. Maybe we'll end up with meta-charities as the equivalent of index funds, diversifying altruism across worthy causes without saturating any given one. Maybe the equivalent of GiveWell.org at the time will include estimated funding gaps for its recommended charities and track progress toward filling them, automatically sorting by which has the largest funding gap and the greatest benefit.
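As a toy illustration of what such automatic sorting might look like (all records and figures below are invented):

```python
# Toy records: (name, estimated funding gap in dollars, benefit per dollar).
# All names and figures are invented for illustration.
charities = [
    ("Charity A", 2_000_000, 0.9),
    ("Charity B", 500_000, 1.2),
    ("Charity C", 0, 1.5),  # already saturated: no room for more funding
]

# Drop saturated charities, then rank by funding gap times marginal benefit
# (one crude way to combine "largest funding gap" with "greatest benefit").
open_gaps = [c for c in charities if c[1] > 0]
ranked = sorted(open_gaps, key=lambda c: c[1] * c[2], reverse=True)

for name, gap, benefit in ranked:
    print(f"{name}: gap ${gap:,}, benefit/dollar {benefit}")
```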
I doubt that at any point it will make sense for individuals to personally choose, rank, and donate their own money to charities as if they were choosing the ratios for everyone TDT-style, not least because of the unnecessary redundancy.
EDIT: Upvoted because it is a valid concern. The AMF reached saturation relatively quickly, and may have exceeded the funding it needed. I just doubt the efficiency of this particular solution to the problem.
If all donors to charities A and B were identical to you, then your decision to donate $d to charity A would be equivalent to a decision for all donors' funds to go to charity A rather than charity B
You assume that either all decisions are made simultaneously or that rational donors are insensitive to diminishing marginal returns as they observe greater funding inflows to A, neither of which ought to be the case.
In particular, it wasn't the case this winter, when we observed sufficiently lopsided funding flows into MIRI vs. CFAR and told some of our donors that their next marginal dollar ought to go to CFAR until its fundraising drive closed.
That said, I agree with a lot of what Holden said about this recently (does anyone have a link?), where he pointed out that it would be unfortunate to have many 'rational' donors effectively trying to cancel out each other's allocation splits, and I don't actually object to Holden's commonsense approach (quoted above) of "allocating your dollars in the same way that you would ideally like to see the broader GiveWell community allocate its dollars". Or you could mix the two approaches: give some money in a way that matches what you think the overall distribution should be, and the rest to whichever charity you think is most neglected, provided it is severely enough neglected.
I wouldn't mind seeing a formal analysis of why agents with certain types of noise in their estimates would end up more robust if they didn't donate everything to one charity. Maybe this would arise if we suppose that lots of people are in reality mostly insensitive to diminishing marginal utility and don't compute it very well or very exactly when splitting between charities that all already have "some funding".
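One rough way to make this concrete is a Monte Carlo comparison. The sqrt utility and the shared-noise model below are assumptions chosen for illustration, not a derivation:

```python
import math
import random

def total_utility(funds_a, funds_b):
    # Assumed diminishing returns: sqrt utility for each charity's funding.
    return math.sqrt(funds_a) + math.sqrt(funds_b)

def simulate(strategy, n_donors=1000, trials=500):
    # Two charities with equal true cost-effectiveness (by assumption).
    # Donors share a common estimation error (so their decisions correlate),
    # plus small idiosyncratic noise of their own.
    results = []
    for _ in range(trials):
        shared_error = random.gauss(0.0, 1.0)
        funds_a = funds_b = 0.0
        for _ in range(n_donors):
            estimate = shared_error + random.gauss(0.0, 0.5)
            if strategy == "all_to_best" and estimate > 0:
                funds_a += 1.0
            elif strategy == "all_to_best":
                funds_b += 1.0
            else:  # "split": hedges against the shared error
                funds_a += 0.5
                funds_b += 0.5
        results.append(total_utility(funds_a, funds_b))
    return sum(results) / len(results)

print("all to estimated best:", round(simulate("all_to_best"), 1))
print("even split:           ", round(simulate("split"), 1))
```

Under these assumptions the split strategy scores higher, because the correlated errors make "all to the estimated best" pile nearly everyone's money onto the same charity.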
However, this doesn't take into account timeless decision theory.
I'm not sure I would call it timeless decision theory. After all, you are not concerned with how you yourself would decide under similar circumstances so much as how other people would decide under similar circumstances. Maybe you could call it "spaceless decision theory." Or perhaps the "What if everyone did it" principle.
Jonah, do you think uncertainty about how to prioritize charities and causes is an argument for centralizing or for diversifying your donations?
Here are some tentative thoughts that I haven't run by anyone to check for soundness. They're not genuinely original to me – they've been floating around the effective altruism community in some form or other for a while – I just hadn't thought them through in sufficient detail to take them to their logical conclusion. I'd appreciate any feedback.
Suppose that the expected number of lives saved per additional dollar donated to charity A is x and the expected number of lives saved per additional dollar donated to charity B is y, where x and y are constants and x > y. Then if you're trying to maximize the expected number of lives saved (cf. The "Intuitions" Behind "Utilitarianism"), you should make all of your charitable contributions to charity A.
In practice, x and y will not be constant, because of room-for-more-funding issues. So splitting one's donations can maximize the number of lives saved, if x is sometimes smaller than y.
But suppose that you're donating $d, where increasing the charities' budgets by $d would leave the condition x > y unaltered. A common view is that one should then donate all $d to charity A.
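Spelled out, this is the standard linear-objective argument:

```latex
% Lives saved as a function of the amount d_A sent to charity A
% (with the remaining d - d_A going to charity B):
\[
  V(d_A) \;=\; x\,d_A + y\,(d - d_A) \;=\; y\,d + (x - y)\,d_A .
\]
% Since x > y, V is strictly increasing in d_A, so it is maximized
% at the corner d_A = d: donate everything to charity A.
```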
However, this doesn't take into account timeless decision theory. If all donors to charities A and B were identical to you, then your decision to donate $d to charity A would be equivalent to a decision for all donors' funds to go to charity A rather than charity B: effectively, a decision for charity A to get a little more money at the cost of charity B getting no funding at all. If x > y doesn't always hold, this is not expected-value maximizing. The other donors aren't identical to you, but their decisions are still correlated with yours, on account of the psychological unity of humankind and shared cultural backgrounds.
Suppose that x > y is not always true. For simplicity, suppose that the total amount that donors will donate to charities A and B is fixed and known to you. If there were no correlation between your decision making and that of other donors, then you should give all of your money to charity A. If the correlation between your decision making and that of the other donors were perfect, then your ratio of donations to charity A and charity B should match the ratio of the total funding you think charity A should get to the total funding you think charity B should get. This raises the possibility that your actual split should fall somewhere between these two extremes, and in particular that you should split your donations.
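Here is a toy model of that interpolation. The sqrt utilities, the "with probability p, everyone mirrors your split" picture of correlation, and the baseline in which charity A happens to be underfunded at the margin are all assumptions chosen for illustration:

```python
import math

TOTAL_OTHERS = 10_000  # total from other donors (assumed fixed and known)
MY_BUDGET = 100

def lives_saved(funds_a, funds_b):
    # Assumed diminishing returns, with charity A the more effective one,
    # so x > y holds except where A is far better funded than B.
    return 2 * math.sqrt(funds_a) + math.sqrt(funds_b)

def expected_value(share_to_a, correlation):
    # Crude model: with probability `correlation` the other donors mirror
    # your split; otherwise they follow a fixed baseline (here, everything
    # to charity B, leaving charity A underfunded at the margin).
    total = TOTAL_OTHERS + MY_BUDGET
    mirrored = lives_saved(total * share_to_a, total * (1 - share_to_a))
    independent = lives_saved(MY_BUDGET * share_to_a,
                              TOTAL_OTHERS + MY_BUDGET * (1 - share_to_a))
    return correlation * mirrored + (1 - correlation) * independent

for corr in (0.0, 0.5, 1.0):
    best = max((s / 100 for s in range(101)),
               key=lambda s: expected_value(s, corr))
    print(f"correlation {corr}: give {best:.0%} of your budget to charity A")
```

In this setup the grid search gives everything to charity A at zero correlation, lands on the ideal overall 80/20 split implied by the utility model at perfect correlation, and falls in between at intermediate correlations.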
In practice it won't always make sense to split donations: for any given charity A, there may be many charities B with the property that x > y is not always true, so that it would be a logistical hassle to split one's donations among all of them. But when one has a small handful of charities that one is considering donating to, it may make sense to split one's donations, even as a small donor.
Moreover, charities are closer in cost-effectiveness than might initially meet the eye, so the condition that x > y is not always true holds more often than one might expect. The case for splitting donations is therefore stronger, and the appropriate split more even, than might initially meet the eye.