Thanks for writing this, Zach. After spending the last 2.5 years working as a grantmaker, a lot of this resonates with me!
Rather than flag the specific bits I agree with, I’ll just say: this seems to me like a pretty useful piece for anyone trying to understand the mental models many AI safety grantmakers tend to use in practice.
Curated. "How do we actually donate money to do good in the world" is still just a very important topic.
This seems relevant both to professional grantmakers, and to people (of whom I'm seeing more lately) who end up in some kind of "temporary or pseudo-grantmaker" position – from participating in a round of an evaluation process like SFF, or being a regranter, or simply ending up with enough money from equity that it's worth starting to think like a grantmaker.
A lot of the ideas in this post are ones I've seen discussed in in-person conversations but not really written up in a legible way.
This is fantastic, tons of things I agree with strongly.
That said, my big unaddressed question is about scale; obviously it's easier to fund one $1M project than five $200k projects, but the smaller projects are often higher leverage. And that goes for smaller things too.
So taking this much further: in my experience, lots of really great early-stage opportunities are $5k or $10k grants (help someone write a paper, or fund a small experiment to check if a new idea works), which can have as much expected impact as a marginal $200k on different opportunities. How do you manage these, both in terms of filtering and finding them, and in terms of their relatively very high overhead costs? (Or do you not find that this is true, or do you have a minimum?)
Good point; I agree small opportunities can be great.
how do you manage these, both in terms of filtering and finding them, and managing the relatively very high overhead costs for them?
This post is more "I have a priori observations" than "I know what processes work well in practice" — I don't claim the latter. But since you asked:
I don't do a good job of finding small opportunities. When small opportunities come to my attention, my process is something like:
An abbreviated heuristic is: if it's in-scope and it seems great and it's hard to imagine regretting it substantially more than if you lit the money on fire, just fund all such small opportunities. Funding lots of small opportunities is better than funding few.
Note that being exploitable has downsides beyond wasting money. (Internet people reading this, please don't ask me for money because you read this; I'm very unlikely to give you money even for good things because my expertise is limited to a small fraction of good things.)
Probably in my domain relative to yours, (1) there are way fewer small one-off opportunities and (2) a greater fraction of them have substantial downside risk.
I really liked this, thank you for writing it.
If you have reading recommendations, please share!
What I didn't expect about being a funder by James Ozden came to mind.
On the BOTEC maximalism and your bar, can you say more? I guess I've been a bit cluster-pilled, especially in practice, given how bad the thinking in many BOTECs I've seen is. So if anyone else said this I'd be skeptical — but I respect your thinking, and I thought Eric's CEA of donating $1k to Alex Bores was good, so I'm intrigued.
More resources, namely two books we published as part of / in the wake of AIM's Grantmaking Training Program:
“Money is not a monolith.” is one of the best truisms I’ve seen in a while. I’m going to reuse that. Thanks!
Written to a new grantmaker.
The first three points are the most important.
Focus on opportunities many times your bar.
Most value comes from finding/creating projects many times your bar, rather than discriminating between opportunities around your bar. If you find/create a new opportunity to donate $1M at 10x your bar (and cause it to get $1M, which would otherwise be donated to a 1x thing), you generate $9M of value (at your bar).[1] If you cause a $1M at 1.5x opportunity to get funded or a $1M at 0.5x opportunity to not get funded, you generate $500K of value. The former is 18 times as good.
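The arithmetic above can be sketched as a few lines of Python. This is just a toy restatement of the paragraph's numbers; the function name and the "bar-units" framing are mine.

```python
# Value of a grantmaking move, measured against your bar: redirecting
# `amount` from its counterfactual use (default: a 1x-bar donation)
# to an opportunity at `multiple_of_bar` times your bar.
def value_created(amount, multiple_of_bar, counterfactual_multiple=1.0):
    return amount * (multiple_of_bar - counterfactual_multiple)

# Finding/creating a new $1M opportunity at 10x your bar
# (money that would otherwise go to a 1x thing):
find_new = value_created(1_000_000, 10)          # $9M of value

# Causing a marginal $1M, 1.5x opportunity to get funded:
swing_marginal = value_created(1_000_000, 1.5)   # $0.5M of value

print(find_new / swing_marginal)  # 18.0: the former is 18x as good
```

The same function covers the negative case: blocking a $1M, 0.5x grant is `-value_created(1_000_000, 0.5)`, also $0.5M of value.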
Adverse selection is extremely important.
Mostly this is the winner's curse and a related phenomenon: the opportunities that reach you unfunded are disproportionately the ones that better-informed funders have already passed on.
Prioritize between buckets.
Prioritization between buckets is more important than prioritization within buckets. The typical intervention in a great bucket is >>10x as good as the typical intervention in a mediocre bucket. This is not priced in; the best buckets are not as popular as they should be.
Information value
Sometimes information is very good.
E.g.: how good various desiderata are, how effective various interventions are for promoting desiderata, which unknown/uninvestigated opportunities are great, and what the opportunities will be like in the future and how to prepare. Grantmakers are largely prioritization researchers, and some parameters in your prioritization-model are crucial but unstable.
If you'll have a high-uncertainty opportunity to spend $10M in a year, and you can spend $1M now to resolve a lot of uncertainty, that might be great.
Obviously prioritizing well is crucial. The great opportunities are many times better than the mediocre opportunities, even on the margin. Almost all of my donation-savvy friends regret their past donations (those made until recently); if they're well-informed about great donation opportunities now but weren't in the past, their donations now are many times better. If you're pretty uninformed and you'll get more information in the future, the value of waiting for information is generally greater than the value of donating sooner. (But sometimes spending money is a great way for the whole ecosystem to get more information.)
Optionality is very good, if you'll have more information in the future.
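The "spend $1M now to resolve uncertainty about $10M later" example can be made concrete with a toy value-of-information calculation. All the specifics here (two buckets, the 5x/1x multiples, the 50/50 prior) are my invented assumptions, not the author's.

```python
# Toy expected-value-of-information sketch, in bar-dollars
# ($1 at your funding bar = $1 of value). All numbers are made up.

spend = 10_000_000        # money you'll deploy next year
info_cost = 1_000_000     # research spend now (opportunity cost: 1x bar)

# Two candidate buckets; you're 50/50 on which is the good one.
good_multiple, bad_multiple = 5, 1

# Without research: pick a bucket at random.
ev_blind = spend * (0.5 * good_multiple + 0.5 * bad_multiple)

# With research: always pick the good bucket, net of the $1M
# you could otherwise have donated at the bar (1x).
ev_informed = spend * good_multiple - info_cost

# Blind: $30M expected; informed: $49M net; the research "buys" $19M.
print(ev_blind, ev_informed, ev_informed - ev_blind)
```

On these assumed numbers, the $1M of research is worth far more than donating it directly, which is the sense in which "that might be great."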
Steering projects
Sometimes steering projects is important. You are not limited to deciding whether to fund a project. If you have good views on what a project should do, sometimes you should get the project to follow those views. You can make it a condition of the grant, you can make your views clear in your grantmaker capacity (projects try to make their funders happy), or you can share takes as an expert on what projects in this domain would be great, plus miscellaneous considerations in this domain.
But obviously when you're wrong you'll destroy a bunch of value. And you'll destroy value when people defer to you more than you want, especially if they might misunderstand your views.
And obviously it's costly if steering a project requires lots of work — your job should probably mostly be finding/creating amazing projects, not steering various good projects.
Steering power is limited. Fear theories of change that route through "empower this sketchy person and hope they do good things."
Counterfactuality & funging
It's important to understand counterfactuality and funging, especially if there are other grantmakers/donors in the space and you're not fully aligned with them. But the naive consequentialist upshot—that you should try to be a donor-of-last-resort so that you never fund something if someone else would instead—is generally uncooperative and bad. I don't know how grantmakers/donors should coordinate on sharing costs; it's messy. Fortunately often it's clear who's responsible for funding something, e.g. because different actors have different niches.
Matching pledges are usually deceptive, but matching can be a fine way for grantmakers/donors to coordinate on sharing costs.
More
Notes
Note: I subscribe to BOTEC maximalism: I put numbers on things whenever possible, and those numbers are pretty load-bearing. As far as I know, nobody outside my team does that, and I think most people are correct not to. It works great for us, especially for comparing interventions that target different desiderata, e.g. "make the US government better on AI safety" vs "make technical AI safety research happen." But it only works because we're good at quantifying the value (for the long-term future) of many desiderata and interventions (AI safety, better futures, politics, etc.), and because we can share state and resolve disagreements — it would be worse for large teams. For most people—even many math-y people—their BOTECs are often terrible, much worse than mere intuition. Sometimes it's crucial to assess value in abstract units, especially for comparing different kinds of interventions. But it mostly seems fine to say "here are some different things that are similarly good (and here's how they compare to our bar)" and then just compare new stuff to those things.
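To make the note above concrete, here is what a minimal BOTEC of the kind it describes might look like: score two interventions targeting different desiderata in one shared abstract unit. Every number and name below is invented for illustration; real BOTECs of this sort live or die on how good these inputs are.

```python
# Minimal BOTEC sketch: compare interventions targeting different
# desiderata in one abstract unit ("points" of long-term-future
# value per $1M). All figures are wild guesses for illustration.

# Value of one unit of progress on each desideratum, in points.
policy_value_per_unit = 100    # "make US government better on AI safety"
research_value_per_unit = 40   # "make technical AI safety research happen"

# Units of progress each intervention buys per $1M.
policy_units_per_million = 0.5
research_units_per_million = 2.0

policy_score = policy_value_per_unit * policy_units_per_million
research_score = research_value_per_unit * research_units_per_million

print(policy_score, research_score)  # 50.0 80.0: on these numbers, fund research
```

The whole calculation is load-bearing only insofar as the per-unit value estimates are; that is the part the note says most people (reasonably) shouldn't trust themselves to do.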
Note: many of these takes are a priori observations. You shouldn't update as if these are all based on real-world experience.
Grantmaking reading recommendations
The best thing is Linch's Some unfun lessons I learned as a junior grantmaker (which loosely inspired this post's title). After that, consider (these all happen to be from CG):
If you have reading recommendations, please share! I asked various grantmakers and they didn't really have others.
This post is the beginning of my sequence inspired by my prioritization research and donation advising work.
You counterfactually generated $9M of value. The people/orgs that actually do the project, if relevant, are also counterfactual for that value, but that's fine; counterfactuals don't sum to the total. The donor generated $1M of value. I assume your 10x judgment is after accounting for the opportunity cost of people/orgs, if relevant — the value you generate is the value of the project minus the opportunity cost of the people/orgs and the money required.