Review

Summary: Plenty of highly committed altruists are pouring into AI safety, but they are often underfunded, and the donors who want to support them often lack the network and expertise to make confident decisions. The GiveWiki aggregates the donations of currently 220 donors to 88 projects – almost all of them either fully focused on AI safety (e.g., Apart Research) or working on it among other things (e.g., Pour Demain). It uses this aggregation to determine which projects are most widely trusted among the donors with the strongest donation track records. It thus reflects expert judgment in the field and can serve as a guide for non-expert donors. Our current top three projects are FAR AI, the Simon Institute for Longterm Governance, and the Alignment Research Center.

The symbolism is left as an exercise for the reader to decipher.

Introduction

Throughout the year, we’ve been hard at work to scrape together all the donation data we could get. One big source has been Vipul Naik’s excellent repository of public donation data. We also imported public grant data from Open Phil, the EA Funds, the Survival and Flourishing Fund, and a certain defunct entity. Additionally, 36 donors have entered their donation track records themselves (or sent them to me for importing).

Add some retrospective evaluations, and you get a ranking of 92 top donors (who have donor scores > 0), of whom 22 are listed publicly, and a ranking of 33 projects with support scores > 0 (after rounding to integers).

(The donor score is a measure of a donor's track record, and the support score is a measure of the support that a project has received from donors, weighted by the donor score among other factors. The support score is thus an aggregate measure of the trust of the donors with the strongest donation track records.)
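For a concrete picture, here is a minimal sketch of that aggregation, assuming a simple additive weighting. It is illustrative only – not the actual GiveWiki implementation – and the donor names and numbers are made up.

```python
# Illustrative sketch only, not the actual GiveWiki scoring code.
# A project's support score aggregates its registered donations,
# weighted by the track-record ("donor") score of each donor.

donor_scores = {"alice": 14, "bob": 9, "carol": 3}  # hypothetical donor scores

donations = [              # hypothetical (donor, project) pairs
    ("alice", "project_x"),
    ("bob",   "project_x"),
    ("carol", "project_y"),
]

def support_scores(donor_scores, donations):
    scores = {}
    for donor, project in donations:
        scores[project] = scores.get(project, 0) + donor_scores[donor]
    return scores

print(support_scores(donor_scores, donations))
# {'project_x': 23, 'project_y': 3}
```

(The real scoring also factors in how early and how large the donations were, and how the donors rank against each other; more on that in the comments below.)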

The Current Top Recommendations

  1. FAR AI
    1. “FAR AI’s mission is to ensure AI systems are trustworthy and beneficial to society. We incubate and accelerate research agendas that are too resource-intensive for academia but not yet ready for commercialisation by industry. Our research spans work on adversarial robustness, interpretability and preference learning.”
  2. Simon Institute for Longterm Governance
    1. “Based in Geneva, Switzerland, the Simon Institute for Longterm Governance (SI) works to mitigate global catastrophic risks, building on Herbert Simon's vision of future-oriented policymaking. With a focus on fostering international cooperation, the organisation centres its efforts on the multilateral system.”
  3. Alignment Research Center
    1. “ARC is a non-profit research organization whose mission is to align future machine learning systems with human interests.”
    2. Note that the project is the Theory Project in particular, but some of the donations that underpin the high support score were made before there was a separation between the Theory and the Evals project, so this is best interpreted as a recommendation of ARC as a whole.

You can find the full ranking on the GiveWiki projects page.

The top projects sorted by their support scores.

Limitations:

  1. These three recommendations are currently heavily influenced by fund grants. They basically indicate that these projects are popular among EA-aligned grantmakers. If you’re looking for more “unorthodox” giving opportunities, consider Pour Demain, the Center for Reducing Suffering, or the Center on Long-Term Risk, which have achieved almost equally high support scores (209 vs. 212–213) with little or no help from professional EA grantmakers. (CLR has been supported by Open Phil and Jaan Tallinn, but their influence on our scoring is only 29% and 10% respectively; the remaining influence is split among 7 donors.)
  2. A full 22 projects are clustered together at the top of our ranking with support scores in the range of 186–213. (See the bar chart above of the top 34 projects.) So the 10th project is probably hardly worse than the 1st. I find this plausible: if there were great differences between the top projects, I would be quite suspicious of the results, because AI safety is rife with uncertainties that make it hard to be very confident in recommending particular projects or approaches over others.
  3. Our core audience is people who are not intimately familiar with the who-is-who of AI safety. We try to impartially aggregate all opinions to average out any extreme ones that individual donors might have. But if you are intimately familiar with some AI safety experts and trust them more than others, you can check whether they are among our top donors, and if so see their donations on their profiles. If not, please invite them to join the platform. They can contact us to import their donations in bulk.
  4. If a project has a low score on the platform, there is still a good chance that the low score is undeserved. So far the majority of our data is from public sources, and only 36 people have imported their donation track records. The public sources are biased toward fund grants and donations to well-known organizations. We’re constantly seeking more “project scouts” who want to import their donations and regrant through the platform to diversify the set of opinions that it aggregates. If you’re interested in that, please get in touch! Over 50 donors with a total donation budget of over $700,000 want to follow the platform’s recommendations, so your data can be invaluable for the charities you love most.
  5. It’s currently difficult for us to take funding gaps into account because we have nowhere near complete data on the donations that projects receive. Please make sure that the project you want to support is really fundraising. Next year, we want to address this with a solution where projects have to enter and periodically update their fundraising goals to be shown in the ranking.

We hope that our data will empower you to find new giving opportunities and make great donations to AI safety this year!

If it did, please register your donation and select “Our top project ranking” under “recommender,” so that we can track our impact.

Comments

We see a massive drop in score from the 22nd to the 23rd project. Can you explain why this is occurring?

TL;DR: Great question! I think it mostly means that we don't have enough data to say much about these projects. So donors who've made early donations to them can register those donations and boost the projects' scores.

  1. The donor score relies on the size of the donations and how early they came in the history of the project (plus the retrospective evaluation). So the top donors in particular have made many early, big, and sometimes public grants to projects that panned out well – which is why they are top donors.
  2. What influences the support score is not the donor score itself but the donor's inverse rank in the ranking ordered by donor score. (This corrects for the outsized influence that rich donors would otherwise have: I assume that wealth is Pareto-distributed while expertise probably is not, or at least the two are not correlated to that extreme degree.)
  3. But if a single donor has more than 90% influence on the score of a project, they are ignored, because that typically means that we don't have enough data to score the project. We don't want a single donor to wield so much power.

Taken together, our top donors have (by design) the greatest influence over project scores, but they are also at a greater risk of ending up with > 90% influence over the project score, especially if the project has so far not found many other donors who've been ready to register their donations. So the contributions of top donors are also at greater risk of being ignored until more donors confirm the top donors' donation decisions.
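To make points 2 and 3 concrete, here is a rough sketch of that logic in code. It only illustrates the mechanism described above, not the actual GiveWiki implementation, and the donor names and score values are invented.

```python
# Rough illustration of the mechanism described above (not the real code):
# a donor's influence is their inverse rank in the donor-score ranking,
# and a project is left unscored if one donor holds > 90% of the influence.

def project_score(donor_scores, backers, max_single_share=0.9):
    """donor_scores: {donor: donor score}; backers: donors of one project."""
    ranked = sorted(donor_scores, key=donor_scores.get, reverse=True)
    # Inverse rank: with N donors the top donor gets N, the 2nd gets N-1, ...
    # so the gap between the 1st and 2nd donor is just 1, however large
    # the gap in their donor scores.
    influence = {donor: len(ranked) - i for i, donor in enumerate(ranked)}
    contributions = {donor: influence[donor] for donor in backers}
    total = sum(contributions.values())
    # Ignore the project if a single donor dominates it: one backer,
    # however strong their track record, is not enough data to score it.
    if not total or max(contributions.values()) / total > max_single_share:
        return 0
    return total

donor_scores = {"big_fund": 40, "mid_donor": 12, "small_donor": 5}  # hypothetical
print(project_score(donor_scores, {"big_fund"}))                 # 0: one donor has 100% influence
print(project_score(donor_scores, {"big_fund", "small_donor"}))  # 4: shares are 75% and 25%
```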

Ok so the support score is influenced non-linearly by donor score. Is there a particular donor who has donated to the 22 highest-ranked projects but not to the 23rd or lower-ranked ones?

I have graphed donor score vs rank for the top GiveWiki donors. Does this include all donors in the calculation or are there hidden donors?

Does this include all donors in the calculation or are there hidden donors?

Donors have a switch in their profiles where they can determine whether they want to be listed or not. The top three in the private, complete listing are Jaan Tallinn, Open Phil, and the late Future Fund, whose public grants I've imported. The total ranking lists 92 users. 

But I don't think that's core to understanding the step down. I went through the projects around the threshold before I posted my last comment, and I think it's really the 90% cutoff that causes it, not a big donor who has donated to the first 22 but not to the rest.

There are plenty of projects in the tail that have also received donations from a single donor with a high score – but more or less only from that donor, so that they have > 90% influence over the project and are ignored until more donors register donations to it.

Ok so the support score is influenced non-linearly by donor score.

By the inverse rank in the ranking that is sorted by the donor score. So the difference between the top donor and the 2nd top donor is 1 in terms of the influence they have.

Meta question: is the above picture too big?

It displays well for me!

It's worth specifying "AI GiveWiki" in the title. This seems to be recommendations GIVEN a decision that AI safety is the target.

It says “AI Safety” later in the title. Do you think I should mention it earlier, like “The AI Safety GiveWiki's Top Picks for the Giving Season of 2023”?

Unsure.  It's probably reasonable to assume around here that it's all AI safety all the time.  "GiveWiki" as the authority for the picker, to me, implied that this was from a broader universe of giving, and this was the AI Safety subset.  No biggie, but I'm sad there isn't more discussion about donations to AI safety research vs more prosaic suffering-reduction in the short term.

"GiveWiki" as the authority for the picker, to me, implied that this was from a broader universe of giving, and this was the AI Safety subset.

Could be… That's not so wrong either. We rather artificially limited it to AI safety for the moment to have a smaller, more sharply defined target audience. It also had the advantage that we could recruit our evaluators from our own networks. But ideally I'd like to find owners for other cause areas too and then widen the focus of GiveWiki accordingly. The other cause area where I have a relevant network is animal rights, but we already have ACE there, so GiveWiki wouldn't add so much on the margin. One person is interested in either finding someone to take responsibility for a global coordination/peace-building branch or taking it on themselves, but they probably won't have the time. That would be excellent though!

No biggie, but I'm sad there isn't more discussion about donations to AI safety research vs more prosaic suffering-reduction in the short term.

Indeed! Rethink Priorities has made some progress on that. I need to dig into the specifics more to see whether I should update on it. The particular parameters they discuss in the article haven't been central to my own reasoning, but it's quite possible that animal rights wins out even more clearly on the basis of the parameters I've been using.