Previously: Long-Term Charities: Apply For SFF Funding, Zvi’s Thoughts on SFF

There are lots of great charitable giving opportunities out there right now.

I recently had the opportunity to be a recommender in the Survival and Flourishing Fund for the second time. As a recommender, you evaluate the charities that apply and decide how worthwhile you think it would be to donate to each of them according to Jaan Tallinn’s charitable goals, and this is used to help distribute millions in donations from Jaan Tallinn and others.

The first time that I served as a recommender in the Survival and Flourishing Fund (SFF) was back in 2021. I wrote in detail about my experiences then. At the time, I did not see many great opportunities, and I was able to give money to essentially every place I found worth funding.

How the world has changed in three years.

This time I found an embarrassment of riches. Application quality was consistently higher, there were more than twice as many applications, and essentially everyone was looking to scale their operations and their spending.

Thus, this year there will be two posts.

This post contrasts this experience with my first one in 2021.

The other post will be an extensive list of charities that I believe should be considered for future donations, based on everything I know, including the information I gathered at SFF – if and only if your priorities and views line up with what they offer.

It will be a purely positive post, in that if I don’t have sufficiently net helpful things to say about a given charity, or I believe they wouldn’t want to be listed, I simply won’t say anything. I’ve tried to already reach out to everyone involved, but: If your charity was in SFF this round, and you either would prefer not to be in the post or you have new information we should consider or share, please contact me this week.

This first post will contain a summary of the process and stand on its own, but centrally it is a delta of my experiences versus those in 2021.

Table of Contents

  1. How the S-Process Works in 2024.
  2. Quickly, There’s No Time.
  3. The Speculation Grant Filter.
  4. Hits Based Giving and Measuring Success.
  5. Fair Compensation.
  6. Carpe Diem.
  7. Our Little Corner of the World.
  8. Well Well Well, If It Isn’t the Consequences of My Own Actions.
  9. A Man’s Reach Should Exceed His Grasp.
  10. Conclusion.

How the S-Process Works in 2024

Note that the speculation grant steps were not present in 2021.

  1. Organizations fill out an application.
  2. That application is sent to a group of speculation granters.
  3. The speculation granters can choose to grant them money. If they do, that money is sent out right away, since it is often time sensitive.
  4. All applications that get $10k or more in speculation grants proceed to the round. Recommenders can also consider applications that didn’t get a speculation grant or that came in late, but they don’t have to.
  5. The round had 12 recommenders: 6 on the main track, 3 on the Fairness track and 3 on the Freedom track.
  6. You have 3-4 meetings for 3 hours each to discuss the process and applications with members of your track and a few people running the process.
  7. Before and between these meetings you read applications, investigate as you deem appropriate including conducting interviews, evaluate the value of marginal dollars being allocated to different places, and adjust those ratings.
  8. Jaan Tallinn and other funders decide which recommenders will allocate how much money from the round.
  9. The money is allocated by cycling through recommenders. Each gives their next $1k to the highest value application, based on their evaluations and where money has gone thus far, until each recommender is out of cash to give. Thus everyone’s top priorities always get funded, and what mostly matters is finding a champion or champions that value you highly.
  10. If money is given to organizations that already got speculation grants, they only get additional funds to the extent the new amount exceeds the speculation grants.
  11. Feedback is given to the organizations and the money is announced and distributed. Speculation granters get further funds based on how those in the main round evaluated their speculation grants – if the main round recommenders thought the grant was high value, you get your money back or even more.

Or:

  1. Jaan Tallinn chooses recommenders and has a slate of speculation granters.
  2. Organizations apply for funding.
  3. Speculation granters evaluate applications and perhaps give money.
  4. Recommenders evaluate applications that got money from speculation grants.
  5. Recommenders create evaluation functions for how much marginal dollars are worth to different organizations, discuss, and adjust.
  6. Jaan Tallinn and other funders set who gets to give away how much money.
  7. System allocates funds by having recommenders take turns allocating $1k to the highest value target left on their board (see the sketch after this list).
  8. Money is donated.
  9. Hopefully good things.
  10. Speculation grant funds are replenished if recommenders liked the choices made.
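
For intuition, here is a minimal sketch of that allocation loop. This is my own simplified illustration in Python, not the actual S-process tooling: the names are made up, and the marginal value functions, speculation grant offsets, and funder weightings are all abstracted away.

```python
from typing import Callable, Dict, List

STEP = 1_000  # dollars moved per turn


def allocate(
    budgets: Dict[str, int],                        # recommender -> dollars they can direct
    value: Dict[str, Callable[[str, int], float]],  # recommender -> f(org, dollars_so_far) = value of next $1k
    orgs: List[str],
) -> Dict[str, int]:
    """Round-robin allocation: each recommender in turn gives their next $1k to
    whichever organization their own evaluation rates highest at the margin,
    given everything allocated so far, until their budget is exhausted."""
    allocated = {org: 0 for org in orgs}
    remaining = dict(budgets)
    while any(left >= STEP for left in remaining.values()):
        for rec in remaining:
            if remaining[rec] < STEP:
                continue
            # Pick the org this recommender values most for its *next* $1k,
            # accounting for money already directed there by everyone.
            best = max(orgs, key=lambda org: value[rec](org, allocated[org]))
            allocated[best] += STEP
            remaining[rec] -= STEP
    return allocated
```

One consequence of this mechanism, as noted in step 9 of the longer list above, is that an applicant mostly needs at least one champion who rates them highly; broad lukewarm support does little, because each recommender's money keeps flowing to their own current top pick.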

Quickly, There’s No Time

You are given well over 100 charities to evaluate, not counting those that did not get a speculation grant, and you can recruit others to apply yourself, as I did in 2021. That is several times more charities than in the 2021 round, when time was already tight, and average quality has gone up, but the time available is the same as before. On the order of $1 million is on the line from your decisions alone.

Claude helped, but it only helps so much.

I assume most everyone spent substantially more time on this than the amount we committed to spending. That was still not remotely enough time. You are playing blitz chess, whether you like it or not. You can do a few deep dives, but you have to choose where and when and how to focus on what matters. We all did our best.

For the majority of organizations, I read the application once, selectively looked at the links and documents offered, and decided the chance they would hit my bar for funding was very low given my other options. I then evaluated them as best I could on that basis, since the process includes a recommender whose rankings are the average of everyone's, so all of your evaluations matter. And that pretty much had to be it.

For others, I did progressively more diligence, including emailing contacts who could provide diligence, and for a number of organizations I had a phone call to ask questions. But even in the best case, we are mostly talking 30-60 minutes on the phone, and very few opportunities to spend more than that off the phone, plus time spent in the group discussions.

The combination of tons of good options and no time meant that, while I did rank everyone and put most organizations in the middle, if an organization did not quickly give me a reason to put it at the 'shut up and take my money' level, I didn't spend much more time on it, because I knew I wasn't even going to get through funding everything already at that level.

When did I have the most confidence? When I had a single, very hard to fake signal – someone I trusted either on the project or vouching for it, or a big accomplishment or excellent work that I could verify, in many cases personally.

Does this create an insiders versus outsiders problem? Oh, hell yes. I don’t know what to do about that under this kind of structure – I tried to debias this as much as I could, but I knew it probably wasn’t enough.

Outsiders should still be applying; the cost-benefit ratio is still off the charts. But to all those who had great projects where I couldn't get there without doing more investigation than I had time for, and might have gotten there with more time: I'm sorry.

The Speculation Grant Filter

There is now a speculation grant requirement in order to be considered in a funding round. Unless you get at least $10k in speculation grant money, you aren't considered in the main round unless a recommender actively requests it.

That greatly raised the average quality of applications in the main round; from the speculation granter's side, you can see that it is a strong quality filter. This was one reason quality was higher, and why the number of charities worth considering was several times larger.

Hits Based Giving and Measuring Success

Another huge problem continues to be getting people to take enough risks, and invest in enough blue sky research and efforts. A lot of the best investments out there have very long tailed payoffs, if you think in terms of full outcomes rather than buying probabilities of outcomes. It’s hard to back a bunch of abstract math on the chance it is world changing, or policy efforts that seem in way over their heads but that just might work.

The problem goes double when you’re looking at track records periodically as organizations seek more funding. There’s a constant pressure to Justify Your Existence, and not that much reward for an outsized success because a lot of things get ‘fully funded’ in that sense.

A proposed solution is retroactive funding, rewarding people post-hoc for big wins, but overall enthusiasm for doing this at the necessary scale has been quite limited.

Fair Compensation

Others paid a lot of attention to salaries, worried they might be too high, or generally to expenses. This makes obvious sense, since why buy one effort if you can get two similar ones for the same price?

But in general I worry more that non-profit salaries and budgets are too low, not too high, and are unable either to attract the best talent or to give that talent full productivity – they're forced to watch expenses too much. It's a weird balance to have to strike.

This is especially true in charities working on AI. The opportunity cost for most of those involved is very high, because they could instead be working at AI companies. If those involved cared primarily about money, they wouldn't be there, but people do care, and they need to not take too large a hit.

A great recent example was Gwern. Through the Dwarkesh Podcast, a bunch of people learned that Gwern has been living absurdly cheaply, sacrificing a lot of productivity. Luckily, in that case, once the situation was clear, support seemed to follow quickly.

I also strove to pay less attention than others to questions of ‘what was fair’ for SFF to fund in a given spot, or who else had said yes or no to funding. At some point, you care anyway, though. You do have to use decision theory, especially with other major funders stepping back from entire areas.

Carpe Diem

Back in 2021, time did not feel short. If there were not good enough opportunities, I felt comfortable waiting for a future date, even if I wasn’t in position to direct the decisions involved.

Now in 2024, it feels like time is short. AGI and then ASI could arrive scarily soon. Even if they do not, the regulatory path we go down regarding AI will soon largely be set, many technical paths will be set, and AI will change many things in other ways. Events will accelerate. If you’re allocating for charitable projects in related spaces, I think your discount rate is much higher now than it was three years ago, and you should spend much more aggressively.

Our Little Corner of the World

A distinct feature this round was the addition of the Fairness and Freedom tracks. I was part of the Freedom track, and was instructed to put a greater emphasis on issues of freedom, especially as they interact with AI, as there was worry that the typical process did not give enough weight to those considerations.

The problem was that this isolated the three members of the Freedom track from everyone else. So I only got to share my thoughts and compare notes with two other recommenders. And there were a lot of applications. It made it hard to do proper division of labor.

It also raised the question of what it means, in this context, to promote freedom. You can't have freedom if you are dead, but that issue wasn't neglected. Do some forms of technical work lead us down more pro-freedom paths than others? If so, which ones?

Many forms of what looks like freedom can also end up being anti-freedom by locking us into the wrong paths and taking away our most freedom-preserving options. Promoting the freedom to enable bad actors or anti-freedom rivals now can be a very anti-freedom move. Failure to police now could require or cause worse policing later.

How should we think about freedom as it relates to 'beating China'? The CCP is very bad for freedom, so does pro-freedom mean winning? Ensuring good light-touch regulations now, and giving us the ability to respond carefully rather than with brute force, can be very pro-freedom by heading off worse alternatives.

The most pro-freedom thing in the last five years was Operation Warp Speed.

Everything is complicated.

In our first session I asked: how much should we emphasize the freedom aspect of applications? The answer was some, but not to the exclusion of other factors. And there were not that many applications with strong freedom arguments. So I still looked at all the applications, and I still allocated to what seemed like the best causes even when they weren't directly linked to freedom, but I did substantially elevate my ranking of the more directly and explicitly freedom-focused applications, and ensured that this impacted the ultimate funding decisions.

My biggest top-level cause prioritization decision was to strongly downweight anything meta or any form of talent funnel, based on a combination of the relevant ecosystems seeming funding constrained and time constrained rather than talent constrained, and because I expected others to prioritize those options highly.

I did not similarly throw out research agendas with relatively long timelines to impact, especially Agent Foundations style alignment approaches, because I do have uncertainty over timelines and pathways and I think the expected marginal value there remains very large, but placed less emphasis on that than I did three years ago.

Well Well Well, If It Isn’t the Consequences of My Own Actions

Last time I extensively discussed the incentives the S-process gives to organizations. I especially noted that the process rewards asking for large amounts of money, and telling a legible story that links you to credible sources without associated downside risks.

This time around, I saw a lot of applications that asked for a lot of money, often far more than they had ever spent in the past, and who strove to tell a legible story that linked them to credible sources without associated downside risks.

I do not regret my statements. It did mean I had to adjust on all those fronts. I had to watch for people gaming the system in these ways.

In particular, I did a deliberate pass where I adjusted for whether I thought people's requests were reasonably sized given their context. I tried to reward rather than punish modest asks, and not to reward aggressive ones.

I especially was sure to adjust based on who asked for partial funding versus full funding, and who asked for funding for shorter versus longer periods of time, and who was projecting or asking for growth faster than is typically wise.

There was a key adjustment to how the calculations work that made it much easier to adjust for these issues. In the past, we had only a first dollar value, a last dollar amount and a concavity function. Now, we were asked to express the value of dollars as a set of linear segments. This made it easy to say things like 'I think Acme should get $100k with very high priority, but we should put little or no value on more than that,' whereas in the past that was hard to do, and Acme asking for $500k almost had to make it easier for them to get the first $100k.
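
As a toy illustration of that flexibility (my own construction, not the SFF tooling; the shape and numbers are assumptions built around the Acme example), the new format lets you write a marginal value function that is high up to $100k and then drops to essentially nothing:

```python
def acme_marginal_value(dollars_already_given: int) -> float:
    """Toy marginal value curve: the first $100k is high priority,
    anything beyond that is worth little or nothing."""
    if dollars_already_given < 100_000:
        return 10.0  # high value per marginal $1k up to $100k
    return 0.0       # little or no value past $100k
```

A function of this shape plugs directly into the allocation sketch earlier: once Acme has its $100k, further dollars flow to whatever else still has positive marginal value, no matter how much Acme asked for.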

Now, we had more freedom to get that right. My guess is that in expected value terms asking for more money is correct on the margin, but not like before, and it definitely actively backfired with at least one recommender.

A Man’s Reach Should Exceed His Grasp

There were a number of good organizations that were seeking far more funding than the entire budget of an individual recommender. In several cases, they were asking for a large percentage of the entire combined round.

I deliberately asked: which organizations are relatively illegible and hard to fund for the rest of the ecosystem? I did my best to upweight those, versus those that should have a strong general story to tell elsewhere, especially if they were trying to raise big amounts, where I downweighted the value of large contributions. I still placed a bunch of value on giving them small contributions, to show endorsement.

The best example of this was probably METR. They do great work providing frontier model evaluations, but everyone knows they do great work, including funders outside of traditional existential risk sources, and their budget is rapidly getting larger than SFF's. So I think it's great to find them more money, but I wanted to save my powder for places where finding a substitute would be much harder.

Another example would be MIRI, of Eliezer Yudkowsky fame. I am confident that those involved should be supported in doing and advocating for whatever they think is best, but their needs exceeded my budget and the cause is at this point highly legible.

Thus, if you are looking to go big and want to be confident you have made a solid choice, one that helps prevent existential risks from AI (or from biological threats, or in one case nuclear war) and that can absorb large amounts of funding, you have many good choices.

Conclusion

If this seems like an incomplete collection of thoughts, it is again because I don’t want to be restating things too much from my previous overview of the S-process.

There were a lot of worthwhile individual charities that applied to this round, including many that ultimately were not funded.

Again, there will be a second post next week that goes over individual charities. If your charity was in SFF and you either actively do not wish to be included, or have new information on your situation (including major changes in funding needs), you can reach out to me, including via LessWrong or Twitter.
