I would wire you guys 300-400K today if I wasn't still worried about the theory that 'AI Safety is actually a front for funding advancement of AI capabilities'. It is a quixotic task to figure out how true that theory is or what actually happened in the past, never mind why. But the theory seems at least kind of true to me and so I will not be donating.
It's unlikely to be worth your time to try to convince me to donate. But maybe other potential donors would appreciate a reassurance that it's not actively net-negative to donate. For example, several people mentioned in the post have ties to dangerous organizations such as Anthropic.
Meta-honesty: There is not enough values alignment to trust me with sensitive information and I definitely do not endorse 'always keep secrets you agreed to keep'. I support leaking the Pentagon Papers, etc.
My own professional opinion, not speaking for any other grantmakers or giving an institutional view for LTFF etc:
Yeah I sure can't convince you that donating to us is definitely net positive, because such a claim wouldn't be true.
So basically I don't think it's possible to do robustly positive actions in longtermism with high (>70%? >60%?) probability of being net positive for the long-term future[1], and this number is even lower for people who don't place the majority of their credence on near- to medium-term extinction risk timelines.
I don't think this is just an abstract theoretical risk, as you mention there's a real risk that our projects are net negative; and advancing more AI capabilities than AI safety is the most obvious way that this is true.
I think the other LTFF grantmakers and I are pretty conscious about downside risks in capabilities enhancements, though I expect there's a range of opinions on the fund on how much to weigh that against other desiderata, as well as which specific projects have the highest capabilities externalities.
I would guess that we're better about this than most (all?) other significant longtermist funders, including both organizations and individuals (though keep in mind that the average for individuals is driven by the long left tail). But since we're optimizing for other things as well (most importantly positive impact), I think we'd do worse than you would on this axis if you a) have reasonably good judgment b) are laser-focused on preventing capabilities externalities, and c) have access to good donation options directly, especially by your own worldview. And of course reality doesn't grade on a curve, so doing better than other funders isn't a guarantee we're doing well enough.
I don't do many evaluations of alignment grants myself because others on the fund seem more technically qualified, so my time is usually triaged to looking at other projects (eg forecasting, biosecurity). But I do try to flag downside risks I see in LTFF grants overall, including in alignment grants. (So far, I think the rest of the fund is sensible about capabilities risks, and capabilities risks usually aren't the type of thing that non-public information is super useful for, so possibly none of my flags were on capabilities; more like interpersonal harm or professional integrity). When I did flag them, I found the rest of the fund to be sensible about them. You might find this recent post to be useful.
(On the flip side, there were a small number of grants that I liked that we were blocked from making for legal or PR reasons; for the most promising ones, one of us tried to connect the applicant to other funders)
If I were to hypothesize why LessWrongers should be worried about our capabilities externalities:
I also think potential donors to us can just look at our past grants database, our payout report, or our marginal grants post to make an informed decision for themselves about whether donations to us are (sufficiently) net positive in expectation.
On a personal level:
I don't really know, man? I think the longtermist/rationalist EA memes/ecosystem were very likely causally responsible for some of the worst capabilities externalities in the last decade; I don't have a sense of how bad it is overall because counterfactuals are really hard, but I don't think it's plausible that the negative impact was small. I'm pretty confused about whether people with thought processes like mine have been historically net positive or net negative; I can see a strong case either way. The whole thing had a pretty direct effect on me being depressed for most of this year (with the obvious caveat that etiology is hard for mental illness stuff, and being sad for cosmic reasons is one of the most self-flattering stories I could have for melancholy). Interestingly, I think the emotional effect is much larger than I would've ex ante predicted; if you asked me in 2017 whether I thought longtermist work might be net negative, I don't think my numbers would've been that different. I guess the specific details and concreteness did matter.
I have a lot of sympathy for people who decided to be a bit more checked out of morality, or decided to give up on this whole AI thing and focus on just reducing suffering in the next few decades (I think farmed animal welfare is the most popular candidate). But ultimately I think they're wrong. The future is still going to be big, and likely really wild, and likely at least somewhat contingent. Knowing (or at least having a high probability) that people near us did a bunch of harmful stuff in the past is certainly an argument for being much more careful going forwards (as well as a number of more concrete and specific updates), but not really a good case to just roll over. (In the abstract, I do think it's more plausible that for some people acting now is wrong compared to retreating to the woods for a year and thinking really hard; as an empirical matter when I did weaker versions of that, the effect was basically between useless and negative).
I think it's a bit more feasible if you're willing to make >3 OOMs sacrifice in expected positive impact. But still pretty rough. Some green energy stuff might be safe? Maybe try to convince doomsday preppers to be nicer people? I confess to not thinking much about it; I think some of the Oxford people might have a better idea.
I think the longtermist/rationalist EA memes/ecosystem were very likely causally responsible for some of the worst capabilities externalities in the last decade;
If you're thinking of the work I'm thinking of, I think about zero of it came from people aiming at safety work and producing externalities, and instead about all of it was people in the community directly working on capabilities or capabilities-adjacent projects, with some justification or the other.
(personal opinions)
Yeah, most of the things I'm thinking of didn't look like technical safety stuff, more like Demis and Shane being concerned about safety -> deciding to found DeepMind, Eliezer introducing Demis and Shane to Peter Thiel (their first funder), etc.
In terms of technical safety stuff, sign confusion around RLHF is probably the strongest candidate. I'm also a bit worried about capabilities externalities of Constitutional AI, for similar reasons. There's also the general vibes issue of safety work (including quite technical work) and communications either making AI capabilities seem more cool* or seem less evil (depending on your framing).
EDIT to add: I feel like in Silicon Valley (and maybe elsewhere but I'm most familiar with Silicon Valley) there's a certain vibe of coolness being more important than goodness, which feels childish to me but afaict seems like a real thing. This Altman tweet seems emblematic of that mindset.
I feel like in Silicon Valley (and maybe elsewhere but I'm most familiar with Silicon Valley) there's a certain vibe of coolness being more important than goodness
Yeah, I definitely think this is true to some extent. "First get impact, then worry about the sign later" and all.
So basically I don't think it's possible to do robustly positive actions in longtermism with high (>70%? >60%?) probability of being net positive for the long-term future
This seems like an important point, and it's one I've not heard before. (At least, not outside of cluelessness or specific concerns around AI safety speeding up capabilities; I'm pretty sure that most EAs I know have ~100% confidence that what they're doing is net positive for the long-term future.)
I'm super interested in how you might have arrived at this belief: would you be able to elaborate a little? For instance, is there a theoretical argument going on here, like a weak form of cluelessness? Or is it more empirical, for example, did you get here through evaluating a bunch of grants and noticing that even the best seem to carry 30-ish percent downside risk? Something else?
I'm pretty sure that most EAs I know have ~100% confidence that what they're doing is net positive for the long-term future.
Really? Without giving away names, can you tell me roughly what cluster they are in? Geographical area, age range, roughly what vocation (technical AI safety/AI policy/biosecurity/community building/earning-to-give)?
I'm super interested in how you might have arrived at this belief: would you be able to elaborate a little? For instance, is there a theoretical argument going on here, like a weak form of cluelessness? Or is it more empirical,
Definitely closer to the former than the latter! Here are some steps in my thought process:
I'm also interested in thoughts from other people here; I'm sure I'm not the only person who is worried about this type of thing.
(Also please don't buy my exact probabilities. They are very much not resilient. Like I'm pretty sure if I thought about it for 10 years (without new empirical information) the probability can't be much higher than 90%, and I'm pretty sure the probabilities are high enough to be non-Pascalian, so not as low as say 50% + 1-in-a-quadrillion, but anywhere in between seems kinda defensible).
"I'm pretty sure that most EAs I know have ~100% confidence that what they're doing is net positive for the long-term future"
Fwiw, I think this is probably true for very few if any of the EAs I've worked with, though that's a biased sample.
I wonder if the thing giving you this vibe might be that they actually think something like "I'm not that confident that my work is net positive for the LTF, but my best guess is that it's net positive in expectation. If what I'm doing is not positive, there's no cheap way for me to figure it out, so I am confident (though not ~100%) that my work will keep seeming positive-EV to me for the near future." One informal way to describe this is that they are confident that their work is net positive in expectation/ex ante, but not that it will be net positive ex post.
I think this can look a lot like somebody being ~sure that what they're doing is net positive even if in fact they are pretty uncertain.
I'm super interested in how you might have arrived at this belief: would you be able to elaborate a little?
One way I think about this is there are just so many weird (positive and negative) feedback loops and indirect effects, so it's really hard to know if any particular action is good or bad. Let's say you fund a promising-seeming area of alignment research – just off the top of my head, here are several ways that grant could backfire:
• the research appears promising but turns out not to be, but in the meantime it wastes the time of other alignment researchers who otherwise would've gone into other areas
• the research area is promising in general, but the particular framing used by the researcher you funded is confusing, and that leads to slower progress than counterfactually
• the researcher you funded (unbeknownst to you) turns out to be toxic or otherwise have bad judgment, and by funding him, you counterfactually poison the well on this line of research
• the area you fund sees progress and grows, which counterfactually sucks up lots of longtermist money that otherwise would have been invested and had greater effect (say, during crunch time)
• the research is somewhat safety-enhancing, to the point that labs (facing safety-capabilities tradeoffs) decide to push capabilities further than they otherwise would, and safety is hurt on net
• the research is somewhat safety-enhancing, to the point that it prevents a warning shot, and that warning shot would have been the spark that would have inspired humanity to get its game together regarding combatting AI X-risk
• the research advances capabilities, either directly or indirectly
• the research is exciting and draws the attention of other researchers into the field, but one of those researchers happens to have a huge, tail negative effect on the field outweighing all the other benefits (say, that particular researcher has a very extreme version of one of the above bullet points)
• Etcetera – I feel like I could do this all day.
Some of the above are more likely than others, but there are just so many different possible ways that any particular intervention could wind up being net negative (and also, by the same token, could alternatively have indirect positive effects that are similarly large and hard to predict).
Having said that, it seems to me that on the whole, we're probably better off if we're funding promising-seeming alignment research (for example), and grant applications should be evaluated within that context. On the specific question of safety-conscious work leading to faster capabilities gains, insofar as we view AI as a race between safety and capabilities, it seems to me that if we never advanced alignment research, capabilities would be almost sure to win the race, and while safety research might bring about misaligned AGI somewhat sooner than it otherwise would occur, I have a hard time seeing how it would predictably increase the chances of misaligned AGI eventually being created.
I'm not sure which of the people "have ties to dangerous organizations such as Anthropic" in the post (besides Shauna Kravec & Nova DasSarma, who work at Anthropic), but of the current fund managers, I suspect that I have the most direct ties to Anthropic and OAI through my work at ARC Evals. I also have done a plurality of grant evaluations in AI Safety in the last month. So I think I should respond to this comment with my thoughts.
I personally empathize significantly with the concerns raised by Linch and Oli. In fact, when I was debating joining Evals last November, my main reservations centered around direct capabilities externalities and safety washing.
I will say the following facts about AI Safety advancing capabilities:
For what it's worth, I think that if we are to actually produce good independent alignment research, we need to fund it, and LTFF is basically the only funder in this space. My current guess is that a lack of LTFF funding is probably producing more researchers at Anthropic than otherwise, because there just aren't that many opportunities for people to work on safety or safety-adjacent roles. E.g. I know of people who are interviewing for Anthropic capability teams because idk man, they just want a safety-adjacent job with a minimal amount of security, and it's what's available. Having spoken to a bunch of people, I strongly suspect that of the people that I'd want to fund but won't be funded, at least a good fraction are significantly less likely to join a scaling lab if they were funded, and not more.
(Another possibly helpful datapoint here is that I received an offer from Anthropic last December, and I turned them down.)
My current guess is that a lack of LTFF funding is probably producing more researchers at Anthropic than otherwise, because there just aren't that many opportunities for people to work on safety or safety-adjacent roles. E.g. I know of people who are interviewing for Anthropic capability teams because idk man, they just want a safety-adjacent job with a minimal amount of security, and it's what's available. Having spoken to a bunch of people, I strongly suspect that of the people that I'd want to fund but won't be funded, at least a good fraction are significantly less likely to join a scaling lab if they were funded, and not more.
I think this is true at the current margin, because we have so little money. But if we receive, say, enough funding to lower the bar to closer to what our early 2023 bar was, I will still want to make skill-up grants to fairly talented/promising people, and I still think they are quite cost-effective. I do expect those grants to have more capabilities externalities (at least in terms of likelihood, maybe in expectation as well) than when we give grants to people who currently could be hired at (eg) Anthropic but choose not to be.
It's possible you (and maybe Oli?) disagree and think we should fund moderate-to-good direct work projects over all (or almost all) skillup grants; in that case this is a substantive disagreement about what we should do in the future.
E.g. I know of people who are interviewing for Anthropic capability teams because idk man, they just want a safety-adjacent job with a minimal amount of security, and it's what's available
That feels concerning. Are there any obvious things that would help with this situation, eg: better career planning and reflection resources for people in this situation, AI safety folks being more clear about what they see as the value/disvalue of working in those types of capability roles?
Seems weird for someone to explicitly want a "safety-adjacent" job unless there are weird social dynamics encouraging people to do that even when there isn't positive impact to be had from such a job.
FWIW, I am also very worried about this and it feels pretty plausible to me. I don't have any great reassurances, besides me thinking about this a lot and trying somewhat hard to counteract it in my own grant evaluations, but I only do a small minority of grant evaluations on the LTFF these days.
I do want to clarify that I think it's unlikely that AI Safety is a front for advancing AI capabilities. I think the framing that's more plausibly true is that AI Safety is a memespace that has undergone regulatory capture by capability companies and people in the EA network to primarily build out their own influence over the world.
Their worldviews are of course heavily influenced by concerns about the future of humanity and how it will interact with AI, but in a way that primarily leverages symmetric weapons and does not involve much of any accountability or public reasoning about their risk models, which seem substantially skewed by the fact that people are making billions of dollars off of advances in AI capabilities, and are substantially worried that people they don't like will get to control AI.
I do also think this is just one framing, and there are a lot of other things going on.
Have you looked at Orthogonal? They're pretty damn culturally inoculated against doing-capabilities-(even-by-accident), and they're extremely funding constrained.
UPDATE 2023/09/13:
Including only money that has already landed in our bank account and extremely credible donor promises of funding, LTFF has raised ~1.1M and EAIF has raised ~500k. After Open Phil matching, this means LTFF now has ~3.3M in additional funding and EAIF has ~1.5M in additional funding.
We are also aware that other large donors, including both individuals and non-OP institutional donors, are considering donating to us. In addition, while some recurring donors have likely moved up their donations to us because of our recent unusually urgent needs, it is likely that we will still accumulate some recurring donations in the coming months as well. Thus, I think at least some of the less-certain sources of funding will come through. However, I decided to conservatively not include them in the estimate above.
From my (Linch's) perspective, this means that neither LTFF nor EAIF is very funding constrained for the time period we wanted to raise money for (the next ~6 months); however, both funds are still somewhat funding constrained and can productively make good grants with additional funding.
To be more precise, we estimated a good target spend rate for LTFF as ~1M/month, and a good target spend rate for EAIF as ~800k/month. The current funds will allow LTFF to spend ~550k/month and EAIF to spend ~250k/month, or roughly a gap of 450k/month and 550k/month, respectively. More funding is definitely helpful here, as more money will allow both funds to productively make good grants[1].
Open Phil's matching is up to 3.5M from OP (or 1.75M from you) for each fund. This means LTFF would need ~650k more before maxing out on OP matching, and EAIF would need ~1.25M more. Given my rough estimate of funding needs above (~6.2M/6 months for LTFF and ~5M/6 months for EAIF), LTFF would ideally like to receive ~1M above the OP matching.
I appreciate donors' generosity and commitment to improving the world. I hope the money will be used wisely and cost-effectively.
I plan to write a high-level update and reflections post[2] on the EAForum (crossposted to LessWrong) after LTFF either a) reach our estimated funding target or b) decided to deprioritize fundraising, whichever one comes earlier.
I'd be happy for you guys to send some grants my way for me to fund via my Manifund pot if it'd be helpful.
I am a smaller donor (<$10k/yr) who has given to the LTFF in the past. As a data point, I would be very interested in giving to a dedicated AI Safety fund.
That's helpful feedback; if others would find donating through every.org helpful (which they can signal by agree-voting with the parent comment), I'd be happy to look into this.
I think we can be very flexible for donations over $30k, so if you're interested in making a donation of that size feel free to dm me and I am sure we can figure something out.
Summary
EA Funds aims to empower thoughtful individuals and small groups to carry out altruistically impactful projects - in particular, enabling and accelerating small/medium-sized projects (with grants <$300K). We are looking to increase our level of independence from other actors within the EA and longtermist funding landscape and are seeking to raise ~$2.7M for the Long-Term Future Fund and ~$1.7M for the EA Infrastructure Fund (~$4.4M total) over the next six months.
Why donate to EA Funds? EA Funds is the largest funder of small projects in the longtermist and EA infrastructure spaces, and has had a solid operational track record of giving out hundreds of high-quality grants a year to individuals and small projects. We believe that we’re well-placed to fill the role of a significant independent grantmaker, because of a combination of our track record, our historical role in this position, and the quality of our fund managers.
Why now? We think now is an unusually good time to donate to us, as a) we have an unexpectedly large funding shortage, b) there are great projects on the margin that we can’t currently fund, and c) more stabilized funding now can give us time to try to find large individual and institutional donors to cover future funding needs.
Importantly, Open Philanthropy is no longer providing a guaranteed amount of funding to us and instead will move over to a (temporary) model of matching our funds 2:1 ($2 from them for every $1 from you, up to 3.5M from them per fund).
Where to donate: If you’re interested, you can donate to either Long-Term Future Fund (LTFF) or EA Infrastructure Fund (EAIF) here.[1]
Some relevant quotes from fund managers:
Oliver Habryka
I think the next $1.3M in donations to the LTFF (430k pre-matching) are among the best historical grant opportunities in the time that I have been active as a grantmaker. If you are undecided between donating to us right now vs. December, my sense is now is substantially better, since I expect more and larger funders to step in by then, while we have a substantial number of time-sensitive opportunities right now that will likely go unfunded.
I myself have a bunch of reservations about the LTFF and am unsure about its future trajectory, and so haven’t been fundraising publicly, and I am honestly unsure about the value of more than ~$2M, but my sense is that we have a bunch of grants in the pipeline right now that are blocked on lack of funding that I can evaluate pretty directly, and that those seem like quite solid funding opportunities to me (some of this is caused by a large number of participants of the SERI MATS program applying for funding to continue the research they started during the program, and those applications are both highly time-sensitive and of higher-than-usual quality).
Lawrence Chan
“My main takeaway from [evaluating a batch of AI safety applications on LTFF] is [LTFF] could sure use an extra $2-3m in funding, I want to fund like, 1/3-1/2 of the projects I looked at.” (At the current level of funding, we’re on track to fund a much lower proportion).
Related links
Our Vision
We think there is a significant shortage of independent funders in the current longtermist and EA infrastructure landscape, resulting in fewer outstanding projects receiving funding than is good for the world. Currently, the primary source of funding for these projects is Open Philanthropy, and whilst we share a lot of common ground, we think we add value in the following ways:
Alongside the above, EA Funds has ambitions to pursue new ways of generating value by:
Our Ask
We are looking to raise ~$4.4M from the general public to support our work over the next 6 months:
This will be matched by Open Phil at a 2:1 rate ($2 from Open Phil per $1 donated to a fund) with a ceiling of a $3.5m contribution from Open Phil (per fund). You can read more about the matching here.
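For illustration, the matching rule can be sketched as a one-line function (a hypothetical helper of my own naming, not EA Funds code); all amounts are in millions of dollars:

```python
def op_match(donations: float, cap: float = 3.5) -> float:
    """Open Phil contributes $2 per $1 donated to a fund, up to a $3.5M
    contribution cap per fund. Amounts in millions of dollars."""
    return min(2 * donations, cap)

# A $1M donation unlocks $2M in matching, so $3M moves in total.
assert op_match(1.0) == 2.0

# Matching saturates once donors give $1.75M to a fund (2 * 1.75 = 3.5):
# further donations still help, but are no longer matched.
assert op_match(2.0) == 3.5
```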
The EAIF and LTFF have received very generous donations from many individuals in the EA community. However, donations to the EAIF and LTFF have recently been quite low, especially relative to the quality and quantity of applications we’ve had in the last year. While much of this is likely due to the FTX crash and subsequently increased funding gaps of other longtermist organizations, our guess is that this is partially due to tech stocks and crypto doing poorly in the last year (though we hope that recent market trends will bring back some donors).
Calculation for LTFF funding gap
The LTFF has an estimated ideal dispersal rate of $1M/month, based on our post-November 2022 funding bar that Asya estimated[2] from looking at the funding gaps and marginal resources within the longtermist ecosystem overall. This is $6M over the next 6 months.
I also think LTFF donors should pay $200k over the next 6 months ($400k annualized) as their “fair share” of EA Funds operational costs. So in total, LTFF would like to spend $6.2M over the next 6 months.
Caleb estimated ~$700k in expected donations from individuals by default in the next 6 months, based solely on extrapolation from past trends. With Open Phil donation matching, this comes out to a total of $2.1M in expected incoming funds, or a shortfall of $4.1M.
To cover the remaining $4.1M, we would like individual donors to contribute an additional $2M, where Open Phil will provide $2.1M of matching for the first $1.05M.
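As a sanity check, the LTFF gap arithmetic above can be reproduced in a few lines (the variable names are mine, not from the post; all figures in $M):

```python
# LTFF funding-gap arithmetic, all figures in millions of dollars.
target = 1.0 * 6 + 0.2             # $1M/month dispersal for 6 months + $200k ops share = 6.2
expected_individual = 0.7          # baseline estimate of individual donations
expected_match = 2 * expected_individual          # 1.4 from Open Phil's 2:1 matching
expected_total = expected_individual + expected_match  # 2.1
shortfall = target - expected_total                    # 4.1

# Remaining Open Phil match capacity under the $3.5M per-fund cap:
match_capacity = 3.5 - expected_match   # 2.1 of matching left
extra_matched = match_capacity / 2      # so only the first 1.05 of extra donations is matched
extra_needed = shortfall - match_capacity  # 2.0 must come from individuals
```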
To get a sense of what projects your marginal dollars can buy, you might find it helpful to look at the $5M tier of the LTFF Funding Thresholds Post.
Calculation for EAIF funding gap
The EAIF has an estimated ideal dispersal rate of $800k/month, based on the proportion of our historic spend rate that we believe is above Open Phil’s bar for EA community building projects (though note that this was based on fairly brief input from Open Phil and I didn’t check with them about whether they agree with this claim). This is $4.8M over the next 6 months.
I also think EAIF donors should pay $200k over the next 6 months ($400k annualized) as their “fair share” of EA Funds operational costs. So in total, EAIF would like to spend $5M over the next 6 months.
Caleb estimated $400k in expected donations from individuals by default in the next 6 months, based solely on extrapolation from past trends. With Open Phil donation matching, this comes out to a total of $1.2M in expected incoming funds, or a shortfall of $3.8M.
To cover the remaining $3.8M, we would like individual donors to contribute an additional $1.3M, where Open Phil will provide $2.5M in donation matching.
Potential change for operational expenses payment
Going forwards, we would also like to move towards a model where donors directly pay for our operational expenses (currently we fundraise for operational expenses separately, so 100% of donations from public donors goes to our grantees). We believe that the newer model is more transparent, as it lets all donors more clearly see the true costs and cost-benefit ratio for their donations. However, making the change is still pending internal discussions, community feedback, and logistical details. We will make a separate announcement if and when we switch to a model where a percentage of public donations go to cover our operational expenses. See Appendix A for a calculation of operational expenses.
Why give to EA Funds?
We think EA Funds is well-positioned to be a significant independent grantmaker for the following reasons.
We are primarily looking for funding to support the Long-Term Future Fund and the EA Infrastructure Fund’s grantmaking.
The Long-Term Future Fund is primarily focused on reducing catastrophic risks from advanced artificial intelligence and biotechnology, as well as building and equipping a community of people focused on safeguarding humanity’s future potential. The EA Infrastructure Fund is focused on increasing the impact of projects that use the principles of effective altruism, in particular amplifying the efforts of people who aim to do an ambitious amount of good from an impartial welfarist and scope-sensitive perspective. We have included some examples of grants each fund has made in the highlighted grants section.
Our Fund Managers
We lean heavily on the experience and judgement of our fund managers. We have around five fund managers on each fund at any given time.[4] Our current fund managers include:
Guest Fund Managers
Daniel Eth (LTFF): Daniel's research has spanned several areas relevant to longtermism, and he's currently focused primarily on AI governance. He was previously a Senior Research Scholar at the Future of Humanity Institute. He is currently self-employed.
Lauro Langosco (LTFF): Lauro is a PhD student with David Krueger at the University of Cambridge. His work focuses broadly on AI Safety, in particular on demonstrations of alignment failures, forecasting AI capabilities, and scalable AI oversight.
Lawrence Chan (LTFF): Lawrence is a researcher at ARC Evals, working on safety standards for AI companies. Before joining ARC Evals, he worked at Redwood Research and as a PhD Student at the Center for Human Compatible AI at UC Berkeley.
Thomas Larsen (LTFF): Thomas was an alignment research contractor at MIRI, and he is currently running the Center for AI Policy, where he works on AI governance research and advocacy.
Clara Collier (LTFF): Clara is the managing editor of Asterisk, a quarterly journal focused on communicating insights on important issues. Before, she worked as an independent researcher on existential risks. She has a Masters in Modern Languages from Oxford.
Michael Aird (EAIF): Michael Aird is a Senior Research Manager in Rethink Priorities' AI Governance and Strategy team. He also serves as an advisor to organizations such as Training for Good and is an affiliate of the Centre for the Governance of AI. His prior work includes positions at the Center on Long-Term Risk and the Future of Humanity Institute.
Huw Thomas (EAIF): Huw is currently working part-time on various projects (including a contractor role at 80,000 Hours). Prior to this, he worked as a media associate at Longview Philanthropy and a groups associate at the Centre for Effective Altruism, and was a recipient of the CEA Community Building Grant for his work at Effective Altruism Oxford.
You can find a full list of our fund managers here.[5]
If you have more questions, feel free to leave a comment here. Caleb Parikh and the fund managers are also happy to talk to donors potentially willing to give >$30k. Linch Zhang, in particular, has volunteered himself to talk about the LTFF.
Highlighted Grants
EA Funds has identified a variety of high-impact projects, at least some of which we think are unlikely to have been funded elsewhere. (However, for any specific grant listed below, we think there's a fairly high probability it would otherwise have been funded in some form or another; figuring out counterfactuals is often hard.)
From the Long-Term Future Fund:
From the EA Infrastructure Fund:
See a complete list of our public grants at this link. You can also read the LTFF's most recent payout report here.
Planned actions over the next six months
To achieve our goal of empowering thoughtful people to pursue impactful projects, we'll attempt to do the following:
Potential negatives to be aware of
Here are some reasons you might not want to donate to EA Funds:
Potential downside risks of LTFF or EAIF
Note that we consider these issues to be structural, and we do not realistically expect these downside risks to be resolved going forward.
Areas of improvement for the LTFF and EAIF
Historically, we’ve had the following (hopefully fixable) problems:
For more, you can read Asya’s reflections on her time as chair of LTFF.
EAIF vs LTFF
Some donors are interested in giving to both the EAIF and LTFF and would like advice on which fund is a better fit for them.
We think that the EAIF is a better fit for donors who:
We think that the LTFF is a better fit for donors who are:
Closing thoughts
This post was written by Caleb Parikh and Linch Zhang. Feel free to ask questions or give us feedback in the comments below.
If you are interested in donating to either LTFF or EAIF, you can do so here.
Appendix A: Operational expenses calculations and transparency.
In the last year, EA Funds has disbursed $35M and spent ~$700k on operational expenses. The vast majority of the operational expenses went to the LTFF and EAIF, as the global health and development fund and animal welfare fund are operationally much simpler.
Historically, ~60-80% of the operational expenses have been paid to EV Ops for grant disbursement, tech, legal, and other ops.
The remaining 20-40% is used for:
I (Linch) ballparked the expected annual expenditures going forward (assuming no cutbacks) at ~$800k. The increase reflects a) inflation and b) our wanting to take on more projects, partially offset by some savings from slowing the rate of disbursements a little. But this estimate is not exact.
Since the LTFF and EAIF incur the highest expenses, I suggest donors to each fund contribute around $400k yearly, or $200k every six months, toward operational costs.
As for where we might cut or increase spending:
I think my own hours at EA Funds are somewhat contingent on operational funding. In the last month, I've been spending more than half of my working hours on EA Funds (EA Funds is buying out my time at RP), mostly helping Caleb with communications and strategic direction. I would like to continue doing this until I believe EA Funds is in a good state (or we decide to discontinue or sunset the projects I'm involved in). Obviously, whether there is enough budget to pay for my time is a crux for whether I should continue here.
Assuming we can pay for my time, other plausible uses of marginal operational funding include: a) paying external investigators for extensive (rather than just shallow) retroactive evaluations, b) attempting to launch new programs, and c) hiring professional designers for the new infosec, AI safety project, and other websites. My personal view is that marginal spending on EA Funds' operational expenses is quite impactful relative to other possible donations, but I understand if donors do not feel the same way and would prefer a higher percentage of donations to go directly to our grantees (currently it's 100%, but proposed changes may move this to ~94-97%).
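For readers who want to sanity-check the ~94-97% figure, here is a minimal sketch of the arithmetic. The ~$800k expense estimate is from this post; the annual disbursement levels in the example are illustrative assumptions, not EA Funds data.

```python
# Rough sketch: what fraction of total donations reaches grantees if
# donors cover both grants and operational expenses?

def grantee_share(grants: float, op_expenses: float) -> float:
    """Fraction of total donations that goes to grantees rather than ops."""
    return grants / (grants + op_expenses)

annual_expenses = 800_000  # ballpark estimate from the post

# Hypothetical annual disbursement levels (assumptions for illustration)
for grants in (13_000_000, 25_000_000):
    share = grantee_share(grants, annual_expenses)
    print(f"${grants:,} in grants -> {share:.1%} to grantees")
```

Under these assumed disbursement levels, the grantee share lands at roughly 94% and 97% respectively, consistent with the range quoted above.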