AI has enormous beneficial potential if it is governed well. However, like a growing contingent of AI (and other) experts from academia, industry, government, and civil society, we also think that AI systems could soon (e.g. in the next 15 years) cause catastrophic harm. For example, this could happen if malicious human actors deliberately misuse advanced AI systems, or if we lose control of future powerful systems designed to take autonomous actions.[1]

To improve the odds that humanity successfully navigates these risks, we are soliciting short expressions of interest (EOIs) for funding for work across six subject areas, described here.

Strong applications might be funded by Good Ventures (Open Philanthropy’s partner organization), or by any of more than 20 (and growing) other philanthropists who have told us they are concerned about these risks and are interested in hearing about grant opportunities we recommend.[2] (You can indicate in your application whether we have permission to share your materials with other potential funders.)

Click here to read the full RFP.

As this is a new initiative, we are uncertain about the volume of interest we will receive. Our goal is to keep this form open indefinitely; however, we may need to temporarily pause accepting EOIs if we lack the staff capacity to properly evaluate them. We will post any updates or changes to the application process on this page.

Anyone is eligible to apply, including those working in academia, nonprofits, industry, or independently.[3] We will evaluate EOIs on a rolling basis. See below for more details.

If you have any questions, please email us. If you have any feedback about this page or program, please let us know (anonymously, if you want) via this short feedback form.
