Funding Case: AI Safety Camp 11
This is a linkpost to our funding case on Manifund.

## Project summary

AI Safety Camp has a seven-year track record of enabling participants to try their fit, find careers, and start new orgs in AI Safety. We host up-and-coming researchers outside the Bay Area and London hubs.

If this fundraiser passes…

* $15k, we won't run a full program, but can facilitate 10 projects.
* $40k, we can organise the 11th edition, for 25 projects.
* $70k, we can pay a third organiser, for 35 projects.
* $300k, we can cover stipends for 40 projects.

## What are this project's goals? How will you achieve them?

> By all accounts they are the gold standard for this type of thing. Everyone says they are great, I am generally a fan of the format, I buy that this can punch way above its weight or cost. If I was going to back [a talent funnel], I'd start here.
>
> — Zvi Mowshowitz (Nov 2024)

> My current work (AI Standards Lab) was originally an AISC project. Without it, I'd guess I would be full-time employed in the field at least 1 year later, and the EU standards currently close to completion would be a lot weaker. High-impact/high-neglectedness opportunities are fairly well positioned to be kickstarted with volunteer effort in AISC, even if some projects will fail (hits-based). After some initial results during AISC, they can be funded more easily.
>
> — Ariel Gil (Jan 2025)

AI Safety Camp is part incubator and part talent funnel:

* an incubator, in that we help experienced researchers form new collaborations that can last beyond a single edition. Alumni went on to found 10 organisations.
* a talent funnel, in that we help talented newcomers learn by doing – working on a concrete project in the field. Alumni went on to take 43 jobs in AI Safety.

The incubator case is that AISC seeds epistemically diverse initiatives. Edition 10 supports new alignment directions, control limits research, neglected legal regulations, and 'slow down AI' advocacy. Funders who are uncertai