Worth mentioning that most of the Cyborgism community's founders came out of AISC or did related projects there beforehand.
I participated in the previous edition of AISC and found it very valuable for my involvement in AI Safety. I acquired knowledge (on standards and the standards process), gained experience, and made contacts. I appreciate how much coordination AISC enables: groups form that give many people their first hands-on experience and a way to step up their involvement.
Thank you for sharing, Jonathan.
Welcoming any comments here (including things that went less well, so we can do better next time!).
Strong upvoted. I was a participant in AISC8, on the team that went on to launch AI Standards Lab, which I think counterfactually would not have been launched if not for AISC.
Project summary
AI Safety Camp is a program with a 5-year track record of enabling people to find careers in AI Safety.
We support up-and-coming researchers outside the Bay Area and London hubs.
We are out of funding. To make the 10th edition happen, fund our stipends and salaries.
What are this project's goals and how will you achieve them?
AI Safety Camp is a program for inquiring into how to work on ensuring future AI is safe, and for trying concrete work on that in a team.
For the 9th edition of AI Safety Camp we opened applications for 29 projects.
We are the first to host a special area supporting “Pause AI” work. With funding, we can scale from 4 projects for restricting corporate AI development this edition to 15 projects next edition.
We are excited about our new research lead format, since it combines:
How will this funding be used?
We are fundraising to pay for:
Whether we run the tenth edition or put AISC on hold indefinitely depends on your donations.
Last June, we had to freeze a year's worth of salary for three staff. Our ops coordinator had to leave, and Linda and Remmelt decided to run one more edition as volunteers.
AISC has previously received grants paid with FTX money. After the FTX collapse, we froze $255K in funds to cover clawback claims. For the current AISC, we have $99K left from SFF that was earmarked for stipends – but nothing for salaries, and nothing for future AISCs.
If we have enough money, we might also restart the in-person version of AISC. This decision will also depend on an ongoing external evaluation of AISC, which, among other things, is evaluating the difference in impact between the virtual and in-person AISCs.
By default we’ll decide what to prioritise with the funding we get. But if you want to have a say, we can discuss that. We can earmark your money for whatever you want.
Potential budgets for various versions of AISC
These are example budgets for different possible versions of the virtual AISC. If our funding lands somewhere in between, we’ll do something in between.
Virtual AISC - Budget version
In the Budget version, the organisers do the minimum job required to get the program started, but provide no continuous support to AISC teams during their projects and have no time for evaluation and improvement of future versions of the program.
Salaries are calculated based on $7K per person per month.
Virtual AISC - Normal version
For the non-budget version, we have one more staff member and more paid hours per person, which means we can provide more support all round.
Stipends estimate based on: $180K = $1.5K/research lead × 40 + $1K/team member × 120
Number of research leads (40) and team members (120) are guesses based on how much we think AISC will grow.
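As a sanity check, the stipend estimate can be recomputed directly from its components (the per-person stipends and the guessed participant counts above):

```python
# Recompute the stipend estimate from the figures in the post:
# $1.5K per research lead (40 leads), $1K per team member (120 members).
# The participant counts are the organisers' growth guesses, not commitments.
research_lead_stipend = 1_500
num_research_leads = 40
team_member_stipend = 1_000
num_team_members = 120

total = (research_lead_stipend * num_research_leads
         + team_member_stipend * num_team_members)
print(f"${total:,}")  # prints the combined stipend estimate
```

The two products come to $60K for research leads and $120K for team members, i.e. $180K combined.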
Who is on your team and what's your track record on similar projects?
We have run AI Safety Camp over five years, covering 8 editions, 74 teams, and 251 participants.
We iterated a lot, based on participant feedback. We converged on a research lead format we are excited about. We will carefully scale this format with your support.
As researchers ourselves, we can meet potential research leads where they are. We can provide useful guidance and feedback in almost every area of AI Safety research.
We are particularly well-positioned to support epistemically diverse bets.
Organisers
Remmelt – coordinator of "do not build uncontrollable AI"
Communities he works with include Pause AI, creative professionals, anti-tech-solutionists, product safety experts, and climate change researchers.
Linda - coordinator of "everything else"
Track record
AI Safety Camp is primarily a learning-by-doing training program. People get to try a role and explore directions in AI safety, by collaborating on a concrete project.
Multiple alumni have told us that AI Safety Camp was how they got started in AI Safety.
AISC topped the ‘average usefulness’ list in Daniel Filan’s survey.
Papers that came out of the camp include:
Projects started at AI Safety Camp went on to receive a total of $613K in grants:
$83K from SFF
Organizations launched out of camp conversations include:
Alumni went on to take positions at:
These are just the positions we know about. Many more alumni are engaged in AI Safety in other ways, e.g. as PhD students or independent researchers.
Update: Both of us now consider positions at OpenAI net negative, and we are seriously concerned about positions at other AGI labs.
For statistics of previous editions, see here. We also recently commissioned Arb Research to run alumni surveys and interviews to carefully evaluate AI Safety Camp's impact.
What are the most likely causes and outcomes if this project fails? (premortem)
His guess, he replied, was that he was not currently super interested in most of the projects we found RLs for, and not super interested in the "do not build uncontrollable AI" area.
What other funding are you or your project getting?
No other funding sources.