If you’ve read about alignment research and you want to start contributing, the new iteration of the AI Safety Camp is a great opportunity!

It’s a virtual camp running from January to May 2022, where you collaborate with other participants on open problems proposed and supervised by mentors such as John Wentworth, Beth Barnes, Stuart Armstrong, and Daniel Kokotajlo, among others. The time commitment is about 1 hour on a normal workday and 7 hours on a weekend sprint day. Around this core of research, the camp also includes talks and discussions about fundamental ideas in the field, how alignment research works, and how and where to get a job or funding.

All in all, the AI Safety Camp is a great fit if:

  • You have read enough about alignment that you’re convinced of the importance of the problem
  • You want to do alignment research (whether conceptual or applied), or to collaborate with alignment researchers (doing policy for example)
  • You don’t yet feel like you have enough research taste and grasp of the field to choose your own research problems

Note that you don’t need advanced maths skills to participate in the camp: some of the projects require no specific skillset, while others call for unusual ones (evolutionary genetics, history...). If you care about alignment but worry about your technical background, I encourage you to apply for a project without a required skillset and learn what you need as you go.

All the details on how to apply are available on the website (including the list of open problems).

Comments:

Seems interesting; I applied. On a logistical note, supplying a pre-formatted Google Sheet for draft answers is a neat innovation.

This looks very promising. I think I’ll apply.

About the application: the open-ended questions prompt with

“> 5 concise lines”

Does this mean “More than 5 concise lines” or does it mean “Put your 5 concise lines here”? Thanks for the clarification.

It means more. :)