(Update: we are no longer accepting applications for interns.)

In addition to hiring full-time researchers, ARC is also hiring interns for 1-3 month internships. We're particularly interested in interns for summer 2022, but if a different time works better for you, that's fine too.

Internships are appropriate for undergraduates and graduate students, as well as anyone considering a career change.

To apply for an internship, you can use this application and check the “internship” box. If you submitted an application previously, there's no need to resubmit; we'll follow up with you if anything is ambiguous.

Salary for interns is $15k/month. We encourage interns to work from our Berkeley office, especially at the start of their internship.

(The rest of this post is copied from our previous hiring post.)

What is ARC?

ARC is a non-profit organization focused on theoretical research to align future machine learning systems with human interests. We are aiming to develop alignment strategies that would continue to work regardless of how far ML is scaled up or how ML models end up working internally.

Probably the best way to understand our work is to read Eliciting Latent Knowledge, a report describing some recent and upcoming research, which illustrates our general methodology.

We currently have 2 research staff (Paul Christiano and Mark Xu). We’re aiming to hire another 1-2 researchers in early 2022. ARC is a new organization and is hoping to grow significantly over the next few years, so early hires will play a key role in helping define and scale up our research.

Who should apply?

Most of all, you should send in an application if you feel excited about proposing the kinds of algorithms and counterexamples described in our report on ELK.

We’re open to anyone who is excited about working on alignment, even if you don't yet have any research background (or your research is in another field). You may be an especially good fit if you:

  • Are creative and generative (e.g. you may already have some ideas for potential strategies or counterexamples for ELK, even if they don't work).
  • Have experience designing algorithms, proving theorems, or formalizing concepts.
  • Have a broad base of knowledge in mathematics and computer science (we often draw test cases and counterexamples from these fields).
  • Have thought a lot about the AI alignment problem, especially in the limit of very powerful AI systems.

Hiring will be a priority for us in early 2022, and we don't mind reading a lot of applications, so please err on the side of sending in an application even if you’re not sure you’ll be a fit!
