
CHCAI/MIRI research internship in AI safety

Post author: RobbBB, 13 February 2017, 06:34PM

The Center for Human-Compatible AI (CHCAI) and the Machine Intelligence Research Institute (MIRI) are looking for talented, driven, and ambitious technical researchers for a summer research internship.



CHCAI is a research center based at UC Berkeley with PIs including Stuart Russell, Pieter Abbeel, and Anca Dragan. CHCAI describes its goal as "to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems".

MIRI is an independent research nonprofit located near the UC Berkeley campus with a mission of helping ensure that smarter-than-human AI has a positive impact on the world.

CHCAI's research focus includes work on inverse reinforcement learning and human-robot cooperation, while MIRI's focus areas include task AI and computational reflection. Both groups are also interested in theories of (bounded) rationality that may help us develop a deeper understanding of general-purpose AI agents.


To apply:

1. Fill in the form here: https://goo.gl/forms/bDe6xbbKwj1tgDbo1

2. Send an email to beth.m.barnes@gmail.com with the subject line "AI safety internship application", attaching your CV, a piece of technical writing on which you were the primary author, and your research proposal.

The research proposal should be one to two pages in length. It should outline a problem you think you can make progress on over the summer, and some approaches to tackling it that you consider promising. We recommend reading over CHCAI's annotated bibliography and the concrete problems agenda as good sources for open problems in AI safety, if you haven't previously done so. You should target your proposal at a specific research agenda or a specific adviser’s interests. Advisers' interests include:

• Andrew Critch (CHCAI, MIRI): anything listed in CHCAI's open technical problems; negotiable reinforcement learning; game theory for agents with transparent source code (e.g., "Program Equilibrium" and "Parametric Bounded Löb's Theorem and Robust Cooperation of Bounded Agents").

• Daniel Filan (CHCAI): the contents of "Foundational Problems," "Corrigibility," "Preference Inference," and "Reward Engineering" in CHCAI's open technical problems list.

• Dylan Hadfield-Menell (CHCAI): application of game-theoretic analysis to models of AI safety problems (specifically by people who come from a theoretical economics background); formulating and analyzing AI safety problems as CIRL games; the relationships between AI safety and principal-agent models / theories of incomplete contracting; reliability engineering in machine learning; questions about fairness.

• Jessica Taylor, Scott Garrabrant, and Patrick LaVictoire (MIRI): open problems described in MIRI's agent foundations and alignment for advanced ML systems research agendas.

This application does not bind you to work on your submitted proposal. Its purpose is to demonstrate your ability to make concrete suggestions for how to make progress on a given research problem.


Who we're looking for:

This is a new and somewhat experimental program. You'll need to be self-directed, and you'll need enough background knowledge to start tackling the problems on your own. The supervisors can give you guidance on research, but they aren't going to teach you the material. If you're deeply motivated by research, however, this should be a fantastic experience. Successful applicants will provide examples of technical writing, demonstrate motivation and aptitude for research, and produce a concrete research proposal. We expect most successful applicants will either:

• have or be pursuing a PhD closely related to AI safety;

• have or be pursuing a PhD in an unrelated field, but currently pivoting to AI safety, with evidence of sufficient knowledge and motivation for AI safety research; or

• be an exceptional undergraduate or masters-level student with concrete evidence of research ability (e.g., publications or projects) in an area closely related to AI safety.



Program dates are flexible and may vary from individual to individual. However, our assumption is that most people will come for twelve weeks, starting in early June. The program will take place in the San Francisco Bay Area. Basic living expenses will be covered. We can't guarantee that housing will be arranged for you, but we can assist in finding housing if needed. Interns who are not US citizens will most likely need to apply for J-1 intern visas. Once you have been accepted to the program, we can help you with the required documentation.



The deadline for applications is March 1. Applicants should hear back about decisions by March 20.
