Title: Research Assistant for AI Standards Development
Ideal start date: December 2022
Hours: 20-40 hours/week
Compensation: $30/hour to $50/hour, depending on experience and qualifications
Work location: Remote
Reports to: Tony Barrett, BERI Senior Policy Analyst
For best consideration, please apply by Monday, November 7th, 2022, 5pm Eastern Time. Applications received after that date may also be considered, but only after those received by the deadline have been reviewed.
Responsibilities
Supporting work planned by Tony Barrett and UC Berkeley colleagues to develop an AI-standards “profile” with best practices for developers of cutting-edge, increasingly general-purpose AI, building on the ideas in Section 4 of the paper by Barrett and colleagues, “Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks”. The profile guidance will be primarily for use by developers of such AI systems, in conjunction with the NIST AI Risk Management Framework (AI RMF) and/or the AI risk management standard ISO/IEC 23894. Our goal is to help set norms for safety-related practices across regulatory regimes, reducing the chances that developers of highly advanced AI systems (including proto-AGI) would have to compromise on safety, security, ethics, or related qualities of AI systems in order to be competitive.
Tasks will include research and analysis of technical or policy issues in AI safety standards or related topics. The goal is to help our team address key AI technical issues with actionable guidance for AI developers, in ways that improve the overall quality of our profile guidance documents.
Technical research assistance tasks may include:
Literature searches on technical methods for safety or security of machine learning models
Gap analysis to check that our draft guidance would address key technical issues in AI safety, security or other areas
Policy research assistance tasks may include:
Identifying and analyzing related standards or regulations
Mapping specific sections of our draft guidance to specific parts of related standards or regulations
Checking that our draft guidance would meet the intent and requirements of related standards or regulations
We currently have funding for approximately one year of work, with the potential to obtain additional funding to renew or expand this work.
Qualification Criteria
The most competitive candidates will meet the criteria below.
Education or experience in one or more of the following:
AI development techniques and procedures used at leading AI labs developing increasingly general-purpose AI;
Technical AI safety concepts, techniques and literature;
Industry standards and best practices for AI or other software, and compliance with standards language;
Public policy or regulations (especially in the United States) for AI or other software
Ability to research and analyze technical or policy issues in AI safety standards or related topics
Ability to track and complete multiple tasks to meet deadlines with little or no supervision
Good English communication skills, both written and verbal, including editing text to improve understandability
Availability for video calls (e.g. via Zoom) for 30 minutes three times a week, at some point between 9am and 5pm Eastern Time (it’s not necessary to be available that whole time, and otherwise you can choose your own working hours)
We will likely hire two people, each on a part-time basis, one with a technical background and one with a policy background. However, we are open to having one person fill both of those roles.
Application Process
Apply here.
Candidates invited to interview will also be asked to perform a written work test, which we expect to take one to two hours.
More information on BERI's website.