Yes, this is my first post in almost a year. I’m no longer prioritizing this blog, but I will still occasionally post something.

I wrote ~2 years ago that it was hard to point to concrete opportunities to help the most important century go well. That’s changing.

There are a good number of jobs available now that are both really promising opportunities to help (in my opinion) and are suitable for people without a lot of pre-existing knowledge of AI risk (or even AI). The jobs are demanding, but unlike many of the job openings that existed a couple of years ago, they are at well-developed organizations and involve relatively clear goals.

So if you’re someone who wants to help, but has been waiting for the right moment, this might be it. (Or not! I’ll probably keep making posts like this as the set of opportunities gets wider.)

Here are the jobs that best fit this description right now, as far as I can tell. The rest of this post will give a bit more detail on how these jobs can help, what skills they require and why these are the ones I listed.

Organization Location Jobs Link
UK AI Safety Institute London (remote work possible within the UK) Engineering and frontend roles, cybersecurity roles Here
AAAS, Horizon Institute for Public Service, Tech Congress Washington, DC Fellowships serving as entry points into US policy roles Here
AI companies: Google DeepMind, OpenAI, Anthropic1 San Francisco and London (with some other offices and remote work options) Preparedness/Responsible Scaling roles; alignment research roles Here, here, here, here
Model Evaluation and Threat Research (METR) (fewer roles available) Berkeley (with remote work options) Engineering and data roles Here

Software engineering and development (and related areas) seem especially valuable right now, so think about whether you know folks with those skills who might be interested!

How these help

A lot of these jobs (and the ones I know the most about) would be contributing toward a possible global standards regime for AI: AI systems should be subject to testing to see whether they present major risks, and training/deploying AI should stopped (e.g., by regulation) when it can’t be done safely.

The basic hope is:

  1. Teams will develop “evals”: tests of what AIs are capable of, particularly with respect to possible risks. For example, one eval might be prompting an AI to give a detailed description of how to build a bioweapon; the more detailed and accurate its response, the more risk the AI poses (while also possibly having more potential benefits as well, by virtue of being generally more knowledgeable/capable).
  2. It will become common (through regulation, voluntary action by companies, industry standards, etc.) for cutting-edge AI systems to be subject to evals for dangerous capabilities.
  3. When evals reveal risk, they will trigger required mitigations. For example:
    1. An AI capable of bioweapons development should be (a) deployed in such a way that people can’t use it for that (including by “jailbreaking” it), and (b) kept under good security to stop would-be terrorists from circumventing the restrictions.
    2. AIs with stronger and more dangerous capabilities might require very challenging mitigations, possibly beyond what anyone knows how to do today (for example, rigorous demonstrations that an AI won’t have dangerous unintended aims, even if this sort of thing is hard to measure).
  4. Ideally, we’d eventually build a robust international governance regime (comparisons have been made to nuclear non-proliferation regimes) that reliably enforces rules like these, while safe and beneficial AI goes forward. But my view is that even dramatically weaker setups can still help a lot (some more on this theme here.)

The jobs above include designing and running evals (#1-#2); designing and implementing company policies2 that can serve as early versions of #3 (I don’t think company self-enforcement is good enough in the long run, but I do think it’s a good start for iterating on the policies and practices); and national and international policy dialogues that could work toward #4.

I think of international standards like these as a central piece of the picture for helping the most important century go better, and it’s the piece I’ve been focused on over the last year. Some benefits of a good regime could include:

  • Catching early warnings of dangers from advanced AI, to build consensus worldwide around how to handle them. (Throughout this section, when I talk about “dangers” and “safety” I mean them very broadly - not just alignment risk but also concerns about power imbalances, new kinds of minds, etc.)
  • Delaying AI development if (and wherever) it can’t be done safely.
  • Changing incentives for AI developers worldwide: progress on protective measures (safety research, information security, and more) would become necessary to deploy powerful systems, so these things could become top priorities rather than side projects.
  • Having a framework in which, if the first highly advanced AIs are safe, it’s possible to (partly with AI help) prevent reckless or malicious actors from eventually building dangerous ones (more).

The organizations I’ve listed also offer a number of other jobs that involve working on AI safety research, working out other aspects of AI regulation, etc.

What skills are needed

A lot of these jobs are related to software engineering (and other software development, e.g. frontend and UX development). My sense is that this doesn’t have to be AI-specific experience - anyone with a strong background working on or with software engineering and software development might be a fit with several of these roles.

In addition, all of these are fairly large-scale projects and organizations that might have openings for generalists as well. I’d expect project management and research to be particularly useful background skills.

This isn’t everyone, but the set of potential candidates seems wider (and the jobs for such people more promising) than in the past!

So

If you might be a good candidate, please apply!

If not, something you can do is take 10 minutes to think of people who might be, and shoot them a note.

Thank you!

Appendix: how I decided which jobs to list

For this post, I wanted to focus on jobs suitable for people who aren’t already steeped in the world of AI and AI risk, and that would present smoother experiences than trying to start a new organization or join one that’s very new, small, and/or fluid in its goals. As one proxy for this, I looked for organizations with strong AI focus listing at least 10 open roles on the 80,000 Hours job board (which generally tries to list roles focused on AI risk rather than any and all AI roles). The set of jobs I found this way was a pretty good match for the set I would’ve brainstormed on my own based on the criteria.3 I added METR even though it’s smaller than the other orgs, because I advise METR regularly enough to know it fairly well, and felt comfortable advertising the roles.

I’m sure I didn’t choose these perfectly, and this post isn’t meant to be a remotely exhaustive list - just a plug for some jobs I understand especially well and think could be good opportunities for readers.

Footnotes


  1. I am married to the President of Anthropic and have a financial interest in both Anthropic and OpenAI via my spouse. I listed these companies in order of the number of employees (descending) according to Google. To be clear, I am not involved in hiring at any of the listed organizations and can’t help anyone get a job there; I’m just broadcasting the opportunities. 

  2. See Anthropic’s responsible scaling policy and OpenAI’s Preparedness Framework

  3. There was one organization (aside from METR) that didn’t list more than 5 roles, but that I thought belonged anyway, so I just added it. 

New Comment