This looks like a guide for [working in a company that already has a research agenda, and doing engineering work for them based on what they ask for] and not for [trying to come up with a new research direction that is better than what everyone else is doing], right?
Mostly, yes, that's right. The exception is Level 7: Original Experiments, which suggests several resources for forming an inside view and coming up with new research directions, but I think many people could get hired as research engineers before doing that stuff (though maybe they do that stuff while working as a research engineer and that leads them to come up with new, better research directions).
This is a great guide - thank you. However, in my experience as someone completely new to the field, 100-200 hours on each level is very optimistic. I've easily spent double or triple that time on the first two levels and still haven't reached a comfortable level.
Thanks, yeah that's a pretty fair sentiment. I've changed the wording to "at least 100-200 hours," but I guess the idea was more to present a very efficient way of learning things that maybe 80/20's some of the material. This does mean there will be more to learn—rather than these being strictly linear progression levels, I imagine someone continuously coming back to AI safety readings and software/ML engineering skills often throughout their journey, as it sounds like you have.
I'd have Level 1 (AI Safety Fundamentals) be Level 4 or 5, probably. I'm pretty happy to hire engineers who have good ML skills but are rusty on "AI safety fundamentals"; I think they can pick that up on the job much more easily than the coding / ML skills.
Interesting, that is the level that feels most like it doesn't have a solid place in a linear progression of skills. I wrote "Level 1 kind of happens all the time" to try to reflect this, but I ultimately decided to put it at the start because I feel that for people just starting out it can be a good way to test their fit for AI safety broadly (do they buy the arguments?) and decide whether they want to go down a more theoretical or empirical path. I just added some language to Level 1 to clarify this.
My understanding is that Level 1 is supposed to happen in parallel with the others, but this might be clearer by separating it outside the numbered system entirely, like Level 0 or Level -1 or something.
As for why it's included as the first step, I think the reasoning is that if someone knows nothing about AI safety, their first question is going to be "Should I actually care about this problem", and to answer that question they do need to do a little bit of AI safety specific reading.
I agree this could be made clearer - one of the first bits of advice I got when I started asking around about this stuff was "Technical ML ability is harder and takes longer to learn than AI safety knowledge does, so spend most of your time on this as opposed to AI safety" and I remember this being a very unintuitive insight.
I recognize many of the institutions you mentioned, such as Nvidia and MIT. How confident are you that the more obscure ones like Hugging Face are trustworthy?
I'm not sure what you particularly mean by trustworthy. If you mean a place with good attitudes and practices towards existential AI safety, then I'm not sure HF has demonstrated that.
If you mean a company I can instrumentally trust to build and host tools that make it easy to work with large transformer models, then yes, it seems like HF pretty much has a monopoly on that for the moment, and it's worth using their tools for a lot of empirical AI safety research.
Summary: A level-based guide for independently up-skilling in AI Safety Research Engineering that aims to give concrete objectives, goals, and resources to help anyone go from zero to hero.
Cross-posted to the EA Forum. View a pretty Google Docs version here.
Introduction
I think great career guides are really useful for guiding and structuring the learning journey of people new to a technical field like AI Safety. I also like role-playing games. Here’s my attempt to use a levelling framework to break one possible path from zero to hero in Research Engineering for AI Safety (e.g. jobs with the “Research Engineer” title) into objectives, concrete goals, and resources. I hope this kind of framework makes it easier to see where you are on this journey, how far you have to go, and some options for getting there.
I’m mostly making this to sort out my own thoughts about my career development and how I’ll support other students through Stanford AI Alignment, but hopefully, this is also useful to others! Note that I assume some interest in AI Safety Research Engineering—this guide is about how to up-skill in Research Engineering, not why (though working through it should be a great way to test your fit). Also note that there isn’t much abstract advice in this guide (see the end for links to guides with advice), and the goal is more to lay out concrete steps you can take to improve.
For each level, I describe the general capabilities of someone at the end of that level, some object-level goals to measure that capability, and some resources to choose from that would help get there. The categories of resources within a level are listed in the order you should progress, and resources within a category are roughly ordered by quality. There’s some redundancy, so I would recommend picking and choosing between the resources rather than doing all of them. Also, if you are a student and your university has a good class on one of the below topics, consider taking that instead of one of the online courses I listed.
As a very rough estimate, I think each level should take at least 100-200 hours of focused work, for a total of 700-1400 hours. At 10 hours/week (quarter-time), that comes to around 16-32 months of study but can definitely be shorter (e.g. if you already have some experience) or longer (if you dive more deeply into some topics)! I think each level is about evenly split between time spent reading/watching and time spent building/testing, with more reading earlier on and more building later.
Confidence: mid-to-high. I am not yet an AI Safety Research Engineer (but I plan to be)—this is mostly a distillation of what I’ve read from other career guides (linked at the end) and talked about with people working on AI Safety. I definitely haven’t done all these things, just seen them recommended. I don’t expect this to be the “perfect” way to prepare for a career in AI Safety Research Engineering, but I do think it’s a very solid way.
Level 1: AI Safety Fundamentals
Objective
You are familiar with the basic arguments for existential risks due to advanced AI, models for forecasting AI advancements, and some of the past and current research directions within AI alignment/safety. You have an opinion on how much you buy these arguments and whether you want to keep exploring AI Safety Research Engineering.
Why this first? Exposing yourself to these fundamental arguments and ideas is useful for testing your fit for AI Safety generally, but that isn’t to say you should “finish” this Level first and move on. Rather, you should be coming back to these readings and keeping up to date with the latest work in AI Safety throughout your learning journey. It’s okay if you don’t understand everything on your first try—Level 1 kind of happens all the time.
Goals
Resources
Level 2: Software Engineering
Objective
You can program in Python at the level of an introductory university course. You also know some other general software engineering tools/skills like the command line, Git/GitHub, documentation, and unit testing.
Why Python? Modern Machine Learning work, and thus AI Safety work, is almost entirely written in Python. Python is also an easier language for beginners to pick up, and there are plenty of resources for learning it.
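To make the bar here concrete, by the end of this level you should find it straightforward to write and test a small function like the sketch below. This is purely illustrative (the function and file names are hypothetical), and it should run with Python 3 and pytest:

```python
# fizzbuzz.py - a classic introductory programming exercise
def fizzbuzz(n: int) -> str:
    """Return 'Fizz' for multiples of 3, 'Buzz' for multiples of 5,
    'FizzBuzz' for multiples of both, and the number itself otherwise."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


# test_fizzbuzz.py - run with pytest to check the behaviour above
def test_fizzbuzz():
    assert fizzbuzz(9) == "Fizz"
    assert fizzbuzz(10) == "Buzz"
    assert fizzbuzz(30) == "FizzBuzz"
    assert fizzbuzz(7) == "7"
```

If writing something like this, putting it under Git version control, and pushing it to GitHub with a short README all feel routine, you are probably in good shape for this level.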
Goals
Resources
Level 3: Machine Learning
Objective
You have the mathematical context necessary for understanding Machine Learning (ML). You know the differences between supervised and unsupervised learning and between classification and regression. You understand common models like linear regression, logistic regression, neural networks, decision trees, and clustering, and you can code some of them in a library like PyTorch or JAX. You grasp core ML concepts like loss functions, regularization, bias/variance, optimizers, metrics, and error analysis.
Why so much math? Machine learning at its core is basically applied statistics and multivariable calculus. It used to be that you needed to know this kind of math really well, but now with techniques like automatic differentiation, you can train neural networks without knowing much of what’s happening under the hood. These foundational resources are included for completeness, but you can probably spend a lot less time on math (e.g. the first few sections of each course) depending on what kind of engineering work you intend to do. You might want to come back and improve your math skills for understanding certain work in Levels 6-7, though, and if you find this math really interesting, you might be a good fit for theoretical AI alignment research.
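To make the automatic differentiation point concrete, here is a minimal sketch (assuming PyTorch is installed; the toy data and hyperparameters are made up for illustration) of fitting a one-variable linear regression where autograd computes the gradients for you:

```python
import torch

# Toy data: y = 3x + 2 plus a little noise
x = torch.linspace(0, 1, 100).unsqueeze(1)
y = 3 * x + 2 + 0.05 * torch.randn_like(x)

# Parameters to learn, tracked by autograd
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

optimizer = torch.optim.SGD([w, b], lr=0.1)
for step in range(1000):
    loss = ((w * x + b - y) ** 2).mean()  # mean squared error
    optimizer.zero_grad()
    loss.backward()   # autograd computes d(loss)/dw and d(loss)/db for us
    optimizer.step()

print(w.item(), b.item())  # should end up close to 3 and 2
```

Knowing the underlying calculus helps you debug setups like this, but as noted above, the library does the mechanical differentiation for you.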
Goals
Resources
Level 4: Deep Learning
Objective
You’ve dived deeper into Deep Learning (DL) through the lens of at least one subfield such as Natural Language Processing (NLP), Computer Vision (CV), or Reinforcement Learning (RL). You now have a better understanding of ML fundamentals, and you’ve reimplemented some core ML algorithms “from scratch.” You’ve started to build a portfolio of DL projects you can show others.
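For a sense of what “from scratch” can mean at this level, here is a minimal hand-rolled sketch (plain NumPy, toy data, gradients derived by hand rather than by autograd) of training a logistic regression classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: label is 1 when the two features sum to a positive number
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    p = sigmoid(X @ w + b)            # predicted probability of class 1
    grad_logits = (p - y) / len(y)    # hand-derived gradient of cross-entropy w.r.t. the logits
    w -= lr * (X.T @ grad_logits)     # chain rule back to the weights
    b -= lr * grad_logits.sum()

print(((p > 0.5) == (y == 1)).mean())  # training accuracy; should be well above chance
```

Reimplementing algorithms like this (or backpropagation for a small MLP, or a simple policy-gradient RL agent) forces you to understand the pieces a framework normally hides, which is exactly the kind of skill the portfolio projects are meant to demonstrate.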
Goals
Resources
Level 5: Understanding Transformers
Objective
You have a principled understanding of self-attention, cross-attention, and the general transformer architecture along with some of its variants. You are able to write a transformer like BERT or GPT-2 “from scratch” in PyTorch or JAX (a skill I believe Redwood Research looks for), and you can use resources like 🤗 Transformers to work with pre-trained transformer models. Through experimenting with deployed transformer models, you have a decent sense of what transformer-based language and vision models can and cannot do.
Why transformers? The transformer architecture is currently the foundation for State of the Art (SOTA) results on most deep learning benchmarks, and it doesn’t look like it’s going away soon. Much of the newest ML research involves transformers, so AI Safety organizations working on prosaic AI alignment or studying current models practically all build their research around them.
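To give a taste of the core operation, here is a minimal single-head self-attention sketch in PyTorch (no masking, dropout, or multi-head split; a real GPT-2-style block adds those plus layer norm and an MLP):

```python
import torch
import torch.nn.functional as F
from torch import nn

class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention."""
    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)  # (batch, seq_len, seq_len)
        weights = F.softmax(scores, dim=-1)  # how much each token attends to every other token
        return self.out_proj(weights @ v)

x = torch.randn(1, 10, 64)          # a batch of 10 token embeddings of dimension 64
print(SelfAttention(64)(x).shape)   # torch.Size([1, 10, 64])
```

For the “work with pre-trained models” half of the objective, the 🤗 Transformers pipeline API (e.g. pipeline("text-generation", model="gpt2")) is a common way to start probing what deployed models can and cannot do.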
Goals
Resources
Level 6: Reimplementing Papers
Objective
You can read a recently published AI research paper and efficiently implement the core technique they present to validate their results or build upon their research. You also have a good sense of the latest ML/DL/AI Safety research. You’re pretty damn employable now—if you haven’t started applying for Research Engineering jobs/internships, consider getting on that!
Why papers? I talked with research scientists or engineers from most of the empirical AI Safety organizations (i.e. Redwood Research, Anthropic, Conjecture, Ought, CAIS, Encultured AI, DeepMind), and they all said that being able to read a recent ML/AI research paper and efficiently implement it is both a signal of a strong engineering candidate and a good way to build useful skills for actual AI Safety work.
Goals
Resources
Level 7: Original Experiments
Objective
You can now efficiently grasp the results of AI research papers and come up with novel research questions to ask as well as empirical ways to answer them. You might already have a job at an AI Safety organization and have picked up these skills as you got more Research Engineering experience. If you can generate and test these original experiments particularly well, you might consider Research Scientist roles, too. You might also want to apply for AI residencies or Ph.D. programs to explore some research directions further in a more structured academic setting.
Goals
Resources
Epilogue: Risks
Embarking on this quest brings with it a few risks. Keeping them in mind should make you less likely to fail in the following ways:
Capabilities
Difficulty
Early Over-Specialization
Late Over-Generalization
Sources
Here are some of the other great career guides and resources I used in the making of this. Most of the guides here also have good general advice that would be useful to read even if you don’t do the other things they suggest. Consider checking them out!
Many thanks to Jakub Nowak, Peter Chatain, Thomas Woodside, Erik Jenner, Jacy Reese Anthis, and Konstantin Pilz for review and suggestions!