If you're interested in technical AI safety, it can be hard to know where to start. Many people have made courses and reading lists to help newcomers get oriented.

We made a spreadsheet of these resources for learning about AI safety so you can easily see what's available. It was originally made for internal purposes here at Nonlinear, but we thought it might be helpful to anyone interested in becoming a safety researcher.

Please let us know if you notice anything that we're missing or that needs updating. The spreadsheet was made about a year ago, but we try to keep it up to date.

Highlights

There are a lot of courses and reading lists out there. Of the ones we investigated, if you're new to the field we recommend Richard Ngo's curriculum for the AGI Safety Fundamentals program. It is shorter, more structured, and broader than most alternatives. You can register interest for the next round of the program, or simply work through the reading list on your own.

The new course by Dan Hendrycks (course.mlsafety.org) also seems great. It is structured much more like a classic MOOC, and since there are no cohorts, you can start whenever you'd like.

Comments

"The new one by Dan Hendrycks also seems great."

The link here seems to point to the wrong address (I want in but the link doesn't work).

Link is course.mlsafety.org.

Whoops, thanks for pointing that out. Updated it.