This is a list of past and present research that could inform slowing AI. It is roughly sorted in descending order of priority, both between and within subsections. I've read about half of these; I don't necessarily endorse them. Please have a low bar for suggesting additions, replacements, rearrangements, etc.

Slowing AI

There is little research focused on whether or how to slow AI progress.[1]

Particular (classes of) interventions & affordances

Making AI risk legible to AI labs and the ML research community

Transparency & coordination

Relates to "Racing & coordination." Roughly, that subsection is about world-modeling and threat-modeling, and this subsection is about solutions and interventions.

See generally Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims (Brundage et al. 2020).

Compute governance

Standards

Regulation

Publication practices

Differentially advancing safer paths

Actors' levers

There are some good lists and analyses, but not focused on slowing AI.

Racing & coordination

Racing for powerful AI: how actors that develop AI act, and how actors could coordinate to decrease risk. Understanding how labs act and how they race for powerful AI seem to be wide-open problems, as does giving an account of the culture of progress and publishing in AI labs and the ML research community.

I've read fewer than half of these; possibly many of them are off-point or bad.

Technological restraint

Other

People[2]

There are no experts on slowing AI, but there are people whom it might be helpful to talk to (disclaimers: this list is very non-exhaustive, and I have not talked to all of these people), including:

  • Zach Stein-Perlman
  • Lukas Gloor
  • Jeffrey Ladish
  • Matthijs Maas
    • Specifically on technological restraint
  • Akash Wasil
    • Especially on publication practices or educating the ML community about AI risk
  • Michael Aird
  • Vael Gates
    • Specifically on educating the ML community about AI risk; many other people might be useful to talk to about this, including Shakeel Hashim, Alex Lintz, and Kelsey Piper
  • Katja Grace
  • Lennart Heim on hardware policy
  • Onni Aarne on hardware
  • Probably some other authors of research listed above
  1. There is also little research on particular relevant considerations, like how multipolarity among labs relates to x-risk and to slowing AI, or how AI misuse x-risk and non-AI x-risk relate to slowing AI.

  2. I expect there is a small selection bias where the people who think and write about slowing AI are disposed to be relatively optimistic about it.

Comments

Thank you for compiling this list. This is useful, and I expect to point people to it in the future. The best thing, IMO, is that it is not verbose and not dripping with personal takes on the problem; I would like to see more compilations of topics like this to give other people a leg up when they aspire to venture into a field.

A potential addition is Dan Hendrycks's PAIS agenda, in which he advocates for ML research that promotes alignment without also causing advances in capabilities. This effectively also slows AI (capabilities) development, and I am quite partial to this idea.

Yay.

Many other collections / reading lists exist, and I'm aware of many public and private ones in AI strategy, so feel free to DM me strategy/governance/forecasting topics you'd want to see collections on.

I haven't updated this post much since April, but I'll update it soon and plan to add PAIS, thanks.

Thanks for writing this!

In addition to regulatory approaches to slowing down AI development, I think there is room for "cultural" interventions within academic and professional communities that discourage risky AI research:

https://www.lesswrong.com/posts/ZqWzFDmvMZnHQZYqz/massive-scaling-should-be-frowned-upon