Nice, thanks for collating these!
Also perhaps relevant: https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological
and somewhat older:
lc. ‘What an Actually Pessimistic Containment Strategy Looks Like’. LessWrong, 5 April 2022. https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like.
Hoel, Erik. ‘We Need a Butlerian Jihad against AI’. The Intrinsic Perspective (blog), 30 June 2021. https://erikhoel.substack.com/p/we-need-a-butlerian-jihad-against.
Thanks! I've included Erik Hoel's and lc's essays.
Your article doesn't actually call for an AI slowdown/pause/restraint, as far as I can tell, and explicitly guards against that interpretation:
"This analysis does not show that restraint for AGI is currently desirable; that it would be easy; that it would be a wise strategy (given its consequences); or that it is an optimal or competitive approach relative to other available AI governance strategies."
But if you've written anything that explicitly endorses AI restraint, I'll include it in the list.
Last updated: April 14th 2023.
by Future of Life Institute
by Eliezer Yudkowsky
by Ian Hogarth
by the Center for Humane Technology
by Sigal Samuel
by Max Tegmark, Lex Fridman
by Lennart Heim, Future of Life Institute
by KatjaGrace
by Cleo Nardo
by Cleo Nardo
by Akash, Olivia Jimenez, Thomas Larsen
by Michael Huang
by Simeon Campos, Henry Papadatos, Charles M
by lc
by Center for AI and Digital Policy
by Erik Hoel
by Eliezer Yudkowsky, Lex Fridman
by Eliezer Yudkowsky, Bankless
by Zach Stein-Perlman
by Akash
About this document
There has been a recent flurry of letters, articles, statements, and videos endorsing a slowdown or halt of colossal AI experiments via (e.g.) regulation or coordination. This document aspires to collect all such examples into a single list. I'm undecided on how best to order and subdivide the examples, so suggestions are welcome. Note that I'm also including surveys.
This list is:
Please mention in the comments any examples I've missed so I can add them.
Credit to Zach Stein-Perlman.
Credit to MM Maas.