A few observations from the last few weeks:
- On March 22, FLI published an open letter calling for a six-month moratorium on training AI systems more powerful than GPT-4.
- On March 29, Eliezer Yudkowsky published a piece in TIME calling for an indefinite moratorium.
- To our knowledge, none of the top AI organizations (OpenAI, DeepMind, Anthropic) have released a statement responding to these pieces.
We offer a request to AGI organizations: Determine what you think about these requests for an AI pause (possibly with uncertainties acknowledged), write up your beliefs in some form, and publicly announce your position.
We believe statements from labs could improve discourse, coordination, and transparency on this important and timely topic.
Discourse: We believe labs are well-positioned to contribute to dialogue around whether (or how) to slow AI progress, making it more likely for society to reach true and useful positions.
Coordination: Statements from labs could make coordination more likely. For example, lab A could say "we would support a pause under X conditions with Y implementation details." Alternatively, lab B could say "we would be willing to pause if lab C agreed to Z conditions."
Transparency: Transparency helps others build accurate models of labs, their trustworthiness, and their future actions. This is especially important for labs that seek to receive support from specific communities, policymakers, or the general public. You have an opportunity to show the world how you reason about one of the most important safety-relevant topics.
We would be especially excited about statements that are written or endorsed by lab leadership. We would also be excited to see labs encourage employees to share their (personal) views on the requests for moratoriums.
Sometimes, silence is the best strategy. There may be attempts at coordination that are less likely to succeed if people transparently share their worldviews. If this is the case, we request that AI organizations make this clear (example: "We have decided to avoid issuing public statements about X for now, as we work on Y. We hope to provide an update within Z weeks.")
At the time of this post, the FLI letter has been signed by 7 DeepMind research scientists/engineers and probably by 0 OpenAI research scientists and 0 Anthropic employees.
See also:
"1 OpenAI research scientist" felt weird to me on priors. 0 makes sense, if the company gave some guidance (e.g. legal) to not sign, or if the unanimous opinion was that it's a bad idea to sign. 7 makes sense too -- it's about what I'd expect from DeepMind and shows that there's a small contingent of people really worried about risk. Exactly 1 is really weird -- there are definitely multiple risk conscious people at OpenAI, but exactly one of them decided to sign?
I see a "Yonas Kassa" listed as an OpenAI research scientist, but it's very unclear who this person is. I don't see any LinkedIn or Google Scholar profile of this name associated with OpenAI. Previously, I know many of the signatures were inaccurate, so I wonder if this one is, too?
Anyway, my guess is that actually zero OpenAI researchers signed, and that both OpenAI and Anthropic employees have decided (as a collective? because of a top-down directive? for legal reasons? I have no idea) not to sign.
There are actually 3 signatories now claiming to work for OpenAI.