Here’s the first edition (on slowdown) of Navigating AI Risks, a newsletter on AI governance that some colleagues and I are launching.

This newsletter is mostly aimed at policymakers, but I expect it might interest some of you as a way to keep up with the ideas circulating in AI governance.

Here's a bullet-point summary:

  • Open letter, signed by Y. Bengio and S. Russell. Focused on the largest language models; no effect on most AI systems.
  • Rationale: 
    • Avoid a race to the bottom
    • Give society time to adapt (e.g., white-collar jobs facing the rapid pace of AI development)
    • Develop basic laws and guardrails 
    • Some experts worry about existential risks from AI
    • Foreseeable misuse of AI (disinformation, large-scale hacking)
  • Proposals:
    • 6-month training pause
    • Shut down
    • Conditional slowdown, i.e., slowing down for as long as there are safety failures and high risks to society.
  • Difficulties:
    • Coordination is hard
    • China could benefit from it.
       
3 comments:

The link in "this is a linkpost for" is not the correct one.

Question: what do you think of Chinese officials' views on LLMs being easily accessible to Chinese citizens? As long as alignment is unsolved, I can imagine China being extremely leery of citizens somehow being exposed to ideas that go against official propaganda (human rights, genocide, etc.).

But my guess is that China can't accept being left out of this race either.

So in the end, China is incentivized to solve alignment, or at least to slow down its progress.

Have you thought about any of this? I'm extremely curious about anyone's opinion on the matter.

Yes, I definitely think that countries with strong deontologies will try harder to solve some narrow versions of alignment than those that tolerate failures.

I find that quite reassuring, and it means it's reasonable to focus heavily on the US in our governance approaches.