We need national security people across the world to be speaking publicly about this, otherwise the general discussion on the threats and risks of AI remains gravely incomplete and biased.
Good insights! He has the right knowledge and dedication. Let’s hope he can grow into an Oppenheimer of AI, and that they’ll let him contribute to AI policy more than Oppenheimer was allowed to contribute to nuclear policy (see how his work on the 1946 Acheson–Lilienthal Report for international control of atomic energy was driven to nothing once it was taken up into the Baruch Plan).
The only winning move is “agreement”, not “not to play”. That is quite a difference.
But how do we find an agreement when so many parties are involved? Treaty-making has failed miserably for nuclear and climate. So we need a much better treaty-making process, perhaps that of an open intergovernmental constituent assembly?