If we find ourselves in a world where ASI seems imminent and nations understand its implications, I'd predict that period will be characterized more by mutually assured cooperation than by sabotage. One key reason is that if one nation is seen as leading the race and trying to seize a strategic monopoly via AI, both its allies and its adversaries will have similar incentives to pursue safety, whether through safety assurances or military action. There are quite a lot of agreeable safety assurances we could develop and negotiate (some of which you discuss in the paper...
To draw parallels from nuclear weapons: the START I arms-reduction treaty between the US and the USSR included some twelve different types of inspection, including Russian inspectors at US sites and vice versa. We also have the International Atomic Energy Agency, which coordinates various safeguards to inhibit dangerous uses of nuclear technologies. Many more such techniques and agreements were cooperatively deployed to improve outcomes.
With AI, we already have precursory elements of this in place; for example, the UK AISI evalu... (read more)