Racing to the Precipice: a Model of Artificial Intelligence Development
by Stuart Armstrong, Nick Bostrom, and Carl Shulman
This paper presents a simple model of an AI arms race, where several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivised to finish first – by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, where each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI disaster, especially if risk-taking is more important than skill in developing the AI. Surprisingly, information also increases the risks: the more teams know about each other's capabilities (and about their own), the more the danger increases.
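The abstract's claim that more teams and a heavier weight on risk-taking both increase danger can be illustrated with a toy Monte Carlo sketch. To be clear, this is my own simplification, not the paper's actual model: I assume each team independently draws a skill level and a risk level (both uniform on [0, 1]), performance is skill plus `risk_weight` times risk, the highest-performing team wins, and a disaster then occurs with probability equal to the winner's risk level. All names and distributions here are assumptions for illustration.

```python
import random

def simulate(n_teams=3, risk_weight=1.0, trials=10_000, seed=0):
    """Toy sketch (not the paper's model): estimate disaster frequency
    when the riskiest teams are selected for by the race itself.

    Each team draws skill ~ U(0,1) and risk ~ U(0,1); the team with
    the highest performance = skill + risk_weight * risk wins, and a
    disaster occurs with probability equal to the winner's risk."""
    rng = random.Random(seed)
    disasters = 0
    for _ in range(trials):
        # (performance, risk) pairs; max() selects the winning team
        teams = [
            (skill + risk_weight * risk, risk)
            for skill, risk in
            ((rng.random(), rng.random()) for _ in range(n_teams))
        ]
        _, winner_risk = max(teams)
        if rng.random() < winner_risk:
            disasters += 1
    return disasters / trials

# When risk is irrelevant (weight 0), the winner's risk is just an
# average draw; when risk dominates skill, the winner is roughly the
# riskiest of n teams, so danger grows with both n and the weight.
print(simulate(n_teams=2, risk_weight=0.0))   # roughly 0.5
print(simulate(n_teams=5, risk_weight=10.0))  # noticeably higher
```

Even this crude version reproduces the selection effect the abstract describes: raising `n_teams` or `risk_weight` raises the estimated disaster frequency, because the race increasingly picks out whichever team cut the most corners.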
Someone pointed out to me that we probably shouldn't be calling superintelligence development an "arms race". In an arms race, you're competing to have a stronger force than the other side. You want to keep your nose in front in case of a fight.
Developing superintelligence, on the other hand, is just a plain old race. A technology race. You simply want to get to the destination first.
(Likewise with developing the first nuke, which also involved arms but was not an arms race.)
Developing an AGI (and then ASI) will likely involve a series of steps involving lower intelligences. There's already an AI arms race between several large technology companies, and keeping your nose in front is already standard practice, because there's a lot of utility in having the best AI at any given moment.
So it isn't true to say that it's simply a race without important intermediate steps. You don't just want to get to the destination first; you want to make sure your AI is the best for most of the race, for a whole heap of reasons.