My real concern is the lack of understanding of what precautions are required and how they would be implemented.
If a corporation decided to enter the race for a true AI, it wouldn't be surprising if they had the AI researchers work with zero safeguards while reassuring them that a separate team was managing the risk. That external team may well have a fantastic emergency protocol with all sorts of remote kill switches at power outlets and the like, but if a true AI were developed there is no guarantee it could be controlled by such external measures.
I just don't believe that a corporation undertaking a project like this would understand the risk; more likely, they would mistakenly believe they had it under complete control.
FHI has released a new tech report:
Armstrong, Bostrom, and Shulman. Racing to the Precipice: a Model of Artificial Intelligence Development.
Abstract:
The paper is short and readable; discuss it here!
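To give a flavor of the discussion, here is a minimal Monte Carlo sketch of the kind of race model the paper analyzes. It is a simplified illustrative variant, not the paper's exact equations: the scoring rule mu*c + (1 - s), the uniform capability draw, and the capability-proportional safety policy below are all my assumptions, chosen only to show how more teams and a low weight on capability can push up the chance of disaster.

```python
import random


def simulate_race(n_teams, mu, safety_policy, trials=100_000, seed=0):
    """Estimate disaster probability in a simplified AI race model.

    Illustrative variant (an assumption, not the paper's exact model):
    each team draws capability c ~ Uniform(0, 1) and picks safety
    precautions s in [0, 1] via safety_policy(c). The team maximizing
    mu*c + (1 - s) finishes first, and its AI causes a disaster with
    probability 1 - s.
    """
    rng = random.Random(seed)
    disasters = 0
    for _ in range(trials):
        teams = []
        for _ in range(n_teams):
            c = rng.random()             # team's AI-building capability
            s = safety_policy(c)         # chosen level of precautions
            score = mu * c + (1.0 - s)   # skimping on safety speeds you up
            teams.append((score, s))
        _, s_winner = max(teams)         # fastest team builds the AI first
        if rng.random() < 1.0 - s_winner:
            disasters += 1               # the winner's skimping backfired
    return disasters / trials


def policy(c):
    # Illustrative policy (again an assumption, not the paper's derived
    # equilibrium): precautions scale with capability, so teams that are
    # behind skimp more to stay in the race.
    return c


if __name__ == "__main__":
    for n in (2, 5, 10):
        for mu in (0.5, 2.0):
            p = simulate_race(n, mu, policy)
            print(f"teams={n}, mu={mu}: P(disaster) ~ {p:.3f}")
```

Under this toy policy, when capability matters little (mu = 0.5) the least capable, most reckless team tends to win, and adding teams makes things worse; when capability dominates (mu = 2.0) the most capable, most careful team wins and extra teams help. That is roughly the qualitative effect the paper examines, though the real analysis works through the Nash equilibrium rather than a fixed policy.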
But my main reason for posting is to ask this question: What is the most similar work that you know of? I'd expect this kind of modeling to have been done for nuclear security risks, and perhaps in other domains, but I don't know of other analyses like this.