I recently wrote an essay about AI risk, targeted at other academics:
Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems
I think it might be interesting to some of you, so I am sharing it here. I would appreciate any feedback, especially from others who do AI / machine learning research.
Isn't the arms race itself a safeguard? If multiple AIs of similar intelligence are competing, it is difficult for any one of them to completely outsmart all the others and take over the world.
An AI arms race might cause researchers or sponsors to take a number of inadvisable actions. These race dynamics are discussed well in Bostrom's book, Superintelligence: Paths, Dangers, Strategies, but can be summed up as follows:
Because moderate and fast takeoffs are (on Bostrom's analysis) more likely than slow ones, the first project to achieve its goals is likely to gain a decisive strategic advantage over every other project, meaning the others lose outright.
Thus, if a given project is not in the lead, it may cut back its safety protocols in favor of speed (to say nothing of standard cloak-and-dagger tactics, or even militaristic scenarios). That race to the bottom is not good: it raises the risk of extinction.
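To make the race-to-the-bottom dynamic concrete, here is a toy simulation I put together. To be clear, this is my own illustrative sketch, not a model from the essay or from Bostrom: I assume each project picks a safety effort in [0, 1], that safety effort linearly slows development, that the fastest project wins the race, and that the winner's AI causes a catastrophe with probability equal to its skipped safety effort. All of those modeling choices are assumptions for illustration only.

```python
import random

def race(safety_efforts, trials=100_000, seed=0):
    """Toy arms-race model (illustrative assumptions, see above).

    Each project has a random base skill per trial; its effective
    speed is skill * (1 - safety_effort), so safety trades off
    against speed. The fastest project wins, and the winner's AI
    causes a catastrophe with probability (1 - its safety effort).
    Returns the fraction of trials ending in catastrophe.
    """
    rng = random.Random(seed)
    disasters = 0
    for _ in range(trials):
        speeds = [rng.random() * (1 - s) for s in safety_efforts]
        winner = max(range(len(speeds)), key=speeds.__getitem__)
        if rng.random() < (1 - safety_efforts[winner]):
            disasters += 1
    return disasters / trials

# All three projects cautious vs. one trailing project cutting safety:
print(race([0.9, 0.9, 0.9]))  # everyone careful -> catastrophe rate ~0.1
print(race([0.9, 0.9, 0.1]))  # the defector usually wins -> rate jumps toward ~0.9
```

The point of the sketch is just that the defector's speed advantage means the least safe project tends to win, so one project shaving its safety margin raises everyone's catastrophe risk, which is the competitive pressure the question treats as a safeguard.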