
turchin comments on Open thread, Jul. 17 - Jul. 23, 2017 - Less Wrong Discussion

1 Post author: MrMind 17 July 2017 08:15AM




Comment author: turchin 17 July 2017 11:30:43PM *  0 points [-]

In your case, a force is needed to actually push most organisations to participate in such a project, and the worst ones - those that want to build AI first in order to take over the world - will not participate. The IAEA is an example of such an organisation, but it was not able to stop North Korea from creating its nukes.

Because of the above, you need a powerful enforcement agency above your AI agency. It could use conventional weapons, mostly nukes, or some form of narrow AI to predict where strong AI is being created - or both. Basically, it means the creation of a world government designed especially to contain AI.

This is improbable in the current world, as nobody will create a world government mandated to nuke AI labs based only on reading Bostrom's and EY's books. The only chance for its creation is if some very spectacular AI accident happens, like the hacking of 1000 airplanes and crashing them into 1000 nuclear plants using narrow AI with some machine-learning capabilities. In that case, a global ban on AI seems possible.