DanArmak comments on International cooperation vs. AI arms race - Less Wrong

Post author: Brian_Tomasik 05 December 2013 01:09AM



Comment author: DanArmak 07 December 2013 12:22:12PM 2 points

governments would botch the process by not realizing the risks at hand.

To be fair, so would private companies and individuals.

It's also possible that governments would use the AI for malevolent, totalitarian purposes.

It's less likely, IMO, that a government would launch a completely independent, top-secret AI project with the explicit goal of "take over and optimize existence", relying on FOOMing and first-mover advantage.

More likely, an existing, highly funded arm of the government - the military, the intelligence services, the homeland-security department, the financial regulators - will try to build an AI that is told to further its own narrow goals. These would start from "build a superweapon", "spy on the enemy premier", "put down a revolution", or "fix the economy", and escalate all the way to "destroy all other militaries", "gather all information", "control all citizens", and "control all money".

In such a scenario, the AI not only won't be told to optimize for "all people" or "all nations"; it won't even be told to optimize for "all the interests of our country".

Comment author: Brian_Tomasik 07 December 2013 05:55:09PM 0 points

To be fair, so would private companies and individuals.

Yes, perhaps more so. :) The main point of the post was that the risk of botching the process increases in a competitive scenario where you're pressed for time.