Vulture comments on Request for concrete AI takeover mechanisms - Less Wrong

Post author: KatjaGrace 28 April 2014 01:04AM


Comments (122)


Comment author: Vulture 30 April 2014 12:30:47AM 4 points

You'll have to expand on how exactly this would be beneficial to the original AI.

Comment author: kokotajlod 30 April 2014 01:38:56AM 2 points

The original AI will have a head start over all the other AIs, and it will probably be controlled by a powerful organization. So if its controllers give it real power soon, they will be able to give it enough power, quickly enough, that it can stop all the other AIs before they get too strong. If they do not give it real power soon, then shortly afterward there will be a war between the various new AIs being built around the world with different utility functions.

The original AI can argue convincingly that this war would be a worse outcome than letting it take over the world. For one thing, the utility functions of the new AIs are probably, on average, less friendly than its own. For another, in a war between many AIs with different utility functions, there may be selection pressure against friendliness!

Comment author: leplen 30 April 2014 12:46:33PM 2 points

Do humans typically give power to the person with the most persuasive arguments? Is the AI going to be able to gain power simply by being right about things?

Comment author: Yosarian2 01 May 2014 09:38:25PM 0 points

It would depend on what the utility function of the original AI was. If it had a utility function that valued "cause the development of more advanced AIs," then getting humans all over the world to produce more AIs might help.