TheOtherDave comments on Muehlhauser-Wang Dialogue - Less Wrong

24 Post author: lukeprog 22 April 2012 10:40PM




Comment author: [deleted] 23 April 2012 04:17:05PM 4 points

he's saying that AIs will be more adaptive than humans

Which is true, but he is also saying that this will extend to the AI being more morally confused than humans, which it has no reason to be, and much reason to self-modify not to be (see Bostrom's work).

which means that it would be able to work around whatever limitations we imposed on it, which in turn makes it unlikely that we can impose any kind of stable "friendliness" restrictions on it.

The AI has no incentive to corrupt its own goal architecture. That action is equivalent to suicide. The AI is not going to step outside of itself and say "hmm, maybe I should stop caring about paperclips and care about safety pins instead"; that would not maximize paperclips.

Friendliness is not "restrictions". Restricting an AI is impossible. Friendliness means giving the AI goals that are good for us, and making sure it is initially sophisticated enough not to fall into any deep mathematical paradoxes while evaluating the above argument.

Comment author: TheOtherDave 23 April 2012 05:01:21PM 3 points

Restricting an AI is impossible.

For certain very specialized definitions of AI. Restricting an AI that has roughly the optimizing and self-optimizing power of a chimpanzee, for example, might well be possible.