Richard_Loosemore comments on Debunking Fallacies in the Theory of AI Motivation - Less Wrong

8 Post author: Richard_Loosemore 05 May 2015 02:46AM


Comments (343)


Comment author: Richard_Loosemore 05 May 2015 06:42:46PM 4 points

You answered your own question when you said:

The AI is going to be stupid, but it's going to quickly find out how to turn the world into paperclips. It's not going to be a general intelligence. But it doesn't have to be to cause problems.

Incorrect. It very much DOES have to be a general intelligence, and far from stupid, if it is going to be smart enough to evade the efforts of humanity to squelch it. That really is the whole point behind all of these scenarios. It has to be an existential threat, or it will just be a matter of someone walking up to it and pulling the power cord when it is distracted by a nice juicy batch of paper-clip steel that someone tempts it with.

Or, as Rick Deckard might have said:

"If it's an idiot, it's not my problem."

Comment author: TheAncientGeek 06 May 2015 07:47:02AM 0 points

It's got to be smart enough to understand the difference between real paperclips and fake signals on its input channels, "smileys".