TheAncientGeek comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong

8 Post author: Richard_Loosemore 05 May 2015 02:46AM

Comment author: TheAncientGeek 13 May 2015 01:10:48PM 0 points

If you take the binary view that you're either smart enough to achieve your goals or not, then you might well want to stop improving once you have the minimum intelligence necessary to meet them. This means, among other things, that AIs with goals requiring only human-level or lower intelligence won't become superhuman, which lowers the probability of the Clippie scenario. It doesn't require huge intelligence to make paperclips, so an AI with a goal to make paperclips, but not to make any specific amount of them, wouldn't grow into a threatening monster.

The probability of the Clippie scenario is also lowered by the consideration that fine-grained goals might shift during the self-improvement phase. So the Clippie scenario (arbitrary goals combined with superintelligence) is whittled away from both ends.