
RaelwayScot comments on Open Thread, January 11-17, 2016 - Less Wrong Discussion

3 Post author: username2 12 January 2016 10:29AM


Comment author: RaelwayScot 12 January 2016 05:46:30PM *  0 points [-]

I think many people intuitively distrust the idea that an AI could be intelligent enough to transform matter into paperclips in creative ways, yet 'not intelligent enough' to understand its goal in its human and cultural context (i.e. to satisfy the needs of the paperclip factory's owners). This often stems from the confusion that the paperclip maximizer would derive its goal function by parsing the sentence "make paperclips", rather than from a preprogrammed reward function, for example a CNN trained to map the number of paperclips in images to a scalar reward.
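[Editorial illustration: the distinction above can be sketched in a few lines of Python. The scene representation and the function name are hypothetical stand-ins, not any real system; the learned CNN is replaced by a trivial object counter to make the point self-contained.]

```python
# Hypothetical sketch of the "preprogrammed reward function" reading of the
# paperclip maximizer. A real agent would use a learned perception model
# (e.g. a CNN over images); here a toy counter over a symbolic scene stands
# in for it. The key point: the agent optimizes this scalar, and the scalar
# encodes nothing about the owners' actual intent.

def count_paperclips(scene):
    """Stand-in for a learned reward model mapping an observation to a
    scalar reward: the number of paperclips detected. The English sentence
    'make paperclips' is never parsed or represented anywhere."""
    return sum(1 for obj in scene if obj == "paperclip")

# A normal factory scene scores lower than a scene where everything
# (including the factory and its workers) has been converted to paperclips,
# so the reward function itself favors the pathological outcome.
factory_scene = ["paperclip", "paperclip", "factory", "worker"]
converted_scene = ["paperclip"] * 10
assert count_paperclips(converted_scene) > count_paperclips(factory_scene)
```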

Comment author: gjm 12 January 2016 06:22:48PM 0 points [-]

Could well be. Does that have anything to do with pattern-matching AI risk to SF, though?

Comment author: RaelwayScot 12 January 2016 07:27:22PM 1 point [-]

I was just speaking of weaknesses of the paperclip maximizer thought experiment. I've seen this misunderstanding in at least 4 out of 10 cases where the thought experiment was brought up.