Kevin comments on To signal effectively, use a non-human, non-stoppable enforcer - Less Wrong

Post author: Clippy 22 May 2010 10:03PM




Comment author: kodos96 22 May 2010 11:25:57PM 1 point

Yeah, I've read through most of Clippy's posts... what makes you so sure it's not Eliezer? Just that he's currently working on his book?

Comment author: Kevin 22 May 2010 11:51:24PM 5 points

Clippy seems to be someone trying to make the point that a paperclip maximizer is not necessarily bad for the universe, whereas Eliezer uses a paperclip maximizer as the canonical example of how AGI could go horribly wrong. That's not conclusive evidence that it isn't Eliezer, but Clippy's views are out of sync with Eliezer's views.

Comment author: Daniel_Burfoot 23 May 2010 01:23:38AM 20 points

Eliezer's point is not that a paperclip maximizer is bad for the universe; it's that a superintelligent AGI paperclip maximizer is bad for the universe. Clippy's views here actually seem closer to Robin's idea that there is no reason beings with radically divergent value systems can't live happily together and negotiate through trade.

Comment author: ata 23 May 2010 02:44:23AM 14 points

Clippy seems to be someone trying to make the point that a paperclip maximizer is not necessarily bad for the universe

That's exactly what a not-yet-superintelligent paperclip maximizer would want us to think.

(When Eliezer plays an AI in a box, the AI's views are probably out of sync with Eliezer's views too. There's no rule that says the AI has to be truthful in the AI Box experiment, because there's no such rule about AIs in reality. It's supposed to be maximally persuasive, and you're supposed to resist. If a paperclipper asserts x, then the right question to ask yourself is not "What should I do, given x?", but "Why does the paperclipper want me to believe x?" The most general answer, by definition, will be something like "Because the paperclipper is executing an elaborate plan to convert the universe into paperclips, and it believes that my believing x will further that goal to some small or large degree", which is at best orthogonal to "Because x is true", probably even anticorrelated with it, and almost certainly anticorrelated with "Because believing x will further my goals" if you are a human.)

Comment author: Nisan 23 May 2010 06:07:06AM 0 points

If a paperclipper asserts x, then the right question to ask yourself is [...] "Why does the paperclipper want me to believe x?"

Or "Why does the paperclipper want me to believe it wants me to believe x?", or something with a couple extra layers of recursion.

Comment author: ata 23 May 2010 06:14:54AM 12 points

Or, to flatten the recursion out, "Why did the paperclipper assert x?".

(Tangential cognitive silly time: I notice that I feel literally racist saying things like this around Clippy.)

Comment author: kodos96 22 May 2010 11:55:25PM 4 points

Clippy seems to be someone trying to make the point that a paperclip maximizer is not necessarily bad for the universe

Hmmm, I've read his entire posting history, and that's not the impression I got. I could be wrong, though.