Tenoke comments on xkcd on the AI box experiment - Less Wrong

Post author: FiftyTwo 21 November 2014 08:26AM




Comment author: Tenoke 22 November 2014 10:41:35AM 9 points

Blasphemy; our mascot is a paperclip.

Comment author: chaosmage 22 November 2014 11:11:24AM 11 points

I'd prefer a paperclip dispenser with something like "Paperclip Maximizer (version 0.1)" written on it.

Comment author: philh 22 November 2014 03:15:38PM 4 points

But a plush paperclip would probably not hold its shape very well, and would become a plush basilisk.

Comment author: Tenoke 22 November 2014 05:38:15PM 26 points

Close enough

Comment author: Error 22 November 2014 10:06:08PM 2 points

I feel the need to switch from Nerd Mode to Dork Mode and ask:

Which would win in a fight, a basilisk or a paperclip maximizer?

Comment author: Dallas 22 November 2014 11:21:38PM 0 points

Paperclip maximizer, obviously. Basilisks are typically static entities, and I'm not sure how you would go about making a credible anti-paperclip 'infohazard'.

Comment author: ThisSpaceAvailable 26 November 2014 09:00:20AM 3 points

That depends entirely on what the PM's code is. If it doesn't sanitize its inputs, a buffer overflow attack could suffice as a basilisk. If your model of a PM basilisk is "something that would constitute a logical argument that would harm a PM", then you're operating on a very limited understanding of basilisks.

Comment author: lmm 25 November 2014 10:52:57PM 3 points

The same way as an infohazard for any other intelligence: acausally threaten to destroy lots of paperclips, maybe even uncurl them, maybe even uncurl them while they were still holding a stack of pap-ARRRRGH I'LL DO WHATEVER YOU WANT JUST DON'T HURT THEM PLEASE