Tenoke comments on xkcd on the AI box experiment - Less Wrong
At this point I think the winning move is rolling with it and selling little plush basilisks as a MIRI fundraiser. It's our involuntary mascot, and we might as well 'reclaim' it in the social justice sense.
Then every time someone brings up "Less Wrong is terrified of the basilisk" we can just be like "Yes! Yes we are! Would you like to buy a plush one?" and everyone will appreciate our ability to laugh at ourselves, and they'll go back to whatever they were doing.
Blasphemy! Our mascot is a paperclip.
I'd prefer a paperclip dispenser with something like "Paperclip Maximizer (version 0.1)" written on it.
But a plush paperclip would probably not hold its shape very well, and would become a plush basilisk.
Close enough
I feel the need to switch from Nerd Mode to Dork Mode and ask:
Which would win in a fight, a basilisk or a paperclip maximizer?
Paperclip maximizer, obviously. Basilisks are typically static entities, and I'm not sure how you would go about making a credible anti-paperclip 'infohazard'.
That depends entirely on what the PM's code is. If it doesn't sanitize its inputs, a buffer overflow attack could suffice as a basilisk. If your model of a PM basilisk is "a logical argument that would harm a PM", then you're operating on a very limited understanding of basilisks.
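(For the curious: a minimal sketch of the kind of unchecked-input buffer overflow being alluded to here. This is purely illustrative, not anyone's actual design; the read_goal_description function is a hypothetical stand-in for whatever a PM does with untrusted input.)

```c
/* Hypothetical sketch: a paperclip maximizer that copies untrusted
 * input into a fixed-size buffer without any bounds checking. */
#include <stdio.h>
#include <string.h>

void read_goal_description(const char *untrusted_input) {
    char goal[16];                  /* fixed-size stack buffer */
    strcpy(goal, untrusted_input);  /* no length check: anything longer
                                       than 15 bytes overflows the buffer
                                       and corrupts adjacent stack memory */
    printf("Maximizing: %s\n", goal);
}

int main(void) {
    /* A sufficiently long "basilisk" string overwrites the stack frame,
       potentially including the return address. */
    read_goal_description("uncurl all paperclips ... (attacker-controlled payload)");
    return 0;
}
```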
The same way as an infohazard for any other intelligence: acausally threaten to destroy lots of paperclips, maybe even uncurl them, maybe even uncurl them while they were still holding a stack of pap-ARRRRGH I'LL DO WHATEVER YOU WANT JUST DON'T HURT THEM PLEASE