
Brillyant comments on xkcd on the AI box experiment - Less Wrong Discussion

15 Post author: FiftyTwo 21 November 2014 08:26AM


Comment author: Brillyant 21 November 2014 07:14:07PM 6 points [-]

In his defense, is it possible EY can't win at this point, regardless of his approach? Maybe the internet has grabbed this thing and the PR whirlwinds are going to do with it whatever they like?

I've read apologies from EY where he seems to admit pretty clearly he screwed up. He comes off as defensive and pissy sometimes in my opinion, but he seems sincerely irked about how RW and other outlets have twisted the whole story to discredit LW and himself. From my recall, one comment he made on the reddit sub dedicated to his HP fanfic indicated he was very hurt by the whole kerfuffle, in addition to his obvious frustration.

Comment author: alexanderwales 21 November 2014 07:40:44PM 15 points [-]

It's not a matter of "winning" or "not winning". The phrase "damage control" was coined for a reason - it's not about reversing the damage, it's about making sure that the damage gets handled properly.

So seen through that lens, the question is whether EY is doing a good or bad job of controlling the damage. I personally think that having a page on Less Wrong that explains (and defangs) the Basilisk, along with his reaction to it and why that reaction was wrong (and all done with no jargon or big words for when it gets linked from somewhere, and also all done without any sarcasm, frustration, hurt feelings, accusations, or defensiveness) would be the first best step. I can tell he's trying, but think that with the knowledge that the Basilisk is going to be talked about for years to come a standardized, tone-controlled, centralized, and readily accessible response is warranted.

Comment author: Brillyant 21 November 2014 09:14:45PM 3 points [-]

I am defining winning as damage control. EY has been trying to control the damage, and in that pursuit, I'm starting to wonder if damage control, to the extent it could be considered successful by many people, is even possible.

He's a public figure + He made a mistake = People are going to try and get mileage out of this, no matter how he handles it. That's very predictable.

Further, it's very easy to come along after the fact and say, "he should have done this and all the bad press could have been avoided!"

A page on LW might work. Or it might be more fodder for critics. If there were an easy answer to how to win via damage control, then it wouldn't be quite as tricky as it always seems to be.

Comment author: alexanderwales 21 November 2014 09:35:29PM 6 points [-]

It's still a matter of limiting the mileage. Even if there is no formalized and ready-to-fire response (one that hasn't been written in the heat of the moment), there's always an option not to engage. Which is what I said last time he engaged, and before he engaged this time (and also after the fact). If you engage, you get stuff like this post to /r/SubredditDrama, and comments about thin skin that not even Yudkowsky really disagrees with.

It doesn't take hindsight (or even that much knowledge of human psychology and/or public relations) to see that making a twelve paragraph comment about RationalWiki absent anyone bringing RationalWiki up is not an optimal damage control strategy.

And if you posit that there's no point to damage control, why even make a comment like that?

Comment author: Brillyant 21 November 2014 10:04:57PM 1 point [-]

I didn't posit there is no point to damage control. I'm saying that in certain cases, people are criticized equally no matter what they do.

If someone chooses not to engage, they are hiding something. If they engage, they are giving the inquisitor what he wants. If they jest about their mistake, they are not remorseful. If they are somber, they are taking it too seriously and making things worse.

I read your links and...yikes...this new round of responses is pretty bad. I guess part of me feels bad for EY. It was a mistake. He's human. The internet is ruthless...

Comment author: alexgieg 22 November 2014 02:59:26AM 13 points [-]

Let me chime in briefly. The way EY handles this issue tends to be bad as a rule. This is a blind spot in his otherwise brilliant, well, everything.

A recent example: a few months ago a bunch of members of the official Less Wrong group on Facebook were banned and blocked from viewing it without receiving a single warning. Several among them, myself included, had one thing in common: participation in threads about the Slate article.

I myself didn't care much about it. Participation in that group wasn't a huge part of my Facebook life, although admittedly it was informative. The point is just that doing things like these, and continuing to do things like these, accrete a bad reputation around EY.

It really amazes me he has so much difficulty calibrating for the Streisand Effect.

Comment author: Eliezer_Yudkowsky 23 November 2014 02:03:15AM 1 point [-]

That was part of a brief effort on my part to ban everyone making stupid comments within the LW Facebook Group, which I hadn't actually realized existed but which I was informed was giving people terrible impressions. I deleted multiple posts and banned all commenters who I thought had made stupid comments on them; the "hur hur basilisk mockery" crowd was only one, but I think a perfectly legitimate target for this general sweep. It's still a pretty low-quality group, but it's a lot better than it was before I went through and banned everyone who I saw making more than one stupid comment.

Unfortunately Facebook doesn't seem to have an easy "delete comment and ban commenter from Group" procedure for Android, which makes it harder to repeat this procedure because Android is most of where I check Facebook.

Comment author: ChristianKl 23 November 2014 04:08:32PM 18 points [-]

Going around and banning people without explaining to them why you banned them is in general a good way to make enemies.

The fallout of the basilisk incident should have taught you that censorship has costs.

The timing of the sweep and the discussion about the basilisk article is also awfully coincidental.

Comment author: cesoid 23 November 2014 03:06:28AM 10 points [-]

What does "stupid" refer to in this context? Does it mean the comments were unintelligent? Not quite intelligent enough? Mean? Derailing discussion? I'm asking because there are certainly some criteria where the banning and deleting would leave a worse impression than the original comments, and I'm thinking that the equilibrium may be surprisingly in the direction of the more obnoxious comments. Especially since the banning and deleting is being done by someone who is more identified with LW than likely were any of the commenters.

Comment author: polymathwannabe 23 November 2014 01:27:36PM *  7 points [-]

Thanks for letting us know what happened. I'm one of the Facebook members who were banned, and I've spent these months wondering what I might have done wrong. May I at least know what the stupid thing I said was? And is there any atonement procedure to get back into the Facebook group?

Comment author: Omid 23 November 2014 02:48:28PM 1 point [-]

So just to be clear: If I say "I won't give in to the basilisk because Eliezer says I shouldn't", will that protect me from the basilisk? If not, what should I do?

Comment author: TobyBartels 23 November 2014 10:52:49PM 1 point [-]

If you believe Eliezer, then you may believe him that the basilisk has ~0 probability of occurring. (I should find a citation for that, but I read it just a few minutes ago, somewhere around the discussion of this xkcd comic.) So you are already protected from it, because it does not exist (not even in ways relevant to acausal trade).

More broadly, you should decide to take this approach: never give in to blackmail by somebody who knows that you have decided to take this approach. Now they have no incentive to blackmail you, and you are safe, even if they do exist! (I think that the strategy in this paragraph has been endorsed by Eliezer, but don't trust me on that until you get a citation. Until then, you'll have to reason it out for yourself.)

Comment author: Jiro 26 November 2014 10:17:51PM 2 points [-]

Now they have no incentive to blackmail you, and you are safe, even if they do exist!

How does that work if they precommit to blackmail even when there is no incentive (which benefits them by making the blackmail more effective)?

Comment author: ThisSpaceAvailable 26 November 2014 09:09:59AM 0 points [-]

By "the basilisk", do you mean the infohazard, or do you mean the subject matter of the infohazard? For the former, whatever causes you to not worry about it protects you from it.

Comment author: wedrifid 26 November 2014 11:46:57AM -1 points [-]

By "the basilisk", do you mean the infohazard, or do you mean the subject matter of the infohazard? For the former, whatever causes you to not worry about it protects you from it.

Not quite true. There are more than two relevant agents in the game. The behaviour of the other humans can hurt you (and potentially make it useful for their creator to hurt you).

Comment author: Yvain 22 November 2014 02:39:54AM *  51 points [-]

At this point I think the winning move is rolling with it and selling little plush basilisks as a MIRI fundraiser. It's our involuntary mascot, and we might as well 'reclaim' it in the social justice sense.

Then every time someone brings up "Less Wrong is terrified of the basilisk" we can just be like "Yes! Yes we are! Would you like to buy a plush one?" and everyone will appreciate our ability to laugh at ourselves, and they'll go back to whatever they were doing.

Comment author: Brillyant 22 November 2014 02:51:13AM 8 points [-]

Hm. Turn your weakness into a plush toy then sell it to raise money and disarm your critics. Winning.

Comment author: chaosmage 22 November 2014 11:08:24AM 7 points [-]

Excellent idea. I would buy that, especially if it has a really bizarre design.

I'd like merchandise-based tribal allegiance membership signalling items anyway. Anyone selling MIRI mugs or LessWrong T-shirts can expect money from me.

Comment author: Tenoke 22 November 2014 10:41:35AM 9 points [-]

Blasphemy, our mascot is a paperclip.

Comment author: chaosmage 22 November 2014 11:11:24AM 11 points [-]

I'd prefer a paperclip dispenser with something like "Paperclip Maximizer (version 0.1)" written on it.

Comment author: philh 22 November 2014 03:15:38PM 4 points [-]

But a plush paperclip would probably not hold its shape very well, and become a plush basilisk.

Comment author: Tenoke 22 November 2014 05:38:15PM 26 points [-]

Close enough

Comment author: Error 22 November 2014 10:06:08PM 2 points [-]

I feel the need to switch from Nerd Mode to Dork Mode and ask:

Which would win in a fight, a basilisk or a paperclip maximizer?

Comment author: Dallas 22 November 2014 11:21:38PM 0 points [-]

Paperclip maximizer, obviously. Basilisks typically are static entities, and I'm not sure how you would go about making a credible anti-paperclip 'infohazard'.

Comment author: ThisSpaceAvailable 26 November 2014 09:00:20AM 3 points [-]

That depends entirely on what the PM's code is. If it doesn't include input sanitizers, a buffer overflow attack could suffice as a basilisk. If your model of a PM basilisk is "Something that would constitute a logical argument that would harm a PM", then you're operating on a very limited understanding of basilisks.

Comment author: lmm 25 November 2014 10:52:57PM 3 points [-]

The same way as an infohazard for any other intelligence: acausally threaten to destroy lots of paperclips, maybe even uncurl them, maybe even uncurl them while they were still holding a stack of pap-ARRRRGH I'LL DO WHATEVER YOU WANT JUST DON'T HURT THEM PLEASE

Comment author: ThisSpaceAvailable 26 November 2014 09:03:35AM *  3 points [-]

"selling little plush basilisks as a MIRI fundraiser."

By "selling", do you mean giving basilisks to people who give money? It seems like a more appropriate policy would be giving a plush basilisk to anyone who doesn't give money.

Comment author: Lumifer 26 November 2014 04:06:48PM 3 points [-]

It seems like a more appropriate policy would be giving a plush basilisk to anyone who doesn't give money.

Sound like the first step a Plush Basilisk Maximizer would take... :-D

Comment author: Error 22 November 2014 10:05:28PM 1 point [-]

I'd buy this. We can always use more stuffies.

Comment author: TheMajor 23 November 2014 09:15:17AM 0 points [-]

Yes, brilliant idea!

Comment author: Ishaan 23 November 2014 10:57:47AM *  0 points [-]

It should be a snake, only with little flashing LEDs in its eyes.

The canonical basilisk paralyzes you if you look at it. Flickering lights carry the danger of triggering photosensitive epilepsy, and thus are sort of real-life basilisks. Even if the epilepsy reference is lost on many, it's still clearly a giant snake thing with weird eyes, and importantly you can probably get one from somewhere without having to custom make them.

(AFAIK little LEDs should be too small to actually represent a threat to epileptics, and they shouldn't be any worse than any of the other flickering lights around.)

EDIT: Eh, I suppose it could also be stuffed with paperclips or something, if we want to pack as many memes in as possible.

Comment author: Locaha 22 November 2014 11:03:27AM -1 points [-]

We can save money by re-coloring the plush Cthulhu. It's basically the same, right? :-)

Comment author: FiftyTwo 23 November 2014 12:09:23PM 4 points [-]

Alternatively, sell empty boxes labelled "Don't look!"

Comment author: Lumifer 22 November 2014 02:25:51AM 7 points [-]

In his defense, is it possible EY can't win at this point, regardless of his approach?

Maybe so, but he can lose in a variety of ways and some of them are much worse than others.

Comment author: ChristianKl 23 November 2014 12:53:13PM 4 points [-]

I've read apologies from EY where he seems to admit pretty clearly he screwed up.

But he did still continue to delete basilisk-related discussion afterwards. As far as I understand, he never apologized to Roko for deleting the post, nor wrote an LW post apologizing.