
Broggly comments on Should LW have a public censorship policy?

Post author: Bongo 11 December 2010 10:45PM 16 points




Comment author: Broggly 14 December 2010 05:46:00PM *  5 points

Honestly, I was surprised at EY's reaction. I thought he had figured out things like that problem and would tear it to pieces rather than become upset. Possibly I'm not as smart as he is, but even presuming Roko's right, you would think Rationalists Should Win. Plus, I think Eliezer has published something similar to the Basilisk, albeit much weaker and without being explicitly basilisk-like, so I'd have thought he would have worked out a solution. (EDIT: No, it turns out it was someone else who came up with it. It wasn't really fleshed out, so Eliezer may not have thought much of it, or may never have noticed it in the first place.)

The fact that people are upset by it could be reason to hide it away, though, to protect the sensitive. Plus, having seen Dogma, I get that the post could be an existential risk...

Comment author: Kingreaper 14 December 2010 06:35:40PM 14 points

The fact that people are upset by it could be reason to hide it away, though, to protect the sensitive.

I don't think hiding it will prevent people from getting upset. In fact, hiding it may make people more likely to believe it, and thus get scared. If someone respects EY, and EY says "this thing you've seen is a basilisk," then they're more likely to be scared than if EY says "this thing you've seen is nonsense."

Comment author: Vaniver 14 December 2010 06:46:31PM 8 points

Plus, having seen Dogma, I get that the post could be an existential risk...

My understanding is that the post itself isn't the x-risk: a UFAI could think this up on its own. The reaction to the post is supposedly an x-risk: if we let on that we can be manipulated that way, then a UFAI can do extra harm.

But if you want to show that you won't be manipulated a certain way, it seems that the right way to do that is to tear that approach apart and demonstrate its silliness, not seek to erase it from the internet. I can't come up with a metric by which EY's approach is reasonable.

Comment author: wedrifid 14 December 2010 08:41:19PM *  3 points

My understanding is that the post itself isn't the x-risk: a UFAI could think this up on its own. The reaction to the post is supposedly an x-risk: if we let on that we can be manipulated that way, then a UFAI can do extra harm.

(The concerns are not necessarily limited to either existential risks or UFAI, but we cannot discuss that here.)

But if you want to show that you won't be manipulated a certain way, it seems that the right way to do that is to tear that approach apart and demonstrate its silliness, not seek to erase it from the internet. I can't come up with a metric by which EY's approach is reasonable.

Agree. :)

Comment author: Broggly 14 December 2010 07:58:31PM 1 point

The reaction to the post is supposedly an x-risk

Yes, but not in the way you seem to be saying. I was semi-joking here, in that the post could spook people enough to increase x-risks (which wfg seems to be trying to do, albeit as blackmail rather than for its own sake). I was referring to how, in the film Dogma, gjb snyyra natryf, gb nibvq uryy, nggrzcg gb qrfgebl nyy ernyvgl. (rot13'd for spoilers, and in case it's too suggestive of the Basilisk.)

if we let on we can be manipulated that way, then a UFAI can do extra harm.

It can? I suppose I just don't get decision theory. The non-basilisk part of that post left me pretty much baffled.