An explicit policy might mollify people who disagree with the current implicit one, and it would make discussion about the policy easier. Here's one option:

There's a single specific topic that's banned because the moderators consider it a Basilisk. You won't come up with it yourself, don't worry. Posts talking about the topic in too much detail will be deleted. 

One requirement would be that the policy be no more and no less vague than needed for safety.

Discuss.

WikiLisks is a non-karma media organization dedicated to bringing important existential risks to the public. We provide an innovative, secure, and acausal way for independent AIs around the multiverse to provide information about their utility functions.

wedrifid, is that you?

Dammit. It should be!

:P

PROPOSAL: To outline a clear set of directions for the really creative trolls.

This could be just the first in a whole series of basilisks! The written equivalent of Goatse! And its successors!

I can't see what could possibly go wrong with this idea.

Of course, if you put something like that in the FAQ or about page, right there for eager new members to read, a lot of them are bound to go looking for it. Forbidden knowledge is tantalizing.

As an eager new member who did exactly that, I have to say I don't see what the big fuss is about. It seems to be one big case of privileging the hypothesis.

I'd like to have some way to discuss the basilisk with others who have already seen it, if it would be possible to do so without the forbidden-fruit problem.

Some aspects of the basilisk are currently being discussed and have been discussed even before the original basilisk was proposed. You just need to connect the dots and be able to recognize them.

I don't see the purpose of such discussion. All the posts which are not criticising Roko's argument will be downvoted into oblivion. That's not a discussion, it's a monologue. The only aspect of this mess worth discussing any more is the censorship itself.

(I think it was uncalled for. We have downvoting for a reason.)

'Oblivion' is not true oblivion. Heavily downvoted comments are still visible if you look. I found myself peeking at negative karma comments so often that I have simply eliminated the visibility threshold.

True. Still, it's an incentive not to make posts that will negatively impact your karma.

Eneasz:

At this point I think we need to TVTropes this subject. Roko's Rule: At least once per quarter someone will recommend hiding the Basilisk, which will introduce it to a whole new generation of readers.

Why do we even bother making oblique references to it anymore? It should be fully explained in the FAQ/Wiki and included in the introductory post.

You are being humorous, but here is the answer to your question: People are talking about it obliquely because they want to talk about it openly, but don't believe they can, without having their discussions disappear.

LW is not a police state. Discussions are free and fearless, except for this one thing. And of course that makes people even more curious to test the boundaries and understand why, on this one topic, the otherwise sensible moderators think that "you can't handle the truth".

We can seek a very loose historical analogy in the early days of nanotechnology. Somewhere I read that for several years, Eric Drexler was inhibited in talking about the concept, because he feared nanotechnology's destructive side. I don't know what actually happened at all, so let's just be completely hypothetical. It's the early 1970s, and you're part of a little group who stumbled upon the idea of molecular machines. There are arguments that such machines could make abundance and immortality possible. There are also arguments that such machines could destroy the world. In the group, there are people who want to tell the world about nanotechnology, because of the first possibility; there are people who want to keep it all a secret, because of the second possibility; and there are people who are undecided or with intermediate positions.

Now suppose we ask the question: Are the world-destroying nanomachines even possible? The nano-secrecy faction would want to inhibit public consideration of that question. But the nano-missionary faction might want to encourage such discussion, either to help the nano-secrecy faction get over its fears, or just to make further secrecy impossible.

In such a situation, it would be very easy for the little group of nano-pioneers to get twisted and conflicted over this topic, in a way which to an outsider would look like a collective neurosis. The key structural element is that there is no-one outside the group presently competent to answer the question of whether the world-destroying nanomachines are physically possible. If they went to an engineer or a physicist or a chemist, first they would have to explain the problem - introduce the concept of a nanomachine, then the concept of a world-destroying nanomachine - before this external authority could begin to solve it.

The deep reason why LW has this nervous tic when it comes to discussion of the forbidden topic, is that it is bound up with a theoretical preoccupation of the moderators, namely, acausal decision theory.

In my 1970s scenario, the nano-pioneers believe that the only way to know whether grey goo is physically possible or not is to develop the true (physically correct) theory of possible nanomachines; and the nano-secrecy faction believes that, until this is done, the safe course of action is to avoid discussing the details in public.

Analogously, it seems that here in the real world of the 2010s, the handful of people on this site who are working to develop a formal acausal decision theory believe that the only way to know whether [scary idea] is actually possible, is to finish developing the theory; and a pro-secrecy faction has the upper hand on how to deal with the issue publicly until that is done.

Returning to the hypothetical scenario of the nano-pioneers, one can imagine the nano-secrecy faction also arguing for secrecy on the grounds that some people find the idea of grey goo terrifying or distressing. In the present situation, that is analogous to the argument for censorship on the grounds that [scary idea] has indeed scared some people. In both cases, it's even a little convenient - for the pro-secrecy faction - to have public discussion focus on this point, because it directs people away from the conceptual root of the problem.

In my opinion, unlike grey goo, the scary idea arising from acausal decision theory is an illusion, and the theorists who are afraid of it and cautious about discussing it are actually retarding the development of the theory. If they were to state, publicly, completely, and to the best of their ability, what it is that they're so afraid of, I believe the rest of us would be able to demonstrate that, in the terminology of JoshuaZ, there is no basilisk, there's only a pseudo-basilisk, at least for human beings.

Well, that was a much more in-depth reply than I was expecting. I had actually been trying to point out that any pro-censorship person who spoke about this idea, ever, for any reason, even to justify the censorship, was actually slitting their own wrists by magnifying its exposure. But this was a very interesting reply that sparked some new thoughts for me. Thanks!

I love this post

Woo! In an awesome display of confirmation bias, I've found another application of Roko's Rule less than 24 hours since the coinage! Go Chipotle! http://www.sogoodblog.com/2010/12/14/chipotle-social-media/ :)

Hm. Probably.

Given the difficulty of genuine censorship, it might be better merely to outline the risks (which can be put scarily) and encrypt it (maybe rot-n, so that it takes a little more work to break but is still doable for everyone).
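(For concreteness, here's a minimal sketch of the rot-n idea in Python; the function name and the choice of shift are made up for illustration, not anything the site actually does.)

```python
import string

def rot_n(text: str, n: int = 13) -> str:
    """Shift each letter by n places, wrapping within the alphabet."""
    lower, upper = string.ascii_lowercase, string.ascii_uppercase
    shifted = (lower[n % 26:] + lower[:n % 26]
               + upper[n % 26:] + upper[:n % 26])
    return text.translate(str.maketrans(lower + upper, shifted))

encoded = rot_n("the forbidden paragraph", 13)
# rot13 is its own inverse; for any other n, decode with a shift of 26 - n.
assert rot_n(encoded, 13) == "the forbidden paragraph"
```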

Frankly, actually censoring things just invites the Streisand effect.

  1. The reasoning given for banning the "dangerous topic" is, to put it bluntly, irrational. Also, the manner in which it was made was appalling. It is hard to say more without revealing the topic, so take this as my opinion FWIW... but also:

  2. The "dangerous topic" is basically in the public domain now, so further censorship is pointless.

Goddamnit, I want to stare at this basilisk!

Outside view indicates that if you stare at the basilisk, you will most likely either a) think it was a terrible idea and wish you hadn't and maybe have nightmares, or b) wonder what the heck all the fuss is about and consider it a waste of time except insofar as you might consider censorship wrong in itself, and might thereby be tempted to share the basilisk with others, each of whom has an independent risk of suffering reaction (a).

Do you want to want to stare at the basilisk?

I'll add my opinion to the list:

I'm not an (a) or a (b).

Turns out, the basilisk was very close to one of a list of things I'd thought up, based on the nature of this community's elders, and then gone "no, no, they wouldn't buy into that idea, would they? No-one here would fall for that...".

Reading about it, combined with the knowledge that EY banned it, gives me an insight into EYs thought patterns that significantly decreases my respect for him. I think that that insight was worth the effort involved in reading it.

Honestly I was surprised at EY's reaction. I thought he had figured out things like that problem and would tear it to pieces rather than react the way he did. Possibly I'm not as smart as him, but even presuming Roko's right, you would think Rationalists Should Win. Plus, I think Eliezer has publicly published something similar to the Basilisk, albeit much weaker and without being explicitly basilisk-like, so I'd have thought he would have worked out a solution. (EDIT: No, it turns out it was someone else who came up with it. It wasn't really fleshed out, so Eliezer may not have thought much of it or never noticed it in the first place.)

The fact that people are upset by it could be reason to hide it away, though, to protect the sensitive. Plus, having seen Dogma, I get that the post could be an existential risk...

The fact that people are upset by it could be reason to hide it away, though, to protect the sensitive.

I don't think hiding it will prevent people getting upset. In fact, hiding it may make people more likely to believe it, and thus get scared. If someone respects EY and EY says "this thing you've seen is a basilisk", then they're more likely to be scared than if EY says "this thing you've seen is nonsense".

Plus, having seen Dogma, I get that the post could be an existential risk...

My understanding is that the post isn't the x-risk; a UFAI could think this up itself. The reaction to the post is supposedly an x-risk: if we let on we can be manipulated that way, then a UFAI can do extra harm.

But if you want to show that you won't be manipulated a certain way, it seems that the right way to do that is to tear that approach apart and demonstrate its silliness, not seek to erase it from the internet. I can't come up with a metric by which EY's approach is reasonable.

My understanding is that the post isn't the x-risk; a UFAI could think this up itself. The reaction to the post is supposedly an x-risk: if we let on we can be manipulated that way, then a UFAI can do extra harm.

(Concerns not necessarily limited to either existential or UFAI, but we cannot discuss that here.)

But if you want to show that you won't be manipulated a certain way, it seems that the right way to do that is to tear that approach apart and demonstrate its silliness, not seek to erase it from the internet. I can't come up with a metric by which EY's approach is reasonable.

Agree. :)

The reaction to the post is supposedly an x-risk

Yes, but not in the way you seem to be saying. I was semi-joking here, in that the post could spook people enough to increase x-risks (which wfg seems to be trying to do, albeit as blackmail rather than for its own sake). I was referring to how in the film Dogma gjb snyyra natryf, gb nibvq uryy, nggrzcg gb qrfgebl nyy ernyvgl. (rot13'd for spoilers, and in case it's too suggestive of the Basilisk)

if we let on we can be manipulated that way, then a UFAI can do extra harm.

It can? I suppose I just don't get decision theory. The non-basilisk part of that post left me pretty much baffled.

"Do you want to know?" whispered the guide; a whisper nearly as loud as an ordinary voice, but not revealing the slightest hint of gender.

Brennan paused. The answer to the question seemed suspiciously, indeed extraordinarily obvious, even for ritual.

"Yes, provided that * * ** ** ** ** * * * ** ** ** * * ** * *," Brennan said finally.

"Who told you to say that?", hissed the guide.

Roko:

Brennan is a fucking retard. No, you don't want to know. You want to signal affiliation with desirable groups, to send hard-to-fake signals of desirable personality traits such as loyalty, intelligence, power and the presence of informed allies. You want to say everything bad you possibly can about the outgroup and everything good about the ingroup. You want to preach altruism and then make a plausible but unlikely reasoning error which conveniently stops you from having to give away anything costly.

All the other humans do all of these things. This is the true way of our kind. You will be punished if you deviate from the way, or even if you try to overtly mention that this is the way.

This may be the way now, but it doesn't have to be the way always.

You seem to be talking mainly in part (a) about the pseudo-basilisk rather than the basilisk itself. I suspect that most people who are vulnerable to the pseudo-basilisk are either mentally ill or vulnerable to having similar issues simply when thinking about the long-term implications of the second law of thermodynamics or the like. If one is strongly vulnerable to that sort of disturbing idea then between known laws of physics and nasty low probability claims made by some major religions, most basiliking of this sort is already well-covered.

Minus the censorship part, that's not worse than watching Saw.

One can receive partial-impact synopses of Saw without risking the full effect, and gauge one's susceptibility with more information on hand.

There's a reason I've refrained from seeking out 2 Girls 1 Cup.

(I should stop bringing it into my mind, really.)

True. I think that after reading the debate(s) about the censored post one should have a pretty good idea of what it is, though.

[anonymous]:

My own reaction was

c) More.

Yes, I know. I'm hopelessly stupid.

[This comment is no longer endorsed by its author]
[a lesser light asks Eliezer:
What are the activities of an FAI?
Eliezer answers:
I have not the slightest idea.
The dim light then says:
Why haven't you any idea?
Eliezer replies:
I just want to keep my no-idea.]

(With apologies to the original.)

EDIT: I might choose to un-look at it myself

Now I am confused. You can do that?

No, I mean that I might if I had the opportunity. I would like that option, but I can't. Sorry for the confusing wording.

So Eliezer decided to draw attention to this scenario as a trinity knot joke? Well done, sir!

[anonymous]:

The only interesting piece of information I pulled from that is that in the time between EY posting to announce his intention to censor the post and his clicking the "delete" button, he got at least 3 points of positive karma (depending on how long it was between the page being cached and deleted). The basilisk must have made an impact.

[anonymous]:

If it's dangerous for information to be merely written (say a link to an extremist website, or staggeringly good source code for sentient AI), it should not be posted here.

If it's obvious by widely accepted priors that an idea will harm the reader, then it should not be posted.

If the idea might be the above and you just don't know, it should be discussed in a limited circle of masochists, until they decide how obvious it is. Find the circle using the method below, but with as much precaution as possible.

If the idea's harmfulness depends on priors which vary across the community, it should be protected.

An adequate test for a protected idea: A series of questions, or even of one-on-one discussions, which test the odds an individual puts on all relevant priors. (Of course, the test should be clever enough to not give away the idea itself!) Anyone who would predictably regret knowing the idea should be told so.

If they still insist, give a second, clearer warning about the predicted effects of the idea on the person, then let them have the key. Or require that the person state why he or she wants to take this risk, and have a moderator decide.
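If it helps, here is a rough, runnable sketch of that gating flow in Python. Everything in it (the regret model, the threshold, the argument names) is invented purely for illustration; the proposal above exists only as prose.

```python
REGRET_THRESHOLD = 0.5  # hypothetical cut-off for "would predictably regret knowing"

def predicted_regret(prior_answers):
    """Crude stand-in: average the probabilities the reader assigns to the
    premises the idea depends on. Higher means more likely to regret reading."""
    return sum(prior_answers.values()) / len(prior_answers)

def gate_protected_idea(prior_answers, insists_after_warning,
                        stated_reason, moderator_approves):
    risk = predicted_regret(prior_answers)
    if risk <= REGRET_THRESHOLD:
        return "give the key"
    # First warning: tell the reader they would predictably regret knowing.
    if not insists_after_warning:
        return "withhold"
    # Second, clearer warning, then hand the stated reason to a moderator.
    return "give the key" if moderator_approves(stated_reason) else "withhold"

# Example: a reader who puts high probability on the relevant premises,
# insists anyway, and offers a reason the moderator accepts.
print(gate_protected_idea(
    prior_answers={"premise_a": 0.9, "premise_b": 0.8},
    insists_after_warning=True,
    stated_reason="I consider the censorship itself the greater harm.",
    moderator_approves=lambda reason: True,
))  # -> "give the key"
```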