Eliezer Yudkowsky's reasons for banning Roko's post have always been somewhat vague. But I don't think he did it solely because it could cause some people nightmares.

**(1)** In one of his original replies to Roko’s post (please read the full comment; it is highly ambiguous), he states his reasons for banning Roko’s post and for writing his comment (emphasis mine):

> I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I’m not sure I know the sufficient detail.)

…and further…

> For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.

His comment indicates that he doesn’t believe this could currently work. Yet he also does not dismiss the possibility of some current or future danger. Why didn’t he clearly state that there is nothing to worry about?

**(2)** The following comment by Mitchell Porter, to which Yudkowsky replies “This part is all correct AFAICT.”:

> It’s clear that the basilisk was censored, not just to save unlucky susceptible people from the trauma of imagining that they were being acausally blackmailed, but because Eliezer judged that acausal blackmail might actually be possible. The thinking was: maybe it’s possible, maybe it’s not, but it’s bad enough and possible enough that the idea should be squelched, lest some of the readers actually stumble into an abusive acausal relationship with a distant evil AI.

If Yudkowsky really thought it was irrational to worry about any part of it, why didn’t he allow people to discuss it on LessWrong, where he and others could have debunked it?

> Assume that each player's hand may tremble with a small non-zero probability p, then take the limit as p approaches zero from above.

Let's do that!

Simple model: One plays A, B, and C with probabilities a, b, and c, with the constraint that each must be at least the trembling probability t (= p/3, using the p above). (Two doesn't tremble, for simplicity's sake.)

Two picks X with probability x and Y with probability (1-x).

So their expected utilities are:

One: 3a + 2b + 6c(1-x)

Two: 2b(1-x) + cx = 2b + (c - 2b)x
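These two formulas are easy to sanity-check numerically. Here is a minimal sketch in Python, assuming the payoff matrix implied by them (One gets 3 from A, 2 from B, and 6 from (C, Y); Two gets 2 from (B, Y) and 1 from (C, X); the underlying game is my reconstruction from the formulas, not stated in the original):

```python
# Expected utilities for the simple model above. The payoff matrix is
# reconstructed from the formulas: One gets 3 from A, 2 from B, and 6
# from (C, Y); Two gets 2 from (B, Y) and 1 from (C, X).

def u_one(a, b, c, x):
    """One's expected utility when playing (a, b, c) against Two's x."""
    return 3*a + 2*b + 6*c*(1 - x)

def u_two(b, c, x):
    """Two's expected utility: 2b(1-x) + cx."""
    return 2*b*(1 - x) + c*x

# Sanity-check the algebraic rewrite 2b(1-x) + cx = 2b + (c - 2b)x
# at an arbitrary point:
b, c, x = 0.2, 0.3, 0.7
assert abs(u_two(b, c, x) - (2*b + (c - 2*b)*x)) < 1e-12
```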

It seems pretty clear that One wants b to be as low as possible (either a or c will always be better), so we can set b = t.

Substituting a = 1 - t - c, One's utility is (constant) - 3c + 6c - 6cx = (constant) + 3c(1 - 2x).

So One wants to choose c to maximize (1 - 2x)c, and Two wants to choose x to maximize (c - 2t)x.

The mixed-strategy Nash equilibrium is where both coefficients vanish, leaving each player indifferent: 1 - 2x = 0 and c - 2t = 0, so c = 2t and x = 0.5.

So in other words, if One's hand can tremble, then he should also sometimes deliberately pick C, making it twice as likely as B, and Two should flip a coin.

(and as t converges towards 0, we do indeed get One always picking A)
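The equilibrium can be checked numerically for a small t. The sketch below uses the same reconstructed payoffs as before (3 for A, 2 for B, 6 for (C, Y) on One's side; 2 for (B, Y) and 1 for (C, X) on Two's side) and verifies the two indifference conditions at c = 2t, x = 0.5:

```python
# Verify the trembling-hand equilibrium derived above for a small t.
# The payoff matrix is my reconstruction from the expected-utility formulas.

def u_one(a, b, c, x):
    return 3*a + 2*b + 6*c*(1 - x)

def u_two(b, c, x):
    return 2*b*(1 - x) + c*x

t = 0.01          # trembling floor on each of One's three moves
b = t             # B is dominated, so One puts only the forced tremble on it
c = 2*t           # candidate equilibrium value
a = 1 - b - c
x = 0.5           # Two flips a coin

# Two is indifferent between X (x=1) and Y (x=0) exactly when c = 2t:
assert abs(u_two(b, c, 1.0) - u_two(b, c, 0.0)) < 1e-12

# One is indifferent to shifting weight between A and C exactly when x = 0.5:
eps = 1e-3
assert abs(u_one(a - eps, b, c + eps, x) - u_one(a, b, c, x)) < 1e-12

# As t -> 0, the equilibrium profile (a, b, c) = (1 - 3t, t, 2t) tends
# toward (1, 0, 0), i.e. One always picking A.
print("checks passed; a =", a)
```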