philh comments on A few misconceptions surrounding Roko's basilisk - Less Wrong
This paragraph is not an Eliezer Yudkowsky quote; it's Eliezer quoting Roko. (The "ve" should be a tip-off.)
If you kept going with your initial Eliezer quote, you'd have gotten to Eliezer himself saying he was worried a blackmail-type argument might work, though he didn't think Roko's original formulation worked:
"Again, I deleted that post not because I had decided that this thing probably presented a real hazard, but because I was afraid some unknown variant of it might, and because it seemed to me like the obvious General Procedure For Handling Things That Might Be Infohazards said you shouldn't post them to the Internet."
According to Eliezer, he had three separate reasons for the original ban: (1) he didn't want any additional people (beyond the one Roko cited) to obsess over the idea and get nightmares; (2) he was worried there might be some variant on Roko's argument that worked, and he wanted more formal assurances that this wasn't the case; and (3) he was just outraged at Roko. (Including outraged at him for doing something Roko thought would put people at risk of torture.)
There are lots of good reasons Eliezer shouldn't have banned R̶o̶k̶o̶ discussion of the basilisk, but I don't think this is one of them. If the basilisk were a real concern, that would imply that talking about it put people at risk of torture, making it an obvious example of a topic you initially discuss in private channels rather than on public websites. Conversely, if the basilisk wasn't risky to discuss publicly, then that also implies it was a transparently bad argument and therefore not important to discuss. (Though it might be fine to discuss it for fun.)
Roko's original argument, though, could have been stated in one sentence: 'Utilitarianism implies you'll be willing to commit atrocities for the greater good; CEV is utilitarian; therefore CEV is immoral and dangerous.' At least, that's the version of the argument that has any bearing on the conclusion 'CEV has unacceptable moral consequences'. The rest is a distraction: 'utilitarianism means you'll accept arbitrarily atrocious tradeoffs' is a premise of Roko's argument rather than a conclusion, and 'CEV is utilitarian in the relevant sense' is likewise a premise. A more substantive discussion would have explicitly hashed out (a) whether SIAI/MIRI people wanted to construct a Roko-style utilitarian, and (b) whether this looks like one of those philosophical puzzles that needs to be solved by AI programmers, vs. one we can safely punt on if we resolve other value learning problems.
I think we agree that's a useful debate topic, and we agree Eliezer's moderation action was dumb. However, I don't think we should reflexively publish 100% of the risky-looking information we think of just so we can debate everything as publicly as possible. ('Publish everything risky' and 'ban others whenever they publish something risky' aren't the only two options.) Do we disagree about that?
IIRC, Eliezer didn't ban Roko, just discussion of the basilisk; Roko deleted his account shortly afterwards.
Thanks, fixed!