JoshuaFox comments on xkcd on the AI box experiment - Less Wrong Discussion

15 Post author: FiftyTwo 21 November 2014 08:26AM

Comment author: Jiro 21 November 2014 10:54:41PM 4 points

It's hard to polish a turd. And I think all the people who have responded by saying that Eliezer's PR needs to be better are suggesting that he polish a turd. The basilisk, and the way the basilisk was treated, have implications about LW that are inherently negative, to the point where no amount of PR can fix it. The only way to fix it is for LW to treat the basilisk differently.

I think that if Eliezer were to

  1. Allow free discussion of the basilisk and
  2. Deny that the basilisk or anything like it could actually put one in danger from advanced future intelligences,

people would stop seeing the basilisk as reflecting badly on LW. It might take some time to fade, but it would eventually go away. But Eliezer can't do that, because he does think that basilisk-like ideas can be dangerous, and this belief is feeding his inability to really deny the basilisk.

Comment author: JoshuaFox 22 November 2014 09:12:49PM 3 points

And (3) explain why other potential info hazards, not the basilisk but very different configurations of acausal negotiation (ones that either have not yet been discovered, or were discovered but not made public), should not be discussed.