Sysice comments on xkcd on the AI box experiment - Less Wrong Discussion

15 Post author: FiftyTwo 21 November 2014 08:26AM

Comment author: Sysice 21 November 2014 08:36:40AM *  19 points [-]

It might be useful to feature a page containing what we, you know, actually think about the basilisk idea. Although the RationalWiki page seems to be pretty solidly on top of Google search results, we might catch a couple of people looking for the source.

If any XKCD readers are here: Welcome! I assume you've already googled what "Roko's Basilisk" is. For a better idea of what's going on with this idea, see Eliezer's comment on the xkcd thread (linked in Emile's comment), or his earlier response here.

Comment author: Punoxysm 21 November 2014 09:05:01PM 4 points [-]

Because of Eliezer's reaction, probably a hundred more people have heard of the Basilisk, and it tars LW's reputation.

And this wasn't particularly unforeseeable - see the Streisand Effect.

Part of rationality is about regarding one's actions as instrumental.

He mucked that one up. But to be fair to him, it's because he takes these ideas very seriously. I don't care about the basilisk because I don't take elaborate TDT-based reasoning too seriously, partially out of ironic detachment, but many here would say I should.

Comment author: JoshuaFox 22 November 2014 09:09:58PM -2 points [-]

because I don't take elaborate TDT-based reasoning too seriously, partially out of ironic detachment, but many here would say I should.

Righto, you should avoid not taking things seriously because of ironic detachment.

Comment author: JoshuaFox 22 November 2014 04:42:22PM 2 points [-]

That explanation by Eliezer cleared things up for me. He really should have explained himself earlier. I actually had some vague understanding of what Eliezer was doing with his deletion and refusal to discuss the topic, but as usual, Eliezer's explanation makes things that I thought I sort-of-knew seem obvious in retrospect.

And as Eliezer realizes, the attempt to hush things up was a mistake. Roko's post should have been taken as a teaching moment.

Comment author: maxikov 21 November 2014 10:55:26PM 2 points [-]

Exactly. Having the official position buried in comments with long chains of references doesn't help it sound convincing compared to a well-formatted (even if misleading) article.

Comment author: XiXiDu 21 November 2014 11:08:07AM 6 points [-]

For a better idea of what's going on with this idea, see Eliezer's comment on the xkcd thread (linked in Emile's comment), or his earlier response here.

For a better idea of what's going on you should read all of his comments on the topic in chronological order.

Comment author: Azathoth123 21 November 2014 09:07:38AM 4 points [-]

It might be useful to feature a page containing what we, you know, actually think about the basilisk idea.

I'm guessing Eliezer has one of those, probably locked away behind a triply-locked vault in the basement of MIRI.

Comment author: Locaha 21 November 2014 08:48:58PM 3 points [-]

I'm guessing Eliezer has one of those, probably locked away behind a triply-locked vault in the basement of MIRI.

See, it's comments like these that are one of the reasons people think LW is a cult.

Does MIRI actually have a basement?

Comment author: Lumifer 22 November 2014 02:23:02AM 5 points [-]

Does MIRI actually have a basement?

It's behind the hidden door. Full of boxes which say "AI inside -- DO NOT TALK TO IT".

The ghosts there are not really dangerous. Usually.

Comment author: Yvain 22 November 2014 02:37:11AM *  26 points [-]

When I visited MIRI's headquarters, they were trying to set up a video link to the Future of Humanity Institute. Somebody had put up a monitor in a prominent place and there was a sticky note saying something like "Connects to FHI - do not touch".

Except that the H was kind of sloppy and bent upward so it looked like an A.

I was really careful not to touch that monitor.

Comment author: FiftyTwo 21 November 2014 10:47:25AM 2 points [-]

That response in /r/futurology is really good, actually; I hadn't seen it before. Maybe it should be reposted (with the sarcasm slightly toned down) as a main article here?

Also kudos to Eliezer for admitting he messed up with the original deletion.