
Luke_A_Somers comments on [LINK] Another "LessWrongers are crazy" article - this time on Slate

Post author: CronoDAS | 18 July 2014 04:57AM | 9 points




Comment author: MathiasZaman 18 July 2014 11:11:02AM 4 points

This really doesn't deserve all that much attention (it's blatant fear-mongering; if you're going to write about the Basilisk, you should also explain Pascal's mugging as a basic courtesy), but there's one thing this article makes me wonder:

I occasionally see people saying that working on Friendly AI is a waste of time. Yet at the same time, it seems very hard to ignore the importance of existential risk prevention, and I haven't seen many good arguments for why an AGI wouldn't be potentially dangerous. So why wouldn't we want some people working on FAI? There are many existential risks; not everyone can work on the same one.

I also disagree with the comparison between Roko's Basilisk and Newcomb's problem. With a thought experiment, you have to grant some assumptions, such as the scenario being true: it's meaningless to talk about Newcomb's problem if you don't assume Omega exists (within the context of the thought experiment). Roko's Basilisk, on the other hand, is about how we should act in real life, which changes a lot of variables. If we proposed a thought experiment in which the Basilisk actually exists, the comparison would hold.
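[Editor's note: since the comment leans on Pascal's mugging, here is a minimal sketch of the expected-value arithmetic behind it. All probabilities and payoffs are illustrative assumptions, not figures from the thread.]

    # Minimal sketch of the expected-value arithmetic behind Pascal's mugging.
    # All numbers below are illustrative assumptions, not taken from the thread.

    def expected_value(p: float, payoff: float) -> float:
        """Expected value of an offer that pays off with probability p."""
        return p * payoff

    p_mugger_honest = 1e-12   # assumed tiny credence that the mugger's claim is true
    claimed_payoff = 1e30     # assumed astronomically large utility the mugger claims

    ev = expected_value(p_mugger_honest, claimed_payoff)
    print(f"Naive expected value: {ev:.3e}")  # enormous despite the tiny probability

    # The point: a naive expected-value maximizer is exploitable, because the
    # claimed payoff can always be inflated faster than the listener's credence
    # shrinks. Basilisk-style threats inherit this same structure, which is why
    # the comment treats Pascal's mugging as required context for any write-up.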

Comment author: Luke_A_Somers 18 July 2014 06:30:06PM 3 points

Yes, the Basilisk does address how one should act in real life. It says: 'Don't build a basilisk, dummy!' Problem solved.