Luke_A_Somers comments on [LINK] Another "LessWrongers are crazy" article - this time on Slate - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (129)
This really doesn't deserve much attention (it's blatant fear-mongering; anyone writing about the Basilisk should, as a basic courtesy, also explain Pascal's mugging), but the article does make me wonder about one thing:
I occasionally see people saying that working on Friendly AI is a waste of time, yet it seems very hard to ignore the importance of existential risk prevention. I haven't seen many good arguments for why an AGI wouldn't be potentially dangerous. So why wouldn't we want some people working on FAI? There are many existential risks, and not everyone can work on the same one.
I also disagree with the comparison of Roko's Basilisk to Newcomb's problem. With a thought experiment, you have to grant its premises, including that the scenario is real: it's meaningless to discuss Newcomb's problem unless you assume Omega exists (within the context of the thought experiment). Roko's Basilisk, on the other hand, is about how we should act in real life, which changes a lot of variables. If someone proposed a thought experiment in which the Basilisk actually exists, the comparison would hold.
Yes, the Basilisk does address how one should act in real life. It says: 'Don't build a basilisk, dummy!' Problem solved.