ShardPhoenix comments on [LINK] Elon Musk interested in AI safety - Less Wrong Discussion

15 [deleted] 18 June 2014 10:56PM

Comment author: ShardPhoenix 19 June 2014 12:39:08AM 5 points [-]

While that particular scenario may not be likely, I'm increasingly inclined to think that people being scared by Terminator is a good thing from an existential risk perspective. After all, Musk's interest here could easily lead to him supporting MIRI or something else more productive.

Comment author: Jan_Rzymkowski 19 June 2014 12:53:59AM 0 points [-]

It's not only unlikely - what's much worse is that it points to the wrong reasons. It suggests that we should fear AI trying to take over the world or eliminate all people, as if AI would have an incentive to do that. This stems from nothing more than anthropomorphisation of AI, imagining it as some evil genius.

This is very bad, because smart people can see that those arguments are flawed and get the impression that these are the only arguments against unbounded development of AGI. While reverse stupidity isn't smart, it's much harder to find good reasons why we should solve AI friendliness when there are lots of distracting strawmen.

That was me half a year ago. I used to think that anybody who fears AI may bring harm is a loony. All the reasons I heard from people were that AI wouldn't know emotions, AI would try to harmfully save people from themselves, AI would want to take over the world, AI would be infected by a virus or hacked, or that AI would be just outright evil. I can easily debunk all of the above. And then I read about the Paperclip Maximizer and radically changed my mind. I might have got to that point much sooner, if not for all the strawman distractions.

Comment author: Punoxysm 19 June 2014 01:10:05AM 7 points [-]

I think you are looking into it too deeply. Skynet as an example of AI risk is fine, if cartoonish.

Of course, we are very far away from strong AI and therefore from existential AI risk.