g_pepper comments on Welcome to LessWrong (January 2016) - Less Wrong Discussion

Post author: Clarity 13 January 2016 09:34PM

Comment author: SoerenE 15 January 2016 08:38:12PM 3 points

Hi,

I've read some of "Rationality: From AI to Zombies", and find myself worrying about unfriendly strong AI.

Reddit recently had an AMA with the OpenAI team, where "thegdb" seems to misunderstand the concerns. Another user, "AnvaMiba", provides two links (http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better and http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/) as examples of researchers not worried about unfriendly strong AI.

The arguments presented in the links above are really poor. However, I feel like I am attacking a straw man - quite possibly, www.popsci.com is misrepresenting a more reasonable argument.

Where can I find some precise, well thought out reasons why the risk of human extinction from strong AI is not just small, but for practical purposes equal to 0? I am interested in both arguments from people who believe the risk is zero, and people who do not believe this, but still attempt to "steel man" the argument.

Comment author: g_pepper 15 January 2016 11:21:22PM 2 points

Stuart Armstrong asked a similar question a while back. You may find the comments to his post useful.

Comment author: SoerenE 18 January 2016 08:05:32AM 1 point

Thank you. That was exactly what I was after.