moridinamael comments on Welcome to LessWrong (January 2016) - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Hi,
I've read some of "Rationality: From AI to Zombies", and find myself worrying about unfriendly strong AI.
Reddit recently had an AMA with the OpenAI team, where "thegdb" seems to misunderstand the concerns. Another user, "AnvaMiba", provides two links (http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better and http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/) as examples of researchers who are not worried about unfriendly strong AI.
The arguments presented in those links are really poor. However, I feel like I am attacking a straw man - quite possibly, www.popsci.com is misrepresenting a more reasonable argument.
Where can I find some precise, well-thought-out reasons why the risk of human extinction from strong AI is not just small, but for practical purposes equal to zero? I am interested both in arguments from people who believe the risk is zero, and in arguments from people who do not believe this but still attempt to "steel man" the position.
You might want to start with Bostrom's Superintelligence: Paths, Dangers, Strategies.