A day or so ago, Elon Musk submitted a comment on this article at edge.org. It was later removed.
The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast - it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand.
I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen...
Now Elon has been making noises about AI safety in general lately, including for example mentioning Bostrom's Superintelligence on Twitter. But this is the first time that I know of that he's come up with his own predictions of the timeframes involved, and I think his are quite soon compared to most.
The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.
We can compare this to MIRI's post in May this year, When Will AI Be Created, which argues both that it seems reasonable to expect AI to be further away than that, and that there is a lot of uncertainty on the issue.
Of course, "something seriously dangerous" might not refer to full-blown superintelligent uFAI - there's plenty of room for disasters of intermediate magnitude, anywhere between the 2010 flash crash and Clippy turning the universe into paperclips.
In any case, it's true that Musk has more "direct exposure" to those on the frontier of AGI research than your average person, and it's also true that he has an audience, so I think there is some interest to be found in his comments here.
For sure. But fully autonomous agents are a goal a lot of people will surely be working towards, no? I don't think anyone is claiming "every AI project is dangerous". They are claiming something more like "AI with the ability to do pretty much all the things human minds do is dangerous", with the background presumption that as AI advances it becomes more and more likely that someone will produce an AI with all those abilities.
Again: for sure, but that isn't the point at issue.
One exciting but potentially scary scenario involving AI is this: we make AI systems that are better than us at making AI systems, let them get to work designing their successors, let those successors design their successors, etc. End result (hopefully): a dramatically better AI than we could hope to make on our own. Another, closely related: we make AI systems that have the ability to reconfigure themselves by improving their own software and maybe even adjusting their hardware.
In any of these cases, you may be confident that the AI you initially built doesn't want to get out of whatever box you put it in. But how sure are you that after 20 iterations of self-modification, or of replacing an AI by the successor it designed, you still have something that doesn't want to get out of the box?
There are ways to avoid having to worry about that. We can just make AI systems that neither self-modify nor design new AI systems, for instance. But if we are ever to make AIs smarter than us, the temptation to use that smartness to make better AIs will be very strong, and it only requires one team to try it to expose us to any risks that might ensue.
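To put a rough number on how that worry compounds, here is a purely illustrative toy sketch (my own assumption, not anything from Musk, MIRI, or the discussion above): if each self-modification step independently carries even a small chance of perturbing the system's goals, the chance that at least one perturbation has slipped in after twenty steps is already substantial. The 1% per-step figure and the trial count below are made-up parameters.

```python
import random

# Toy illustration only: suppose each redesign step has a small, independent
# chance of altering the system's goals. After N_STEPS successive redesigns,
# what is the chance that at least one alteration has occurred?
# The 1% per-step figure and 20 steps are arbitrary, assumed numbers.

P_DRIFT_PER_STEP = 0.01   # assumed chance that one redesign perturbs the goal
N_STEPS = 20              # number of successive self-modifications
N_TRIALS = 100_000        # Monte Carlo trials

random.seed(0)
drifted = 0
for _ in range(N_TRIALS):
    if any(random.random() < P_DRIFT_PER_STEP for _ in range(N_STEPS)):
        drifted += 1

analytic = 1 - (1 - P_DRIFT_PER_STEP) ** N_STEPS
print(f"simulated drift rate after {N_STEPS} steps: {drifted / N_TRIALS:.3f}")
print(f"analytic drift rate: {analytic:.3f}")  # about 0.18 with these numbers
```

Even with a per-step risk as low as 1%, roughly one run in five ends up with something having drifted by step twenty; the point is only that small per-iteration risks compound, not that these particular numbers mean anything.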
(One further observation: telling people they're stupid and you're laughing at them is not usually effective in making them take your arguments more seriously. To some observers it may suggest that you are aware there's a weakness in your own arguments. ("Argument weak; shout louder."))