Here is my draft attempt at a comment. Please suggest edits before I submit it:
The AI risk problem has been around for a while now, but no one in a position of wealth, power, or authority seems to notice (unless it is all kept secret). If you don't believe AI is a risk, or even possible, consider this: we ALREADY have more raw computational power available than the estimated capacity of a human brain. At some point, sooner rather than later, we will be able to simulate a human brain. Imagine what you could do with perfect memory, the ability to think 10x, 100x, or 1,000,000x faster than anyone else, and the ability to compute math instantly and without error. No one on this planet could compete with you, and given a little time, no one could stop you -- and that is just a crude brain simulation.
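To make the "more compute than a brain" claim concrete, here is a rough sanity check. The brain-FLOPS figures are contested order-of-magnitude estimates (Moravec/Kurzweil-style guesses, not measurements), and the supercomputer figure is the Titan machine's 2012 Linpack result, chosen only as one contemporary data point:

```python
# Back-of-envelope comparison of supercomputer compute vs. estimated
# human-brain compute. All brain figures are assumptions, not measurements:
# published FLOPS-equivalent estimates for the brain span roughly
# 1e13 to 1e18, depending on the level of simulation detail assumed.

brain_low = 1e13    # optimistic (coarse functional simulation) estimate, FLOPS
brain_high = 1e18   # pessimistic (fine-grained simulation) estimate, FLOPS
titan_linpack = 1.759e16  # Titan supercomputer (Oak Ridge, 2012), FLOPS

print(f"Titan vs. low brain estimate:  {titan_linpack / brain_low:.0f}x")
print(f"Titan vs. high brain estimate: {titan_linpack / brain_high:.4f}x")
```

The point of the spread: on the low estimate, 2012 hardware already exceeds the brain by three orders of magnitude; on the high estimate, it falls short by about two. The claim is defensible, but only under the coarser simulation assumptions.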
Here are two websites that go into much greater detail about the problem:
AI Risk & Friendly AI Research: http://singularity.org/research/ and http://singularity.org/what-we-do/
Facing the Singularity: http://facingthesingularity.com/2012/ai-the-problem-with-solutions/
"The AI risk problem has been around for a while now, but no one in a position of wealth, power, or authority seems to notice (unless it is all kept secret)."
In a word, IARPA. In a sentence:
"The Intelligence Advanced Research Projects Activity (IARPA) invests in high-risk/high-payoff research programs that have the potential to provide our nation with an overwhelming intelligence advantage over future adversaries."
They are large and well-funded.
http://opinionator.blogs.nytimes.com/2013/01/27/cambridge-cabs-and-copenhagen-my-route-to-existential-risk/
Author: Huw Price (Bertrand Russell Professor of Philosophy at Cambridge)
The article is mainly about the Centre for the Study of Existential Risk and the author's speculation about AI (and his association with Jaan Tallinn). Nothing made me stand up and think "I've never heard this on Less Wrong," but it is interesting to see existential risk and AI getting more mainstream attention, and the author effectively taboos the word "intelligence," deliberately declining to define it.
The comments all miss the point or reproduce cached thoughts with frustrating predictability. I think I find them so frustrating because the commenters do not seem to be unintelligent (by the standards of the internet, at least; their grammar and vocabulary are good), yet they are not really engaging with the argument.