A short article, quoting me and Luke:

http://www.wired.co.uk/news/archive/2012-05/17/the-dangers-of-an-ai-smarter-than-us

It makes the point that it's not shambling robots that pose the risk here, but the other powers of intelligence.

[anonymous]:

Wow, that's probably the best press coverage that SI has had thus far. The quotes:writing ratio is very high, and the assertions you and Luke gave are presented without argument. And to top it off, SI is portrayed as a well-intentioned non-profit struggling to raise awareness of an important issue. (So is FHI, but AFAIK FHI has never had SI's public relations problems.)

The title is a bit misleading given the content of the article, which quickly moves away from "outsourcing" to talk about existential risk scenarios. But this might actually be in SI's favor, since it preemptively shuts down the "Terminator!" reflex.

Shmi:

And most of the quotes are reasonably accessible; I have only spotted a couple where the inferential distance might be an issue: "utility maximising, as it's hard to code for reduced impact, and if it doesn't use all the resources then someone else can" (I couldn't figure out what is meant here, so an average Wired reader probably can't, either) and "the number of jumps from village idiot to Einstein might not be as many as we think" (not obvious without reading the relevant LW post). Overall, a really good job popularizing the SI/FHI views.

For the record, the other four are: pandemics, synthetic biology, nanotechnology, and nuclear war.

When Stuart Armstrong says "synthetic biology", what is he talking about? Genetically engineered biological weapons? Mutants created by genetic engineering?

Even more than an answer, I'd like a link to some sort of discussion, whether it's a paper or an interview, of the details of his view.

Note that my name is misspelled and I was misquoted in the article; I have contacted the author about this.

Update: the article has been fixed.

Knowing the way that humans are notoriously bad at planning beyond the short term, Armstrong feels that given the risk "it would perhaps be best not to create AI at all," since in the end our only hope of competing with AI might be the long shot of being able to upload our brains and turn ourselves into digital beings.

Not creating AI at all doesn't seem to be a viable option to me.

Nope. It's vulnerable to a single cheater.

The supplied reference for "Some members of the AI community put the chance - or risk - as high as 50 percent" does not appear to support the claim.