I wonder what Musk's reaction would have been had he witnessed Eurisko winning the United States Traveller TCS national championship in 1981 and 1982, or had he seen Schmidhuber's universal search algorithm solving Towers of Hanoi on a desktop computer in 2005.
This is not Musk's field of expertise. I do not give his words special weight.
The fact that he can sit in on some cutting edge tech demos, or even chat with CEOs, still doesn't make him an expert.
I have a technical background in AI, and there are still massive hurdles to overcome; not 5-10 year hurdles. Nothing from DeepMind will "escape onto the internet" any time soon. Its work is very much grounded in "narrow AI" technologies like machine learning.
I feel pretty confident calling him a Cassandra.
> I feel pretty confident calling him a Cassandra.
I agree with the rest of your comment, but calling him a "Cassandra" means "he's right, but no one will believe him," and I hope that isn't what you meant!
A more applicable morality tale here would be the boy who cried wolf, if Musk hadn't retracted his post. I don't remember whether the boy had a name. (Elon Musk: inverse Cassandra.)
How do you know the comment was actually from Musk? My guess is that some crank on the internet sent Edge an email claiming to be from Musk, and they published it without doing an identity check. Then the real Musk found out and asked for it to be taken down. The phrase "10 years at most" seems especially stupidly overconfident and inarticulate (and thus unlike something Musk would write).
The exposure of the general public to the concept of AI risk probably increased exponentially a few days ago, when Stephen Colbert mentioned Musk's warnings and satirized them. (Unrelatedly but also of potential interest to some LWers, Terry Tao was the guest of the evening).
So what is actually going on at DeepMind right now? Should I be updating on this? That is, is there new data behind his estimate (i.e. something going on at DeepMind that is more worrying than what we know from other sources)?
Kudos to you or whoever saved that comment into an image before it was deleted.
Did you see it on the site, though, or did you only see the image? Because I could easily photoshop such an image and claim it is a legit comment that just happened to be deleted...
Why would Elon Musk have direct exposure to DeepMind?
EDIT:
Ok, he is an investor. I had missed that.
The mainstream press has now picked up on Musk's recent statement. See e.g. this Daily Mail article: 'Elon Musk claims robots could kill us all in FIVE YEARS in his latest internet post…'
I suspect that the marginal value of a dollar to Elon Musk is close to zero, which makes it difficult to test his sincerity in his beliefs by offering a bet.
I would structure it like this: I give him $100 right now, and if there's no AGI in 10 years, he gives me a squillion dollars, or some similarly large amount that reflects his confidence in his prediction. This way, he cannot claim that a fooming AI that renders dollars worthless will deny him the benefit of a win, because he gets to enjoy my $100 right now.
Elon is unlikely to accept this wager; would anyone like to accept it in his place?
Note that Stuart Russell has now submitted a comment. It begins with this quote from Leo Szilard:
We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief.
it is growing at a pace close to exponential.
I wonder how he (or anybody else) measures the growth of knowledge. Are there any sensible metrics besides the amount of paper created? I understand that published pages and patent counts are measures, but I don't think they are useful proxies for knowledge.
What other measures might be used?
Complexity measures of the created knowledge: the depth of the graph of citations between papers (assuming each citation adds something; this might be weighted by the number of outgoing references), or the complexity of the created artifacts.
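The citation-depth measure suggested above can be sketched concretely. The toy graph below is invented for illustration: each paper maps to the papers it cites, and "depth" is the longest chain of citations building on prior work.

```python
# Hypothetical toy citation graph (names are invented for illustration):
# each paper maps to the list of papers it cites.
citations = {
    "A": [],          # foundational paper, cites nothing
    "B": ["A"],
    "C": ["A", "B"],
    "D": ["C"],
}

def depth(paper):
    """Longest chain of citations reachable from `paper`.

    A paper that cites nothing has depth 0. The weighting idea from the
    comment (dividing credit by the number of outgoing references, so
    sprawling bibliographies count less) is omitted here; this is the
    plain, unweighted longest chain.
    """
    refs = citations[paper]
    if not refs:
        return 0
    return 1 + max(depth(r) for r in refs)

# Depth of the whole graph: the deepest chain over all papers.
print(max(depth(p) for p in citations))  # -> 3 (D -> C -> B -> A)
```

For real citation datasets one would want memoization or a topological-order pass rather than naive recursion, but the metric itself is just the longest path in a directed acyclic graph.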
We believe we can achieve trans-sapient performance by 2018, so he is not that far off the mark. But the dangers as such are highly overblown, exaggerated, pseudo-scientific fears, as always.
Elon Musk submitted a comment to edge.org a day or so ago, on this article. It was later removed.
Now, Elon has been making noises about AI safety lately in general, including, for example, mentioning Bostrom's Superintelligence on Twitter. But this is the first time that I know of that he has offered his own predictions of the timeframes involved, and his are quite soon compared to most.
We can compare this to MIRI's post from May this year, When Will AI Be Created?, which argues that it seems reasonable to think of AI as being further away, but also that there is a lot of uncertainty on the issue.
Of course, "something seriously dangerous" might not refer to full-blown superintelligent uFAI; there is plenty of room for disasters of intermediate magnitude, anywhere between the 2010 flash crash and Clippy turning the universe into paperclips.
In any case, it's true that Musk has more "direct exposure" to those on the frontier of AGI research than your average person, and it's also true that he has an audience, so I think there is some interest to be found in his comments here.