V_V comments on Leaving LessWrong for a more rational life - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Müller and Bostrom's 2014 paper 'Future progress in artificial intelligence: A survey of expert opinion' surveyed the 100 top-cited living authors in Microsoft Academic Search's "artificial intelligence" category, asking when they expected high-level machine intelligence (HLMI) to arrive.
29 of the authors responded. Their median answer was a 10% probability of HLMI by 2024, a 50% probability of HLMI by 2050, and a 90% probability by 2070.
(This may exclude "never" answers; I can't find information on whether any of the 29 authors gave that answer. But in pooled results that also include 141 respondents from surveys of a "Philosophy and Theory of AI" conference, an "Artificial General Intelligence" conference, an "Impacts and Risks of Artificial General Intelligence" conference, and members of the Greek Association for Artificial Intelligence, 1.2% of the overall pool (2/170) said we'd "never" have a 10% chance of HLMI, 4.1% (7/170) said "never" for the 50% threshold, and 16.5% (28/170) said "never" for the 90% threshold.)
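As a quick sanity check on those pooled percentages, here's a minimal sketch (the counts and the pool size of 170 are taken from the figures above):

```python
# Verify the "never" fractions reported from the pooled survey results.
pool_size = 170  # 29 top-cited authors + 141 conference/association respondents
never_counts = {
    "10% chance of HLMI": 2,
    "50% chance of HLMI": 7,
    "90% chance of HLMI": 28,
}

for threshold, count in never_counts.items():
    pct = 100 * count / pool_size
    print(f"{threshold}: {count}/{pool_size} = {pct:.1f}% said 'never'")
```

Running this reproduces the stated figures: 1.2%, 4.1%, and 16.5% respectively.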
In Superintelligence (pp. 19-20), Bostrom cites the pooled results:
Luke has pretty much the same view as Bostrom. I don't know as much about Eliezer's views, but the last time I talked with him about this (in 2014), he didn't expect AGI to be here in 20 years. I think a pretty widely accepted view at MIRI and FHI is Luke's: "We can’t be confident AI will come in the next 30 years, and we can’t be confident it’ll take more than 100 years, and anyone who is confident of either claim is pretending to know too much."
Thanks!