As some readers may recall, we had a conference this January about intelligence, and in particular the future of machine intelligence. We did a quick survey among participants about their estimates of when and how human-level machine intelligence would be developed. Now we can announce the results: Sandberg, A. and Bostrom, N. (2011): Machine Intelligence Survey, Technical Report #2011-1, Future of Humanity Institute, Oxford University.

[...]

The median estimate of when there will be a 50% chance of human-level machine intelligence was 2050.

Participants estimated a 10% chance of human-level AI by 2028, and a 90% chance by 2150.

[...]

All in all, this was a small study of a self-selected group, so it doesn't prove anything in particular. But it fits with earlier studies such as Ben Goertzel, Seth Baum, and Ted Goertzel's "How Long Till Human-Level AI?" and Bruce Klein's "When will AI surpass human-level intelligence?": people who tend to answer these kinds of surveys seem to have fairly similar mental models.

Link: Machine Intelligence Survey (PDF)

4 comments

a self-selected group

I would emphasize this. Also, a survey of contributors to a converging technologies book by Bainbridge gave a median estimate of 2085 (see page 344).

What would a survey of a cross-section of "computer experts" in 1990 have looked like if asked to predict the Internet of 2005? The level of awareness required to make that prediction accurately is not generally found; the people who did understand the field well enough to make an educated guess would show up as outliers. The above survey asks people to make a similar type of prediction.

An important aspect of AI predictions like the above is that they ask people who do not understand how AI works. The respondents are certainly experts on the history of past attempts, but that does not imply the domain knowledge required to predict human-level AI. It is a bit like asking the Montgolfier brothers to predict when man would land on the moon: experts on what has been done, but not on what is required.

There are many reasoned extrapolations of technology arrival dates based on discernible trends (think Moore's Law), but nothing comparable exists in AI. The vast majority of AI people have no basis on which to assert that the problem, something they generally can't even define, will be solved next week or next century. The few who might know something will be buried in the noise floor. Consequently, I do not find much value in these group predictions.

Zeitgeist is not predictive except perhaps in a meta way.

Thanks. I added this to section 2.8 of the Singularity FAQ.

The survey gives a 33% chance of an "extremely bad outcome" from the development of machine intelligence.

Another of their surveys gave a 5% chance of human extinction at the hands of superintelligence by 2100.

These figures may need to be "adjusted", on the grounds that the FHI seems to be a bit of a doom-mongering organisation for which the end of the world is a fund-raising and marketing tool, and that the surveys sample from its friends and associates.