To hear that 10% - of fairly general populations which aren't selected for Singularitarian or even transhumanist views - would endorse a takeoff as fast as 'within 2 years' is pretty surprising to me.
Really? It seemed really surprising to me that that number was not higher. People are used to technology doubling in less than 2 years, and it is intuitively very straightforward that if you have a human-level AI running on 1,000 computers, then you could have a 1,000 * human-level AI running on 1,000,000 computers (not really, because the scaling relationships here might not be linear, but the linear assumption is a common intuition I expect most people to share), and two years is more than enough time to build a bigger datacenter.
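The naive arithmetic behind that intuition can be made explicit in a couple of lines. This is a toy sketch of my own, not anything from the survey or the comment; the `exponent` knob is a made-up parameter for contrasting the linear assumption with diminishing returns:

```python
# Toy model: capability as a function of hardware, normalized so that
# 1,000 computers = 1x human-level (the scenario from the comment above).
def capability(n_computers, exponent=1.0):
    # exponent=1.0 is the naive linear-scaling intuition;
    # exponent<1.0 models sublinear scaling / diminishing returns.
    return (n_computers / 1_000) ** exponent

print(capability(1_000_000))        # linear assumption: 1000.0x human-level
print(capability(1_000_000, 0.5))   # sublinear (exponent 0.5): ~31.6x
```

The point is only that the linear case is the one that requires no extra assumptions to imagine, which is why it is plausibly the default intuition.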
There are two aspects of the Scary Idea which are controversial, and which I don't think this question covered:
First, that an AI could inspect its own source code and take over the job of improving itself, thereby turning e^n improvement into e^(e^n) improvement (something which has never happened before). This is generally accepted in the AGI community, but it is a foreign, non-intuitive idea outside it.
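To make the contrast concrete, here is a toy sketch (mine, not the commenter's; the rate constant r=0.5 is arbitrary and purely illustrative) of steady exponential improvement versus improvement whose rate itself compounds:

```python
import math

# Toy models of the two growth regimes discussed above:
# - constant_rate: technology improving at a fixed exponential rate, e^(r*t)
# - self_improving: the improver keeps improving itself, so the exponent
#   itself grows exponentially, e^(e^(r*t) - 1)  (the -1 just normalizes
#   both curves to start at 1.0 at t=0)

def constant_rate(t, r=0.5):
    return math.exp(r * t)

def self_improving(t, r=0.5):
    return math.exp(math.exp(r * t) - 1)

for t in [0, 5, 10]:
    print(t, constant_rate(t), self_improving(t))
```

Both curves start at the same place, but the doubly-exponential one leaves the ordinary exponential behind almost immediately, which is the whole intuition behind the Scary Idea.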
Second, that an AI could go from human-level to radically superhuman within days, hours, minutes, or even seconds. Few if any outside of MIRI believe this (and I can't get a straight answer as to whether they believe it either; if not, That Alien Message should be retracted).
> People are used to technology doubling in less than 2 years, and it is intuitively very straightforward that if you have a human-level AI running on 1,000 computers, then you could have a 1,000 * human-level AI running on 1,000,000 computers (not really, because the scaling relationships here might not be linear, but the linear assumption is a common intuition I expect most people to share), and two years is more than enough time to build a bigger datacenter.
People might expect there to be lots of AIs quickly, but not each individual AI to grow quickly. Remember,...
Vincent Müller and Nick Bostrom have just released a paper surveying the results of a poll of experts about future progress in artificial intelligence. The authors have also put up a companion site where visitors can take the poll and see the raw data. I just checked the site and so far only one individual has submitted a response. This provides an opportunity for testing the views of LW members against those of experts. So if you are willing to complete the questionnaire, please do so before reading the paper. (I have abstained from providing a link to the pdf to create a trivial inconvenience for those who cannot resist temptation. Once you take the poll, you can easily find the paper by conducting a Google search with the keywords: bostrom muller future progress artificial intelligence.)