From the abstract of the paper:
Nearly one third of experts expect this development to be ‘bad’ or ‘extremely bad’ for humanity.
Where do they get this claim from? Judging from the table in section 3.5 of the paper, they must have taken the average probability that the experts assigned to HLMI being bad or extremely bad (31%), but summarizing that as "nearly one third of experts expect ..." makes no sense. That phrasing suggests that there is a particular subset of the researchers surveyed, consisting of almost a third of them, who believe that the outcome will be bad or extremely bad. But you could get an average probability of 31% even if all the experts gave approximately the same probability distribution, in which case there would be no way to pick out which third of them expect a bad result.
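A toy calculation (with made-up numbers, not data from the paper) makes the distinction concrete:

```python
# Two hypothetical panels of 100 experts each. In panel A every expert
# assigns a 31% probability to a bad outcome; in panel B, 31 experts are
# certain of a bad outcome and 69 are certain of a good one. Both panels
# have the same mean probability, but only in panel B does "nearly one
# third of experts expect a bad outcome" describe a real subset.
panel_a = [0.31] * 100
panel_b = [1.0] * 31 + [0.0] * 69

print(sum(panel_a) / len(panel_a))  # 0.31
print(sum(panel_b) / len(panel_b))  # 0.31

# Count experts who assign more than 50% probability to a bad outcome:
print(sum(p > 0.5 for p in panel_a))  # 0  -- no expert "expects" it
print(sum(p > 0.5 for p in panel_b))  # 31 -- here a third really do
```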
I'm going to link to the paper itself, because it was non-trivially difficult for me to find, and because this page is now the top result for your suggested search query.
The most astonishing thing to me is what the paper gives as the responses to question 3, part B:
"Assume for the purpose of this question that such HLMI will at some point exist. How likely do you then think it is that within (2 years / 30 years) thereafter there will be machine intelligence that greatly surpasses the performance of every human in most professions?"
EETN group, 30 years, median: 55%
What? They think that, given an HLMI and 30 years, we have only a 55% chance of making an SHLMI? Especially since they (on median) think it'll take 36 years to g...
You could compare with the existing poll results from http://lesswrong.com/lw/jj0/2013_survey_results/
Starting in 20 to 30 years, the most important AGI precursor technology will be genetic engineering or some other technology for increasing human intelligence. Any long-term estimate of our ability to create AGI has to take into account the strong possibility that the people writing the software and designing the hardware will be much, much smarter than anyone who currently exists, possibly 30 standard deviations above the human mean in intelligence.
I don't yet know how to update on this with respect to MIRI. One third of experts expect the development of human-level AI to be ‘bad’. Well, I don't think I ever disagreed that the outcome could be bad. The problem is that risks associated with artificial intelligence are a very broad category. And MIRI's scenario of a paperclip maximizer is just one, in my opinion very unlikely, outcome (more below).
Some of the respondents also commented on the survey (see here). I basically agree with Bill Hibbard, who writes:
...Without an energetic political movement t
Given that definition, it doesn't seem too surprising to me. I guess I have been less skeptical about this than you...
I don't think much of typical humans.
These kinds of very extreme views are what I have a real problem with.
I see.
And just to substantiate "extreme views", here is Luke Muehlhauser:
It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.
That's not extreme at all, and also not the same as the EY quote. Have you read any computer security papers? You can literally get people to run programs on their computer as root by offering them pennies! That's the sort of security environment we operate in. Every botnet with millions of computers is a proof of concept.
The two quotes you gave say two pretty different things. What Yudkowsky said about the time-scale of self-improvement being weeks or hours is controversial. FWIW, I think he's probably right, but I wouldn't be shocked if it turned out otherwise.
What Luke said was about what happens when an already-superhuman AI gets an Internet connection. This should not be controversial at all. This is merely claiming that a "superhuman machine" is capable of doing something that regular humans already do on a fairly routine basis. The opposite claim - that the AI will not spread to everywhere on the Internet - requires us to believe that there will be a significant shift away from the status quo in computer security. Which is certainly possible, but believing the status quo will hold isn't an extreme view.
People might expect there to be lots of AIs quickly, but not each individual AI to grow quickly. Remember, the typical case is that parallelization sucks hard: you get sublinear scaling after a lot of work, and it often tops out at a relatively small number of computers. That's why everyone was so unhappy when the single-core-performance version of Moore's law broke down: nobody wants to program in parallel. On top of that, a lot of people have intuitions about diminishing returns and computational complexity which suggest that throwing more computing power at an AI helps ever less.
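To put a number on that intuition, Amdahl's law gives the standard upper bound on parallel speedup; a minimal sketch:

```python
# Amdahl's law: if a fraction p of the work parallelizes and the rest is
# serial, the speedup on n machines is 1 / ((1 - p) + p / n). Even with
# 95% of the work parallel, the speedup tops out at 20x no matter how
# many machines you add -- the sublinear scaling described above.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 8, 64, 1024, 10**6):
    print(n, round(amdahl_speedup(0.95, n), 2))
# 1 -> 1.0, 8 -> 5.93, 64 -> 15.42, 1024 -> 19.64, 1000000 -> 20.0
```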
For most AGI architectures I've seen, the computationally expensive work is embarrassingly parallel. Programming solutions to embarrassingly parallel problems is quite simple.
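A minimal sketch of what that looks like in practice (evaluate is a hypothetical stand-in for an expensive per-item computation, not any real AGI workload):

```python
# An embarrassingly parallel workload: each input is processed
# independently, so the work distributes across cores with no
# coordination beyond scattering inputs and gathering results.
from multiprocessing import Pool

def evaluate(x: int) -> int:
    return x * x  # placeholder for an expensive, independent computation

if __name__ == "__main__":
    with Pool() as pool:  # uses all available cores by default
        results = pool.map(evaluate, range(1000))
    print(sum(results))
```

No locks, no message passing, no shared state: this is why such workloads scale nearly linearly, in contrast to the serial-bottlenecked case Amdahl's law penalizes.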
Is that generally accepted even just in the AGI community? That's another idea I usually see exclusively associated with Singulitarian communities. (As you say, it is controversial in general.)
I guess that depends on how "generally accepted" is to be interpreted. It is not as widely accepted as, say, plate tectonics is among geologists. It is certainly a view held among all OpenCog developers, including Goertzel. OpenCog itself is basically designed for recursive self-improvement. I also recall reading an interview with Hugo de Garis where he discussed a similar recursive self-improvement scenario. Hopefully someone can find a link. Talks on friendliness and hard-takeoff risk reduction are common at the AGI conferences. It's not a universal view, however, as Pei Wang's NARS seems to be predicated on a One True Algorithm for general intelligence, which "obviously" wouldn't need improvement once found.
Perhaps my view is biased towards the communities I frequent, as my own work is on how to turn OpenCog/CogPrime into a recursively self-improving implementation. So the people I interact with already buy into the recursive self-improvement argument. It is a very straightforward argument, however: if you assume that greater-than-human intelligence is possible, and that human-level intelligence is capable of building such a thing, then it is straightforward induction that a human-level artificial computer scientist could also build such a thing, and that, either by applying improvements to itself or by staging, it could do so at an accelerating speed. To the extent that an AGI researcher accepts the two premises (uncontroversial, I think, albeit not universal), I predict with high probability that they also believe some sort of takeoff scenario is possible. There's a reason there is significant overlap between the AGI and Singulitarian communities.
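As a toy illustration of that induction (under the strong, explicitly assumed premise that the rate of improvement scales with current capability; the numbers are arbitrary):

```python
# A toy model of the staging argument: each generation of the system
# builds the next, and the size of the improvement is assumed to scale
# with current capability c, i.e. c' = c * (1 + r * c). This is an
# illustrative assumption, not a claim about any real architecture;
# the point is only that the induction compounds.
def takeoff(c: float, r: float, generations: int) -> list[float]:
    history = [c]
    for _ in range(generations):
        c = c * (1.0 + r * c)  # each generation improves the next
        history.append(c)
    return history

print([round(x, 2) for x in takeoff(c=1.0, r=0.1, generations=10)])
# slow at first, then faster than exponential once c grows large
```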
Where people differ greatly, I think, is in the limits of (software) self-improvement, the need for interaction with the environment as part of the learning process, and, as a result, both the conditions and the time-line for a hard takeoff. Goertzel is working on OpenCog for the same reason that Yudkowsky is working on FAI theory, but their views on the hard takeoff seem to be at opposite ends of the spectrum. Yudkowsky seems to think that whatever limits exist on the efficiency of computational intelligence, they are at the very least many orders of magnitude beyond what we humans will design, and that such improvements can be made with little more than a webcam sensor or access to the internet and introspection -- something that will "FOOM" in a matter of days or less. Goertzel, on the other hand, sees intelligence as navigation of a very complex search space requiring massive amounts of computation, experimental interaction with the environment, and quite possibly some sort of physical embodiment, all things which rate-limit advances to months or years and constant human interaction. I myself lie somewhere in between, but biased towards Goertzel's view.
Vincent Müller and Nick Bostrom have just released a paper surveying the results of a poll of experts about future progress in artificial intelligence. The authors have also put up a companion site where visitors can take the poll and see the raw data. I just checked the site and so far only one individual has submitted a response. This provides an opportunity for testing the views of LW members against those of experts. So if you are willing to complete the questionnaire, please do so before reading the paper. (I have abstained from providing a link to the pdf to create a trivial inconvenience for those who cannot resist temptation. Once you take the poll, you can easily find the paper by conducting a Google search with the keywords: bostrom muller future progress artificial intelligence.)