From the abstract of the paper:
Nearly one third of experts expect this development to be ‘bad’ or ‘extremely bad’ for humanity.
Where do they get this claim from? From the table in section 3.5 of the paper, it looks like they must have looked at the average probability that the experts gave for HLAI being bad or extremely bad (31%), but summarizing that as "nearly one third of experts expect ..." makes no sense. That phrasing suggests that there is a particular subset of the researchers surveyed, consisting of almost a third of them, that believes that the outcome would be bad or extremely bad. But you could get an average probability of 31% even if all the experts gave approximately the same probability distribution, and then there would be no way to pick out which third of them expect a bad result.
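To see why the summary is misleading, compare two hypothetical pools of 100 experts (my numbers, purely illustrative):

```python
from statistics import mean

# Toy illustration (hypothetical numbers): two pools of 100 experts with
# the same mean probability of a bad outcome, but different structure.
uniform   = [0.31] * 100              # every expert assigns 31%
polarized = [1.0] * 31 + [0.0] * 69   # 31 experts are certain it's bad

# Both means are 0.31, but only the second pool supports the phrasing
# "nearly one third of experts expect a bad outcome".
print(mean(uniform), mean(polarized))
```

The table in the paper reports only the mean, so it cannot distinguish these two cases.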
I'm going to actually link to the paper, because it was actually non-trivially difficult for me to find, and because this page is now the top result for your suggested search query.
The most astonishing thing to me is what the paper gives as the responses to question 3, part B:
"Assume for the purpose of this question that such HLMI will at some point exist. How likely do you then think it is that within (2 years / 30 years) thereafter there will be machine intelligence that greatly surpasses the performance of every human in most professions?"
EETN group, 30 years, median: 55%
What? They think that given an HLMI and 30 years, we have only a 55% chance of making an SHLMI? Especially since they (on median) think it'll take 36 years to g...
You could compare with the existing poll results from http://lesswrong.com/lw/jj0/2013_survey_results/
Starting in 20 to 30 years, the most important AGI precursor technology will be genetic engineering or some other technology for increasing human intelligence. Any long-term estimate of our ability to create AGI has to take into account the strong possibility that the people writing the software and designing the hardware will be much, much smarter than anyone who currently exists, possibly 30 standard deviations above the human mean in intelligence.
I don't yet know how to update on this with respect to MIRI. One third of experts expect the development of human level AI to be ‘bad’. Well, I don't think I ever disagreed that the outcome could be bad. The problem is that risks associated with artificial intelligence are a very broad category. And MIRI's scenario of a paperclip maximizer is just one, in my opinion very unlikely, outcome (more below).
Some of the respondents also commented on the survey (see here). I basically agree with Bill Hibbard, who writes:
...Without an energetic political movement t
Given that definition it doesn't seem too surprising to me. I guess I have been less skeptical about this than you...
I don't think much of typical humans.
These kinds of very extreme views are what I have a real problem with.
I see.
And just to substantiate "extreme views", here is Luke Muehlhauser:
It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.
That's not extreme at all, and also not the same as the EY quote. Have you read any computer security papers? You can literally get people to run programs on their computer as root by offering them pennies! That's the sort of security environment we operate in. Every botnet with millions of computers is a proof of concept.
The two quotes you gave say two pretty different things. What Yudkowsky said about the time-scale of self improvement being weeks or hours, is controversial. FWIW, I think he's probably right, but I wouldn't be shocked if it turned out otherwise.
What Luke said was about what happens when an already-superhuman AI gets an Internet connection. This should not be controversial at all. This is merely claiming that a "superhuman machine" is capable of doing something that regular humans already do on a fairly routine basis. The opposite claim - that the AI will not spread to everywhere on the Internet - requires us to believe that there will be a significant shift away from the status quo in computer security. Which is certainly possible, but believing the status quo will hold isn't an extreme view.
If I understand correctly, the heritability of a trait often increases as environmental variability decreases.
Yes. (More relevantly, I'd say that as the environment gets better, the heritability will increase.)
Overall, your points about the Ethiopian cows are correct but I don't think they would account for more than a relatively small chunk of the difference between the best American milk cows and regular Ethiopian milk cows. It really does look to me like humanity has pushed milk capacity dozens of standard deviations past where it would have been even centuries ago.
They found that a non-linear model predicts the data better than a linear model, though the linear model is still quite good. Again, I don't find this particularly surprising, since linear approximations often perform well on sufficiently smooth functions, especially in the neighbourhood of a stationary point (which is roughly where you'd expect the genotypes of a relatively stable population to sit).
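The underlying point is just the first-order Taylor expansion: for a sufficiently smooth genotype-phenotype map $f$ around a reference genotype $x_0$,

```latex
f(x) \;=\; f(x_0) \;+\; \nabla f(x_0)^{\top} (x - x_0) \;+\; O\!\left(\lVert x - x_0 \rVert^{2}\right),
```

so as long as the population's genotypes stay in a small neighbourhood of $x_0$, the non-linear remainder is second-order small and an additive (linear) model fits well.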
Not surprising, no, but people have seriously argued to me that things like embryo selection will not work well, or at all, because it's possible that important stuff is due to non-linear genetic interactions (most recently on Google+, but I've seen it elsewhere). So it's something that apparently needs to be established.
My problem with Hsu's line of argument is that he extrapolates the predictions of these kinds of linear models way past observed phenotypes, which has no theoretical basis, especially given that non-linear effects
I'm not sure how seriously Hsu takes the 30SD part as translating to underlying intelligence. The issue of SDs on the normal, ordinal distribution of intelligence in the population versus a hypothetical underlying cardinal scale of intelligence (http://lesswrong.com/lw/kcs/what_resources_have_increasing_marginal_utility/b0qb) is not easy to come down to a hard conclusion on, except to note that in some areas AI progress curves spend a while in the human range but then go steadily beyond it (e.g. computer chess). That suggests to me that large differences in human intelligence rankings do translate to fairly meaningful (albeit not huge) absolute intelligence differences, in which case the 30SDs might translate to a lot of real intelligence and not some trivial-but-statistically-measurable improvement in how fast they can do crosswords or something.
I think probably the best response here is to take it as saying that the lower limit will be extremely high, equivalent to the top observed phenotype, like a von Neumann. Right now, estimates of IVF sperm-donor usage in the USA suggest something like 30-60k kids a year are born that way*; imagine if the fertility doctors dropped an iterated embryo selection procedure in before implantation. I think 30-60k geniuses would make a major difference to society**, and if they happened to be even smarter than the previous top observed phenotypes...?
* I use this figure because, looking into the matter, I don't think many women who could bear kids normally would willingly sign up for IVF just to get the benefits of embryo selection. It's much too painful, inconvenient, and signals the wrong values. But women who have to do IVF if they ever want a kid would be much more likely to make use of it.
** to put 30-60k in perspective, the USA has around 4m babies a year, so ignoring demographics, the top 1% (roughly MENSA level, below-average for LW, well below average for cutting-edge research) of babies represents 40k. If all the IVFers used embryo selection and it boosted the IVF babies to an average of just 130, well below genius, it'd practically single-handedly double the 1%ers.
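To make the footnote's arithmetic explicit, here is a back-of-envelope sketch. The SD-15 normal distribution for the IVF cohort is my own simplifying assumption, not a figure from the survey or from Hsu; the numbers are illustrative only:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

births_per_year = 4_000_000
top_1pct = int(births_per_year * 0.01)   # baseline count of top-1% babies

# IQ cutoff for the top 1% of the general population (mean 100, SD 15);
# the z-score with phi(z) = 0.99 is about 2.326.
cutoff = 100 + 2.326 * 15

# Fraction of an IVF cohort averaging IQ 130 that clears the cutoff,
# assuming (hypothetically) the cohort keeps the population SD of 15.
frac_above = 1 - phi((cutoff - 130) / 15)

extra_low = 30_000 * frac_above
extra_high = 60_000 * frac_above
print(top_1pct, round(cutoff, 1), round(frac_above, 2),
      int(extra_low), int(extra_high))
```

Under these simple assumptions, roughly a third of a mean-130 cohort clears the top-1% bar (IQ ~135); the exact multiplier on the 1%ers depends heavily on what SD you assume for the selected cohort.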
Vincent Müller and Nick Bostrom have just released a paper surveying the results of a poll of experts about future progress in artificial intelligence. The authors have also put up a companion site where visitors can take the poll and see the raw data. I just checked the site, and so far only one individual has submitted a response. This provides an opportunity for testing the views of LW members against those of experts. So if you are willing to complete the questionnaire, please do so before reading the paper. (I have abstained from providing a link to the pdf to create a trivial inconvenience for those who cannot resist temptation. Once you take the poll, you can easily find the paper by conducting a Google search with the keywords: bostrom muller future progress artificial intelligence.)