http://michaelnielsen.org/blog/what-should-a-reasonable-person-believe-about-the-singularity/
Michael Nielsen, a pioneer in the field of quantum computation (from his website: "Together with Ike Chuang of MIT, he wrote the standard text on quantum computation. This is the most highly cited physics publication of the last 25 years, and one of the ten most highly cited physics books of all time (Source: Google Scholar, December 2007). He is the author of more than fifty scientific papers, including invited contributions to Nature and Scientific American"), has a pretty good essay on the probability of the Singularity. He starts from Vinge's definition of the Singularity and says that it is essentially the proposition that the following three assumptions are true:
A: We will build computers of at least human intelligence at some time in the future, let’s say within 100 years.
B: Those computers will be able to rapidly and repeatedly increase their own intelligence, quickly resulting in computers that are far more intelligent than human beings.
C: This will cause an enormous transformation of the world, so much so that it will become utterly unrecognizable, a phase Vinge terms the “post-human era”. This event is the Singularity.
Then he goes on to define the probability of the Singularity within the next 100 years as p(C|B)p(B|A)p(A), and gives what he thinks are reasonable ranges for the values of p(A), p(B|A), and p(C|B):
I’m not going to argue for specific values for these probabilities. Instead, I’ll argue for ranges of probabilities that I believe a person might reasonably assert for each probability on the right-hand side. I’ll consider both a hypothetical skeptic, who is pessimistic about the possibility of the Singularity, and also a hypothetical enthusiast for the Singularity. In both cases I’ll assume the person is reasonable, i.e., a person who is willing to acknowledge limits to our present-day understanding of the human brain and computer intelligence, and who is therefore not overconfident in their own predictions. By combining these ranges, we’ll get a range of probabilities that a reasonable person might assert for the probability of the Singularity.
In the end, he finds that the Singularity should be regarded as a serious possibility:
If we put all those ranges together, we get a “reasonable” probability for the Singularity somewhere in the range of 0.2 percent – one in 500 – up to just over 70 percent. I regard both those as extreme positions, indicating a very strong commitment to the positions espoused. For more moderate probability ranges, I’d use (say) 0.2 < p(A) < 0.8, 0.2 < p(B|A) < 0.8, and 0.3 < p(C|B) < 0.8. So I believe a moderate person would estimate a probability roughly in the range of 1 to 50 percent.
These are interesting probability ranges. In particular, the 0.2 percent lower bound is striking. At that level, it's true that the Singularity is pretty darned unlikely. But it's still edging into the realm of a serious possibility. And to get this kind of probability estimate requires a person to hold quite an extreme set of positions, a range of positions that, in my opinion, while reasonable, requires considerable effort to defend. A less extreme person would end up with a probability estimate of a few percent or more. Given the remarkable nature of the Singularity, that's quite high. In my opinion, the main reason the Singularity has attracted some people's scorn and derision is superficial: it seems at first glance like an outlandish, science-fictional proposition. The end of the human era! It's hard to imagine, and easy to laugh at. But any thoughtful analysis either requires one to consider the Singularity as a serious possibility, or demands a deep and carefully argued insight into why it won't happen.
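To see where the "1 to 50 percent" figure comes from, here is a quick sketch that simply multiplies the endpoints of the moderate ranges quoted above (a simplification that treats the three estimates as independent bounds):

# Nielsen's decomposition: p(Singularity) = p(A) * p(B|A) * p(C|B).
# Multiply the endpoints of his "moderate" ranges (0.2-0.8, 0.2-0.8, 0.3-0.8).
p_A = (0.2, 0.8)
p_B_given_A = (0.2, 0.8)
p_C_given_B = (0.3, 0.8)

low = p_A[0] * p_B_given_A[0] * p_C_given_B[0]    # 0.2 * 0.2 * 0.3 = 0.012
high = p_A[1] * p_B_given_A[1] * p_C_given_B[1]   # 0.8 * 0.8 * 0.8 = 0.512

print(f"roughly {low:.0%} to {high:.0%}")         # roughly 1% to 51%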
Hat tip to Risto Saarelma.
Nielsen characterizes the Singularity as the conjunction of the three assumptions A, B, and C above.
Assuming we avoid a collapse of civilization, I would estimate p(A) = 0.7. B requires some clarification. I will read "far more" (intelligent than humans) as "by a factor of 1000". Then, if "quickly" is read as "within 5 years", I would estimate p(B|A) = 0.2; if "quickly" is read as "within 30 years", I would raise that estimate to p(B|A) = 0.8. That is, I expect a rather slow takeoff.
But my main disagreement with most singularitarians is in my estimate of p(C|B). I estimate it at less than 0.1, even allowing two human generations (50 years) for the transformation. I just don't think that the impact of superhuman intelligence will be all that dramatic.
Let us look at some other superhuman (by a factor of 1000 or more) technologies that we already have. Each of them has transformed things, to be sure, but none of them has rapidly made things "utterly unrecognizable".
Transformative technologies - yes. Utterly unrecognizable - no. And, collectively, the existing 1000x improvements listed above are likely to prove at least as transformative as the prospective 1000x in intelligence.
ETA: In effect, I am saying that most of the things that can be done by a 1000x-human AI could also be done by the collective effort of a thousand or so 1x humans. And that the few things that cannot be done by that kind of collective effort are not going to be all that transformative.
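Plugging the p(A) = 0.7, p(B|A) = 0.8, and p(C|B) < 0.1 estimates above into Nielsen's decomposition (a rough sketch, using the 30-year reading of "quickly" and treating 0.1 as an upper bound on p(C|B)) gives an upper bound of about 6 percent:

# p(Singularity) = p(A) * p(B|A) * p(C|B), with the estimates given above.
p_A = 0.7
p_B_given_A = 0.8        # "quickly" read as "within 30 years"
p_C_given_B_max = 0.1    # "less than 0.1"

upper_bound = p_A * p_B_given_A * p_C_given_B_max
print(f"p(Singularity) < {upper_bound:.3f}")   # < 0.056, i.e. under about 6%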
Keep in mind that all those developments were produced by human-level intelligence. Human-level intelligence has made the world pretty unrecognizable compared to the world before human-level intelligence existed.