Suppose that your current estimate for the probability of an AI takeoff occurring in the next 10 years is some probability x. As technology is constantly becoming more sophisticated, presumably your probability estimate 10 years from now will be some y > x. And 10 years after that, it will be z > y. My question is, does there come a point in the future where, assuming that an AI takeoff has not yet happened despite much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point in your estimate?
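To make the shape of the question concrete, here is a toy Bayesian sketch. Everything in it is an invented assumption for illustration only: a broad lognormal prior over the year a takeoff happens, a 20% prior on "never", and a 300-year horizon. Given some such prior, each quiet year you condition on the takeoff not having happened yet, and the "next 10 years" probability can rise for a while before it peaks and starts falling, which is the inflection point I'm asking about.

```python
# Toy Bayesian model of "P(takeoff within the next 10 years)" as quiet years
# accumulate. All numbers here are illustrative assumptions, not forecasts.
import numpy as np

YEARS = 300
years = np.arange(1, YEARS + 1)

# Assumed prior over the year of takeoff: a broad lognormal-shaped bump
# centered a few decades out, plus 20% prior mass on "never happens".
density = (1 / years) * np.exp(-(np.log(years) - np.log(40)) ** 2 / 2)
prior = 0.8 * density / density.sum()
p_never = 0.2

for t in range(0, 201, 20):
    # Condition on "no takeoff in years 1..t".
    surviving_mass = prior[t:].sum() + p_never
    # Probability of takeoff in years t+1..t+10, given no takeoff so far.
    p_next_10 = prior[t:t + 10].sum() / surviving_mass
    print(f"after {t:3d} quiet years: P(takeoff in next 10y) = {p_next_10:.3f}")
```

With a unimodal prior like this one, the printed probability climbs for the first few decades of quiet and then declines toward zero; where the peak falls depends entirely on the prior you assume.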
I approach the question this way: consider the set S of algorithms capable of creating intelligent systems.
Thus far, the only member of S we know about is natural selection... call that S1.
Now consider human intelligence, which was itself created by S1... call that S2. Is S2 a member of S? There are three possibilities:
1. S2 isn't in S at all; human intelligence simply can't create intelligent systems.
2. S2 is in S, but it's a less capable intelligence-creator than natural selection (S2 <= S1).
3. S2 is a more capable intelligence-creator than natural selection (S2 > S1).
Given 1 or 2, recursive self-improvement isn't gonna happen.
Given 3: now consider a superhuman AI created by humans... call that S3. Is it a member of S?
Again, three possibilities: not in S, S3 > S2, or S3 <= S2.
I can't see why a human-created superhuman AI would necessarily be incapable of doing any particular thing that human intelligences can do, so (S3 > S2) seems pretty likely given (S2 > S1).
Lather, rinse, repeat: each generation is smarter than the generation before.
So it seems to me that, given superhuman AI, self-optimizing AI is pretty likely. But I don't know how likely superhuman AI -- or even AI at all -- is. We may just not be smart enough to build intelligent systems.
I wouldn't count on it, though. We're pretty clever monkeys.
As for "explosive"... well, that's just asking how long a generation takes. And, geez, I dunno. How long does it take to develop a novel algorithm for producing intelligence? Maybe centuries, in which case the bootstrapping process will take millennia. Maybe minutes, in which case we get something orders of magnitude smarter than us by lunchtime.
Of course, at some point returns presumably diminish... that is, there's a point where each more-intelligent generation takes too long to generate. But it would be remarkable if humans happened to be anywhere near the top of that slope today.
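Just to make the "how long is a generation?" question concrete, here is a toy simulation of the lather-rinse-repeat loop. Every constant and functional form in it (the 1.5x capability gain per generation, the decay of that gain, the growing difficulty of each design problem) is an assumption I made up for illustration, not a claim about the world. It shows two things: the overall timeline scales with how long the first generation takes, and per-generation time eventually stops shrinking and starts growing again once returns diminish.

```python
# Toy simulation of the bootstrapping timeline. Each generation's gain shrinks
# over time (diminishing returns) while the next design problem keeps getting
# harder, so per-generation time eventually turns back upward.
def bootstrap(first_gen_years, generations=80):
    intelligence = 1.0   # designer capability; human level = 1.0
    improvement = 1.5    # capability multiplier gained per generation
    difficulty = 1.0     # relative hardness of the next design problem
    total_years = 0.0
    for gen in range(1, generations + 1):
        # Smarter designers work faster, but harder problems take longer.
        gen_time = first_gen_years * difficulty / intelligence
        total_years += gen_time
        intelligence *= improvement
        improvement = 1 + (improvement - 1) * 0.95  # diminishing returns
        difficulty *= 1.1                           # problems keep getting harder
        if gen % 20 == 0:
            print(f"gen {gen:2d}: {intelligence:9.1f}x human, "
                  f"this generation took {gen_time:.4g} years, "
                  f"{total_years:.4g} years elapsed")

bootstrap(first_gen_years=100.0)     # a century to design the first successor
bootstrap(first_gen_years=1 / 8760)  # about an hour to design the first successor
```

The two calls at the bottom just bracket the range in the paragraph above: a centuries-long first generation gives a millennia-scale bootstrap, while an hour-long first generation finishes the same day. In both runs, the per-generation time bottoms out partway through and then starts climbing again, which is the "top of that slope" in question.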
An argument that is often mentioned is the relatively small evolutionary difference between chimpanzees and humans. But that huge effect, the jump in intelligence, seems more like an outlier than the rule. Take, for example, the evolution of echolocation: it seems to have been a gradual process with no obvious quantum leaps. The same can be said of eyes and other features exhibited by biological agents.
Is it reasonable to assume that such quantum leaps are the rule, based on a single case study?