Suppose that your current estimate of the probability of an AI takeoff occurring in the next 10 years is some value x. Since technology is constantly becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that, some z > y. My question is: does there come a point in the future where, assuming an AI takeoff still hasn't happened despite much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point in your estimate?
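To make the dynamic concrete, here is a minimal sketch of how such an inflection point can arise purely from conditioning on survival. It assumes a hypothetical lognormal prior over "years until takeoff" (the median of 60 years and the spread are arbitrary illustration choices, not anyone's actual estimate) and tracks P(takeoff within the next 10 years | no takeoff yet by year t):

```python
# Sketch: P(takeoff in the next decade | no takeoff yet) under an
# assumed lognormal prior over years-until-takeoff. The estimate first
# rises, then falls once enough of the prior mass has been ruled out.
import math

def prior(year, mu=60.0, sigma=0.8):
    # Hypothetical lognormal density over "years from now until takeoff".
    if year <= 0:
        return 0.0
    return math.exp(-(math.log(year) - math.log(mu)) ** 2 / (2 * sigma ** 2)) / (
        year * sigma * math.sqrt(2 * math.pi)
    )

# Discretize and normalize the prior over a long horizon.
horizon = 1000
p = [prior(y) for y in range(1, horizon + 1)]
total = sum(p)
p = [x / total for x in p]

def estimate_next_decade(t):
    # P(takeoff in years (t, t+10] | no takeoff in years 1..t)
    survived = sum(p[t:])        # prior mass not yet ruled out
    window = sum(p[t:t + 10])    # mass falling in the next decade
    return window / survived if survived > 0 else 0.0

estimates = [estimate_next_decade(t) for t in range(0, 300)]
peak = max(range(len(estimates)), key=lambda t: estimates[t])
print(f"Estimate now: {estimates[0]:.3f}")
print(f"Estimate peaks around year {peak} at {estimates[peak]:.3f}, then declines.")
```

Under a prior like this the next-decade estimate keeps rising for a while even as takeoff fails to occur, and only starts falling once survival has eliminated most of the prior's bulk; where that happens depends entirely on the assumed prior.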
A minor quibble:
Never is a very strong word, and it isn't obvious that evolution will actually find things that intelligence would not. Evolution has simply had far longer to work than intelligence has had so far. If intelligence had as much time to fiddle, it might be able to do everything evolution can (indeed, intelligence can even co-opt evolution by means of genetic algorithms). But this doesn't affect your main point, insofar as if intelligence needed timescales like that, one obviously wouldn't get an intelligence explosion.
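As an aside, "co-opting evolution" can be made concrete with a toy genetic algorithm; the target bit string, population size, and mutation rate below are arbitrary illustration choices:

```python
# Toy genetic algorithm: evolve random bit strings toward an arbitrary
# target via selection, crossover, and mutation.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP_SIZE, MUTATION_RATE = 50, 0.02

def fitness(genome):
    # Number of bits matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    point = random.randrange(1, len(TARGET))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP_SIZE // 2]  # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(f"Best genome after {generation + 1} generations:", population[0])
```

The point is only that a designed process can run variation-and-selection deliberately, at whatever speed its hardware allows, rather than waiting on biological generation times.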
I want to expand on my last comment:
Is it clear that evolution's discovery of intelligence had a larger impact than its discovery of eyes? What evidence do we have that the benefit of increasing intelligence itself outweighs its cost, compared to, say, adding a new pair of sensors?
What I am asking is how we can be sure that it would be instrumental for an AGI to increase its intelligence rather than to use its existing intelligence to pursue its terminal goal. Do we have good evidence that the gains from devoting resources to increasing intelligence outweigh the cost of being unable to spend those same resources on the terminal goal directly?