Suppose your current estimate of the probability that an AI takeoff occurs in the next 10 years is some value x. Since technology is constantly becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that, some z > y. My question is: does there come a point in the future where, assuming an AI takeoff still has not happened despite much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point in your estimate?
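To make the question concrete, here is a minimal numerical sketch. All the numbers are assumptions chosen purely for illustration, not anyone's actual estimate: put a log-normal prior over the year a takeoff arrives, then compute the conditional probability of a takeoff in the next 10 years given that none has happened by year t. Under a prior like this, that conditional probability first rises and then falls, which is exactly the kind of inflection point being asked about.

```python
import math

# Assumed prior (illustrative only): arrival time of a takeoff, in years
# from now, is log-normally distributed with median ~40 years.
MU, SIGMA = math.log(40), 0.8

def cdf(t):
    """Log-normal CDF of the assumed arrival-time distribution."""
    if t <= 0:
        return 0.0
    return 0.5 * (1 + math.erf((math.log(t) - MU) / (SIGMA * math.sqrt(2))))

def p_next_decade(t):
    """P(takeoff in (t, t+10] | no takeoff by year t)."""
    return (cdf(t + 10) - cdf(t)) / (1 - cdf(t))

# The conditional estimate rises for a few decades, then declines.
for t in (0, 20, 40, 60, 100, 200):
    print(f"year {t:>3}: {p_next_decade(t):.3f}")
```

The shape depends entirely on the assumed prior: with a memoryless (exponential) prior the estimate never changes, and with a prior that puts mass on "never", the estimate eventually declines toward zero.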
I want to expand on my last comment:
Is it clear that evolution's discovery of intelligence had a larger impact than its discovery of eyes? What evidence do we have that the benefit of increasing intelligence outweighs its cost, compared with simply adding another pair of sensors?
What I am asking is: how can we be sure that it would be instrumentally rational for an AGI to increase its intelligence rather than to use its existing intelligence to pursue its terminal goal directly? Do we have good evidence that the expected payoff from investing resources in intelligence amplification outweighs the opportunity cost of not spending those resources on the terminal goal itself?