Suppose that your current estimate of the probability of an AI takeoff occurring in the next 10 years is some value x. Since technology is constantly becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that it will be some z > y. My question is: does there come a point in the future where, assuming an AI takeoff has still not happened despite much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point in your estimate to fall?
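To make the question concrete, here is a minimal sketch of the updating dynamic being asked about, under a purely hypothetical prior over the year a takeoff occurs (a discretized log-normal with an assumed median of ~40 years; the distribution, its parameters, and the function names are all my own illustrative choices, not anything from the discussion above). Conditioning on "no takeoff yet" each year, the probability assigned to the next decade first rises and then eventually falls, which is exactly the turning point the question is probing:

```python
import math

def prior(year):
    """Assumed illustrative prior density over the takeoff year (1..500 years from now)."""
    mu, sigma = math.log(40), 0.9  # hypothetical: median roughly 40 years out
    return math.exp(-(math.log(year) - mu) ** 2 / (2 * sigma ** 2)) / (year * sigma * math.sqrt(2 * math.pi))

years = range(1, 501)
p = [prior(y) for y in years]

def p_next_decade_given_no_takeoff_yet(t):
    """P(takeoff within 10 years of year t | no takeoff by year t)."""
    remaining = sum(p[t:])       # prior mass not yet ruled out by observation
    window = sum(p[t:t + 10])    # prior mass falling in the coming decade
    return window / remaining if remaining > 0 else 0.0

for t in (0, 10, 20, 30, 50, 80, 120, 200):
    print(f"after {t:3d} uneventful years: P(takeoff in next decade) = "
          f"{p_next_decade_given_no_takeoff_yet(t):.3f}")
```

Under this particular (assumed) prior the estimate peaks a few decades out and declines thereafter; a differently shaped prior would move or remove that peak, which is what the answer to the question turns on.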
So this is a valid point that betrays a possibly unjustified leap in logic on my part. The thought process (although honestly I haven't thought about it that much) is roughly that any optimizer powerful enough to self-optimize its way into a substantial take-off will have to predict and interact with its environment well enough that it effectively needs to solve the natural-language problem and talk to humans (we are, after all, a major part of its environment until/unless it decides that we are redundant). But the justification for this is to some extent just weak intuition, and the known sample of mind-space is very small, so intuitions informed by such limited experience should be suspect.
(nods) Yeah, agreed.
I would take it further, though. Given that radically different kinds of minds are possible, the odds that the optimal architecture for supporting self-optimization at a given degree of intelligence happens to be something approximately human-like seem pretty low.