Suppose that your current estimate of the probability of an AI takeoff occurring in the next 10 years is some value x. As technology is constantly becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that some z > y. My question is: does there come a point in the future where, assuming an AI takeoff has still not happened despite much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point to be?
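To make the rise-then-fall intuition concrete, here is a minimal toy model, not anyone's actual forecast: assume (purely for illustration, with made-up parameters) that if takeoff is possible at all, its annual probability ramps up as technology matures, and put a 50% prior on the hypothesis that takeoff is simply never going to happen. Each year that passes without a takeoff shifts weight toward "never", and at some point that effect outweighs the rising hazard, producing exactly the kind of inflection point the question asks about.

```python
import math

# Illustrative, made-up parameters: a logistic "technology ramp" for the
# annual takeoff hazard, under the hypothesis that takeoff is possible at all.
H_MAX = 0.10          # peak annual takeoff probability once technology matures
T_MID = 40            # years from now at which the ramp is halfway up
SCALE = 10            # how gradually the ramp rises
PRIOR_POSSIBLE = 0.5  # prior probability that a takeoff is possible at all

def hazard(t):
    """Annual takeoff probability in year t, under the 'possible' hypothesis."""
    return H_MAX / (1 + math.exp(-(t - T_MID) / SCALE))

def estimate_next_decade(t):
    """P(takeoff within 10 years of year t | no takeoff observed in years 0..t-1)."""
    # Probability of seeing no takeoff through year t-1 under each hypothesis.
    survive_possible = 1.0
    for s in range(t):
        survive_possible *= 1 - hazard(s)
    survive_never = 1.0  # under the 'never' hypothesis, silence is certain

    # Posterior that takeoff is possible, given the silence so far.
    post_possible = (PRIOR_POSSIBLE * survive_possible) / (
        PRIOR_POSSIBLE * survive_possible + (1 - PRIOR_POSSIBLE) * survive_never
    )

    # Chance of takeoff in the next 10 years, conditional on it being possible.
    no_takeoff_decade = 1.0
    for s in range(t, t + 10):
        no_takeoff_decade *= 1 - hazard(s)
    return post_possible * (1 - no_takeoff_decade)

for t in range(0, 151, 10):
    print(f"year {t:3d}: P(takeoff in next 10 yrs) = {estimate_next_decade(t):.3f}")
```

With these particular numbers the estimate climbs for a few decades and then declines, peaking somewhere around year 40; where exactly the peak falls depends entirely on the assumed ramp and prior, which is the question being put to the reader.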
There seem to be two separate questions: when we will have artificial intelligence that approximates human intelligence, and when an AI takeoff will occur. If we get the first and the second doesn't happen shortly thereafter, then we should strongly reduce our estimate that the second will happen at all. But the second cannot happen until the first has. Moreover, if the two are tightly coupled (that is, they would likely occur very close together), then the only reason we haven't already observed such an event might be that it tends to wipe out the species that triggers it, and we're seeing survivorship bias. (This is the sort of anthropic reasoning that makes me feel very confused, so I'm not sure it's a reasonable observation.)
I would say that if we don't have human-like AI in the next fifty years, and there's no obvious temporary barrier preventing technological improvement (e.g. a global collapse of civilization, or at least enough bad stuff to prevent almost any research), then I'd start seriously thinking that people like Penrose have a point. Note that this doesn't mean there's anything like a soul (Penrose's ideas, for example, suggest that an appropriately designed quantum computer could mimic a conscious entity), although that idea might also need to be on the table. I don't consider any of those hypotheses at all likely right now, but I'd say fifty years is about where the good rationalist should recognize their confusion.
Do you mean to say that only something that approximates human intelligence can initiate an "AI takeoff"? If so, can you summarize your reasons for believing that?