Suppose that your current estimate of the probability of an AI takeoff occurring in the next 10 years is some probability x. As technology is constantly becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that, some z > y. My question is: does there come a point in the future where, assuming an AI takeoff still has not happened despite much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point to fall?
I find it impossible to predict without knowing the specifics of the future scenario. As we get closer to creating an AI, we are almost guaranteed to discover more of the difficulties involved.
Maybe in 10 years we will run into some unforeseen problem that we have no idea how to resolve, in which case my probability estimate would of course drop significantly.
Or, if we have not seen any significant progress in the field, I predict my estimate would remain constant for the first 30 years, then decrease with each year that passes without progress.
If there is a continuous stream of progress that doesn't also reveal huge new barriers, then I don't believe my estimate would ever go down. But I find it hard to imagine a scenario in which there is continual progress and no major roadblocks, yet AI still has not been developed more than 200 years from now.
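To make the "inflection point" idea concrete, here is a minimal sketch, not anyone's actual forecast: assume a lognormal prior over the number of years until a takeoff (the median of ~40 years and the spread are arbitrary assumptions), and each decade condition on "no takeoff yet." Under such a prior, the probability assigned to the next 10 years first rises and then falls, and the peak is the inflection point the question asks about.

```python
# Sketch: conditional probability of a takeoff in the next decade,
# given that t years have already passed with no takeoff, under an
# assumed lognormal prior over the arrival time. Parameters are arbitrary.

from math import erf, log, sqrt

MU, SIGMA = log(40), 1.0   # assumed prior: median arrival ~40 years out


def cdf(t: float) -> float:
    """Lognormal CDF: P(takeoff happens within t years of today)."""
    if t <= 0:
        return 0.0
    return 0.5 * (1 + erf((log(t) - MU) / (SIGMA * sqrt(2))))


def p_next_decade(t: float) -> float:
    """P(takeoff in years (t, t+10] | no takeoff by year t)."""
    survival = 1 - cdf(t)
    return (cdf(t + 10) - cdf(t)) / survival


estimates = [(t, p_next_decade(t)) for t in range(0, 201, 10)]
peak_year = max(estimates, key=lambda pair: pair[1])[0]

for t, p in estimates:
    marker = "  <- estimate peaks here" if t == peak_year else ""
    print(f"after {t:3d} years with no takeoff: P(next decade) = {p:.2f}{marker}")
```

With these particular numbers the estimate peaks a couple of decades in and declines slowly thereafter; a different prior (or, as noted above, new evidence about barriers or progress) would move or flatten that peak.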