Suppose that your current estimate of the probability of an AI takeoff occurring within the next 10 years is some value x. Since technology is constantly becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that some z > y. My question is: does there come a point in the future where, assuming an AI takeoff still has not happened despite much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point in your estimate?
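To make the shape of the question concrete, here is a minimal sketch under an entirely made-up prior: a mixture that puts some mass on "takeoff never happens" and spreads the rest over future takeoff years. Every number in it (P_NEVER, the lognormal parameters) is an illustrative assumption, not anyone's actual estimate.

```python
# Illustrative sketch: under a mixture prior over the takeoff year T,
# the rolling estimate P(takeoff within 10 years | no takeoff yet)
# can rise for a while and then decline, which is exactly the
# inflection point the question asks about.
from scipy.stats import lognorm

P_NEVER = 0.2  # assumed prior mass on "takeoff never happens"
# Assumed: if takeoff does happen, T is lognormal with median ~40 years out.
T_dist = lognorm(s=1.0, scale=40.0)

def p_next_decade(t):
    """P(t < T <= t+10 | T > t) under the mixture prior."""
    survive = P_NEVER + (1 - P_NEVER) * T_dist.sf(t)            # P(T > t)
    hit = (1 - P_NEVER) * (T_dist.cdf(t + 10) - T_dist.cdf(t))  # P(t < T <= t+10)
    return hit / survive

for t in range(0, 201, 20):
    print(f"year {t:3d}: P(takeoff in next 10y) = {p_next_decade(t):.3f}")
```

Under this prior the ten-year estimate climbs for a couple of decades and then falls, because continued non-takeoff increasingly favors the "never" hypothesis; the turnaround date is determined entirely by the shape of the prior.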
Check definition 2.4. In the technical sense used in the document, a predictor is not defined as something that outputs the sequence; it is defined as something that eventually learns to predict the sequence, making at most a finite number of errors.
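For readers without the document at hand, a standard way to formalize this finite-error notion looks like the following (a sketch; the exact wording of definition 2.4 may differ):

$$p \text{ predicts } \omega \;\iff\; \#\{\, n \in \mathbb{N} : p(\omega_1 \cdots \omega_{n-1}) \neq \omega_n \,\} < \infty$$

That is, p is allowed to guess wrong, but only finitely often.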
That strings of high Kolmogorov complexity can be "predicted" by trivial algorithms is quite compatible with this notion of "prediction".
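A minimal sketch of why (with os.urandom standing in for an incompressible string, and trivial_predictor a hypothetical toy, not anything from the document): the constant guess-zero predictor predicts, in the finite-error sense, any sequence that is eventually all zeros, no matter how complex its prefix is.

```python
# Illustrative sketch: a trivial predictor "predicts" a sequence whose
# prefix is essentially algorithmically random, because it errs only
# on that finite prefix.
import os

def trivial_predictor(prefix):
    """Ignores all evidence and always guesses 0."""
    return 0

# High-complexity prefix (os.urandom as a stand-in for an incompressible
# bit string), followed by an all-zero tail.
prefix = [b & 1 for b in os.urandom(1000)]
sequence = prefix + [0] * 10_000

errors = sum(1 for i in range(len(sequence))
             if trivial_predictor(sequence[:i]) != sequence[i])
print(f"errors on an 11,000-bit sequence: {errors}")  # roughly 500, and finite
```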
So, past the last wrongly predicted output, the whole sequence is at most as complex as the (improved) predictor?
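One way to make the question precise (a sketch, not taken from the document): suppose p errs on ω only at finitely many positions. Hard-code those corrections into p to obtain an improved predictor p'; then p' computes ω exactly, so for every n

$$K(\omega_{1:n}) \;\le\; K(p') + K(n) + O(1),$$

where K(p') exceeds K(p) only by the cost of encoding the finitely many corrections. So, up to those additive terms, yes: past the last error the sequence carries no more information than the improved predictor does.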