Suppose that your current estimate of the probability of an AI takeoff occurring in the next 10 years is some value x. As technology is constantly becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that it will be some z > y. My question is: does there come a point in the future where, assuming an AI takeoff still has not happened despite far more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point in your estimate?
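For concreteness, here is a minimal sketch of the effect I have in mind; it is my own illustration, and the log-normal prior over the takeoff date (and its parameters) is an arbitrary assumption, not anything anyone has endorsed. It shows how merely conditioning on "no takeoff yet" can make the rolling 10-year probability rise for a while and then fall:

```python
# Illustrative sketch (arbitrary assumed prior): how the rolling
# "probability of takeoff in the next 10 years" can first rise and then
# fall purely from conditioning on "no takeoff has happened by year t".
from scipy.stats import lognorm

# Hypothetical prior over the takeoff date, in years from now.
# Shape s and scale are made-up parameters chosen only for illustration
# (median roughly 40 years out).
prior = lognorm(s=1.0, scale=40.0)

def p_next_decade(t):
    """P(takeoff before t+10 | no takeoff by year t) under the assumed prior."""
    survival = prior.sf(t)                      # P(T > t)
    window = prior.cdf(t + 10) - prior.cdf(t)   # P(t < T <= t+10)
    return window / survival

for t in range(0, 201, 20):
    print(f"year {t:3d}: P(takeoff in next 10 yrs) = {p_next_decade(t):.3f}")
```

Under this particular prior the 10-year estimate climbs for a couple of decades and then declines; a different prior shifts where the peak sits, but any unimodal prior over the takeoff date produces the same qualitative rise-then-fall shape.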
My main point regarding the advantage of being "irrational" was that if we had all thought like perfect rational agents, e.g. closer to how Eliezer Yudkowsky thinks, we would have missed out on many of the discoveries that were made by people pursuing "Rare Disease for Cute Kitten" activities.
How much of what we know was actually the result of people thinking quantitatively and attending to scope, probability, and marginal impacts? How much of what we know today is the result of dumb luck versus goal-oriented, intelligent problem-solving?
What evidence do we have that intelligent, goal-oriented experimentation yields enormous advantages over evolutionary discovery relative to its cost? What evidence do we have that any increase in intelligence vastly outweighs its computational cost and the time needed to discover it?