Suppose that your current estimate of the probability of an AI takeoff coming in the next 10 years is some probability x. As technology is constantly becoming more sophisticated, presumably your probability estimate 10 years from now will be some y > x. And 10 years after that, it will be z > y. My question is: does there come a point in the future where, assuming that an AI takeoff has still not happened in spite of much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect the inflection point in your estimate?
The intuitive explanation for why the normal distribution drops off faster is that it makes such strong predictions about the region around 2050: once you've reached 2070 with no AI, you've 'wasted' most of your possible drawers, to continue the original blog post's metaphor.
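To make the 'wasted drawers' point concrete, here is a minimal sketch (not code from the original post, and the numbers are purely illustrative assumptions): suppose that with prior probability 0.8 a takeoff eventually happens, and that if it does, its date follows a Normal(2050, 20 years) distribution. Conditioning on "no takeoff by year Y" then shows both how quickly the normal prior's mass gets used up and the inflection point asked about above, where the chance of a takeoff in the next decade rises toward the peak and then falls away after it.

```python
# Sketch of a mixture prior: P(takeoff ever) = 0.8, and if it happens,
# the takeoff year is Normal(2050, sd=20). All numbers are assumptions
# for illustration, not estimates from the original discussion.
from scipy.stats import norm

P_EVER = 0.8                  # assumed prior P(takeoff ever happens)
date_prior = norm(2050, 20)   # assumed prior over the takeoff year

def posterior(year):
    """Return (P(ever | none by year), P(within next 10y | none by year))."""
    mass_left = date_prior.sf(year)                   # prior mass after `year`
    p_none_so_far = 1 - P_EVER + P_EVER * mass_left   # evidence: nothing yet
    p_ever = P_EVER * mass_left / p_none_so_far
    p_decade = P_EVER * (date_prior.cdf(year + 10) - date_prior.cdf(year)) / p_none_so_far
    return p_ever, p_decade

for y in (2030, 2050, 2070, 2090, 2110):
    ever, decade = posterior(y)
    print(f"{y}: P(ever)={ever:.2f}  P(next decade)={decade:.2f}")
```

Under these made-up numbers, the next-decade probability climbs from roughly 0.14 in 2030 to about 0.26 around 2050, then falls to about 0.06 by 2090, because most of the normal prior's drawers have already been opened empty by then.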
To get a visual analogue of the probability mass, you could map the normal curve onto a uniform distribution, something like: 'if we imagine each year at the peak corresponds to 30 years in a uniform version, then it's as if we were looking at the period 1500-2100 AD, so 2070 is very late in the game indeed!' To give a crude ASCII diagram, the mapped normal curve would look like this, where every space/column is one equal chance to make AI: