It's bumpy because either "normal" Deep Learning progress will get us there or there is a big roadblock ahead that will require a major scientific breakthrough.
The Deep Learning scenario creates a bump within the next two decades, I would say.
Whole brain simulation could create another bump, but I don't know when.
The "major scientific breakthrough" scenario doesn't create a bump. It could've happened yesterday.
It's hard to come up with a reasonable probability distribution for a one-off event; it's not clear what the reference class might be. But my guess is that it would be some form of power law, because power laws are universal and scale-independent. No idea about the exponent, though.
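As a gloss on "scale-independent" (my sketch, not anything the comment spells out): a power-law waiting time has a survival function of the form

$$\Pr(T > t) = \left(\frac{t_0}{t}\right)^{\alpha}, \qquad t \ge t_0,$$

so that

$$\Pr(T > kt \mid T > t) = \frac{(t_0/kt)^{\alpha}}{(t_0/t)^{\alpha}} = k^{-\alpha},$$

which depends only on the ratio $k$, not on how long you have already waited. An exponential would instead give a fixed probability per unit time; the power law gives a fixed probability per doubling.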
For example, you might think that under a power law, the longer AGI has failed to arrive, the longer you should expect to keep waiting for it. For that matter, any non-exponential distribution has this property: the non-occurrence of the event by a certain time will change your expectation of it going forward. I'm curious whether people think this is the case for AGI, and if so, why. (Also curious whether this question has been asked before.)
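To make that concrete, here is a minimal sketch (my own illustration, with made-up parameters) comparing the expected remaining waiting time $E[T - t \mid T > t]$ under an exponential distribution, which is memoryless, and a Pareto power law, which is not:

```python
# Minimal sketch: how the expected remaining wait E[T - t | T > t]
# evolves as time passes without the event occurring.
# Parameters (mean wait 30 "years", Pareto exponent 2) are illustrative only.
from scipy import stats

exponential = stats.expon(scale=30)       # memoryless: constant hazard rate
power_law = stats.pareto(b=2, scale=10)   # Pareto with tail exponent 2

def expected_remaining(dist, t):
    """E[T - t | T > t], by numerical integration over the tail."""
    return dist.expect(lambda x: x - t, lb=t, conditional=True)

for t in [10, 20, 40, 80]:
    print(f"t={t:>2}: exponential {expected_remaining(exponential, t):5.1f}, "
          f"power law {expected_remaining(power_law, t):5.1f}")
```

This should print an exponential column that stays at roughly 30 no matter how long the event has failed to occur, while the power-law column grows linearly with $t$: the longer you've waited, the longer you should expect to keep waiting. The exponential is in fact the unique continuous distribution with the memoryless property, which is why any other choice makes non-occurrence informative.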