It seems that there are two points of particular relevance in predicting AGI timelines: (i) the median, i.e., the date by which the chance of AGI is believed to be 50%, and (ii) the latest date before which the chance of AGI is believed to be insignificant.

For purposes of this post, I am defining AGI as something that (i) can outperform average trained humans on 90% of tasks and (ii) does not routinely produce clearly false or incoherent answers.  (I recognize that this definition is somewhat fuzzy, with "trained" and "tasks" both being terms susceptible to differing interpretations and difficult to apply; AGI, like obscenity, lends itself to a standard of "I'll know it when I see it.")

Recent events have led me to update my timelines.  Like nearly everyone I am aware of, I have shortened mine.  (And, obviously, the facts that (i) updates across people seem to move consistently in one direction (though I am not aware of any detailed studies of this) and (ii) my own updates have moved consistently in one direction suggest that the estimates may be biased.)

The date by which I think there is a 50% chance of AGI is now solidly in the 2030s instead of the 2040s.  This doesn't seem to be that significant a change, though more time to prepare is likely better than less.  Our civilizational capacity is unfortunately unlikely to materially increase between 2035 and 2045.

Far more importantly, last year at this time I was confident there was essentially no chance AGI would be developed before January 1, 2029.   Four months ago, I was confident there was essentially no chance AGI would be developed before July 1, 2027. But now, there is no longer a date with which I can complete the sentence "I am confident there is essentially no chance AGI will be developed before...".   

To be sure, I think the chance that AGI will be developed before January 1, 2029 is still low, on the order of 3% or so; but there is a pretty vast difference between "small but measurable" and "not going to happen".
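Since the post pins down two quantiles of a subjective distribution (roughly 3% by 2029, and a median "solidly in the 2030s"), here is a minimal sketch of how one might interpolate between them. The lognormal shape, the early-2023 anchor, and the specific 2035 median are illustrative assumptions of mine, not the author's:

```python
# Minimal sketch: fit a lognormal over "years until AGI" to the post's
# two stated quantiles. The lognormal shape, the early-2023 anchor, and
# the exact 2035 median are illustrative assumptions, not the author's.
import numpy as np
from scipy.stats import norm

now = 2023.0
p_low, year_low = 0.03, 2029.0   # "on the order of 3% or so" by 2029
p_med, year_med = 0.50, 2035.0   # "solidly in the 2030s" (assumed mid-decade)

# ln(T) ~ Normal(mu, sigma); the median pins mu, the 3% quantile pins sigma.
mu = np.log(year_med - now)
sigma = (np.log(year_low - now) - mu) / norm.ppf(p_low)

# Implied probabilities at other dates, given these assumptions.
for year in (2027, 2029, 2035, 2040, 2050):
    p = norm.cdf((np.log(year - now) - mu) / sigma)
    print(f"P(AGI by {year}) ~ {p:.0%}")
```

Under these assumptions the left tail thins out very quickly before 2029, which is why the "essentially no chance before..." date carries information the median alone does not.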

5 comments

"It's not the end of the world... but you can see it from here."

The impressive ChatGPT output we are witnessing is computed in real time on relatively little hardware. Somewhere behind the scenes, all kinds of people and organizations are messing with ChatGPT+++. Striking gold, hitting "true intelligence", might take another decade, or two, or three, or a hundred. It might happen in a month. Or tomorrow. Or in a second. AI-go-FOOM could happen at any moment, and it is only becoming more likely.

The French Revolution and the subsequent devastating Napoleonic Wars happened during the early onset of steam engines and the Industrial Revolution. Hardly anybody links these events, though, and certainly nobody from that period whom I have read. To what extent is our current social unrest/inflation/Ukraine war already caused by exponentially advancing technology?

I think there's a <1% chance that AGI takes >100 years, and almost all of that probability is humanity destroying itself in other ways, like biorisk or nuclear war or whatever.

How calibrated are you on 1% predictions?
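One way to make this question concrete: distinguishing a calibrated 1% forecaster from a miscalibrated one takes on the order of hundreds of resolved predictions. A minimal sketch, with illustrative sample sizes of my own choosing:

```python
# Minimal sketch (my own illustration): why calibration at the 1% level
# is hard to test. If you make n independent "1% chance" predictions,
# the number that come true is Binomial(n, p); with small n, seeing zero
# hits barely distinguishes a true rate of 1% from a true rate of 5%.
from scipy.stats import binom

for n in (10, 100, 1000):
    p_cal = binom.pmf(0, n, 0.01)   # chance of zero hits if calibrated
    p_off = binom.pmf(0, n, 0.05)   # chance of zero hits if 5x overconfident
    print(f"n={n:4d}: P(0 hits | p=0.01)={p_cal:.2f}, P(0 hits | p=0.05)={p_off:.2f}")
```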

For purposes of this post, I am defining AGI as something that (i) can outperform average trained humans on 90% of tasks and (ii) does not routinely produce clearly false or incoherent answers.

Based on this definition, it seems like AGI almost exists, or already does. ChatGPT is arguably already an AGI because it can, for example, score 1000 on the SAT, which is at the average human level.

I think a better definition would be a model that can outperform professionals at most tasks: for example, a model that's better at writing than a human New York Times writer.

To be sure, I think the chance that AGI will be developed before January 1, 2029 is still low, on the order of 3% or so; but there is a pretty vast difference between "small but measurable" and "not going to happen".

Even if one doesn't believe ChatGPT is an AGI, it doesn't seem like we need much additional progress to create a model that can outperform the average human at most tasks. 

I personally think there is a ~50% chance of this level of AGI being achieved by 2030.