I mostly agree with the points in that link, but I also want to say that, at the end of the day, I think it's generally fine and healthy for someone to describe their beliefs in terms of a probability distribution even when they have very little to go on. So in this particular case, if someone says "I think AGI will probably (>80%) come in the next 10 years", I would say "my own beliefs are different from that", but I would not describe that person as "overconfident", per se.

It's not as if there's a default timeline probability distribution, with most of its mass beyond 10 years, such that you need great personal confidence & swagger to overrule it. There is no default! The link agrees that the arguments for long timelines are just as sketchy as the arguments for short timelines. It's still appropriate for people to do the best they can to form probabilistic beliefs.
This article is extremely wrong and kind of a waste of text.
Breaking down the author's points:
I think maybe this was mistitled. It makes a solid argument against certainty in AI timelines, but it does not argue against attempting forecasts at all, or against taking seriously the distribution of estimates across those attempts.
It points to the accuracy of some early predictions of space flight, then notes that other proposed designs were never implemented. But it could well be that there are multiple viable ways to build a rocket, a steam engine, or an AGI.
Von Braun would weep at our lack of progress on space flight. But we failed to progress there because there don't actually seem to be near-term economic incentives; for AGI, there probably are.
Timelines are highly uncertain, but dismissing the possibility of short timelines makes as little sense as dismissing long timelines.
Synopsis as tweeted by the author: "Some of my friends are very invested in predicting when AGI is supposed to arrive. The history of technology development shows that you can't time things like this."