The Journal of Experimental & Theoretical Artificial Intelligence has - finally! - published our paper "The errors, insights and lessons of famous AI predictions – and what they mean for the future":

Predicting the development of artificial intelligence (AI) is a difficult project – but a vital one, according to some analysts. AI predictions already abound: but are they reliable? This paper starts by proposing a decomposition schema for classifying them. It then constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated by careful analysis of five famous AI predictions: the initial Dartmouth conference, Dreyfus's criticism of AI, Searle's Chinese room paper, Kurzweil's predictions in The Age of Spiritual Machines, and Omohundro's 'AI drives' paper. These case studies illustrate several important principles, such as the general overconfidence of experts, the superiority of models over expert judgement and the need for greater uncertainty in all types of predictions. The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.

The paper was written by me (Stuart Armstrong), Kaj Sotala and Seán S. Ó hÉigeartaigh, and is similar to the series of Less Wrong posts starting here and here.

9 comments

Well done getting published in the Journal! It's an interesting read, and it was nice to see the predictions graphed, which really highlights the range of results.

In terms of the paywalled site, could you put a copy (or a draft copy, if there are copyright issues) on a more public site, like MIRI's? I can read it through my work, but I can't distribute it, as it is watermarked with my employer's name on each page.

The paper was written by me (Stuart Armstrong), Kaj Sotala and Seán S. Ó hÉigeartaigh

That's very generous of Stuart to say, but in all honesty my name is mostly there because the paper draws on the set of predictions that I helped classify. I didn't have much to do with the actual writing of it.

I remember having many useful conversations about the data and how to analyse it.

Huh. I don't remember having said much that would have been particularly valuable for the stuff in this particular paper, but if you disagree, I guess I'll go along with that. :-)

Paywalled :(

I'm happy to send a copy to anyone who messages me. Stuart, my understanding of the legalese is that we could put up the preprint/final draft pdf - would MIRI be willing to host it?