"How We're Predicting AI — or Failing to"

Post author: lukeprog | 18 November 2012 10:52AM | 11 points

The new paper by Stuart Armstrong (FHI) and Kaj Sotala (SI) has now been published (PDF) as part of the Beyond AI conference proceedings. Some of these results were previously discussed here. The original predictions data are available here.

Abstract:

This paper will look at the various predictions that have been made about AI and propose decomposition schemas for analysing them. It will propose a variety of theoretical tools for analysing, judging and improving these predictions. Focusing specifically on timeline predictions (dates given by which we should expect the creation of AI), it will show that there are strong theoretical grounds to expect predictions to be quite poor in this area. Using a database of 95 AI timeline predictions, it will show that these expectations are borne out in practice: expert predictions contradict each other considerably, and are indistinguishable from non-expert predictions and past failed predictions. Predictions that AI lies 15 to 25 years in the future are the most common, from experts and non-experts alike.
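The abstract's central empirical claim is about clustering: how many predictions place AI 15 to 25 years beyond the date of the prediction. A minimal sketch of that check, using made-up (year made, year predicted) pairs rather than the paper's actual database:

```python
# Sketch: how many timeline predictions fall 15-25 years out from when
# they were made? The pairs below are illustrative placeholders only,
# NOT the 95-prediction dataset analysed in the paper.
predictions = [
    (1960, 1978), (1970, 1990), (1985, 2005), (1993, 2015),
    (2000, 2045), (2005, 2025), (2008, 2030), (2010, 2060),
]

# Horizon = years between making the prediction and the predicted date.
horizons = [pred - made for made, pred in predictions]
in_band = sum(1 for h in horizons if 15 <= h <= 25)
fraction = in_band / len(horizons)
print(f"{in_band}/{len(horizons)} predictions ({fraction:.0%}) lie 15-25 years out")
```

Run against the real dataset (linked in Stuart's comment below the post), the same loop would reproduce or refute the clustering claim directly.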

Comments (18)

Comment author: gwern 18 November 2012 06:47:10PM 4 points [-]

It's a good paper overall, and I'm glad to see it's been published - especially the Maes-Garreau material! (I wonder what Kevin Kelly made of our results? His reaction would've been neat to mention.)

But reading it all in one place, I think one part seems pretty weak: the criticism of the 'expert' predictions. It seems to me there ought to be more rigorous forms of assessment, and I wonder about possible explanations for the clumping at 20+ years: the full median-estimate graph seems to show a consistent expert trend post-1970s to put AI at x-2050 (I can't read the dates because the graphs are so illegible, what the heck?), and also many recent predictions. Perhaps there really is an expert consensus forming, the clump is due to the topic gaining a great deal of attention recently, and the non-expert predictions are just taking their cue from the experts (as one would hope!).

Comment author: rompi 18 November 2012 07:09:14PM 4 points [-]

Hi

Re the graph quality: I'm REALLY sorry and I have to apologize to Stuart for the poor quality of images - it's kind of my fault... When I typeset the final version of the proceedings, it was in A5 format on A4 pages. We sent it to the printing company and they ran it through some program that cropped the pages to A5. Alas, that program also terribly compressed the images, and I didn't check carefully before letting them print it. So this is it... Once more, sorry about that.

The only thing I can do is fix it in this electronic version - will be done ASAP.

Anyway, thanks Stuart for your great talk!

Best wishes

Jan Romportl

Comment author: gwern 18 November 2012 07:19:30PM *  2 points [-]

Well, at least it's partially fixed... (Actually this reminds me that, as ElGalambo pointed out earlier, I should update the Wikipedia Maes-Garreau article.)

Comment author: Stuart_Armstrong 19 November 2012 12:52:10PM 3 points [-]

The original data can be found via: http://lesswrong.com/lw/e79/ai_timeline_prediction_data/

(much better to use that than to squint at the pictures!)

My subjective impressions: predictors very rarely quote or reference each other when making predictions. Many predictions seem to be purely individual guesses. I've seen no sign of an expert consensus, or of experts doing much to critique or commend each other's work. I really feel that predicting AI has not been seen as something where anyone should listen to other people's opinions. There are some exceptions - Kurzweil, for instance, seems famous enough that people are willing to quote his estimates, usually to claim he got it wrong - but too few.

Comment author: gwern 19 November 2012 05:20:27PM 2 points [-]

My subjective impressions: predictors very rarely quote or reference each other when making predictions. Many predictions seem to be purely individual guesses. I've seen no sign of an expert consensus, or of experts doing much to critique or commend each other's work. I really feel that predicting AI has not been seen as something where anyone should listen to other people's opinions.

They may not cite each other, but the influence can still be there as background reading, etc. I may not cite Legge when I think there's a good chance of breakthroughs in the 2020s, but the influence is there (well, it was until I mentioned him just now). To give a real-world example: compiling http://www.gwern.net/2012%20election%20predictions I know that the forecasters were all reading each other's blogs or Twitter feeds, because in scouring their sites I see plenty of cross-links and shared topics; but anyone who looked at just the relevant pages of predictions or prediction CSVs would miss that completely and think they were deriving their similar predictions from independent models.

I think there's a lot of shared ideas and reading which is rarely explicitly cited in the same passage as a specific prediction, with the exception of really offensive estimates like Kurzweil's self-promoting ones (have you been reading the reviews of his latest book? Everyone's dragging out Hofstadter's old dog shit quote, and one can't help but feel he would not have been so explicit and crude if Kurzweil didn't really rub him the wrong way). But I don't know how one would test the consensus idea other than waiting and seeing whether expert predictions continue to cluster around 2040 even as we hit the 2020s and 2030s.
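The wait-and-see test described above can be made concrete: if there is a genuine consensus on a fixed date (say ~2040), the predicted horizon should shrink as the prediction year advances; if predictors just keep saying "20 years away," the horizon stays flat. A rough sketch, with entirely made-up data:

```python
# Sketch of the consensus test: compare how the horizon (years-to-AI)
# changes per year of prediction date. Both datasets below are
# illustrative, not real predictions.
fixed_date = [(2012, 2040), (2020, 2040), (2030, 2042)]  # consensus on ~2040
sliding = [(2012, 2032), (2020, 2040), (2030, 2050)]     # "always ~20 years away"

def horizon_trend(data):
    """Change in horizon per year elapsed, from first to last prediction.

    Near -1 means the predicted date is fixed (horizon shrinks as time
    passes); near 0 means the horizon is constant (the date slides).
    """
    (y0, p0), (y1, p1) = data[0], data[-1]
    return ((p1 - y1) - (p0 - y0)) / (y1 - y0)

print(horizon_trend(fixed_date))  # strongly negative: date consensus
print(horizon_trend(sliding))     # near zero: sliding horizon
```

Applied to future entries in the prediction database, this one-number summary would distinguish a forming consensus from the Maes-Garreau-style sliding horizon.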

Comment author: Kaj_Sotala 20 November 2012 05:17:32AM 1 point [-]

I'm actually thinking that the "non-experts were no better than experts" bit is maybe a little misleading, as I remember seeing a lot of the non-experts base their predictions on what experts had been saying.

Comment author: Stuart_Armstrong 20 November 2012 08:15:27AM 0 points [-]

Really? That wasn't my recollection. But you probably saw the data more than I did, so I'll bear that in mind in future!

Comment author: rompi 22 November 2012 12:49:02PM 0 points [-]

The link now points to the fixed proceedings (better image resolution). Sorry once again. Jan

Comment author: beoShaffer 18 November 2012 07:57:57PM 1 point [-]

Overall, a very good paper, both from an AI perspective and in terms of demonstrating how to apply various epistemic techniques that aren't nearly as widespread as they should be. However, I have seen a few typos and other problems. The bottom of page 64 says, "Moore's law could be taken as an ultimate example of grid:" - I think that should be "grind". Also, I liked

Care must be taken when applying this method: the point is to extract a useful verifiable prediction, not to weaken or strengthen a reviled or favoured argument. The very first stratagems in Schopenhauer's "The Art of Always Being Right" [17] are to extend and over-generalise the consequences of your opponent's argument; conversely, one should reduce and narrow down one's own arguments. There is no lack of rhetorical tricks to uphold one's own position, but if one is truly after the truth, one must simply attempt to find the most reasonable empirical version of the argument; the truth-testing will come later.

But I wish the paper had been slightly more specific about how the authors avoided this failure mode.

Comment author: mytyde 20 November 2012 08:49:34AM *  -1 points [-]

Yudkowsky recently posted something interesting on this, let me see if I can find it...