Again, I invite your feedback on this snippet from an intelligence explosion analysis Anna Salamon and I have been working on. This section is less complete than the others; missing text is indicated with brackets: [].
_____
We do not know what it takes to build a digital intelligence. Because of this, we do not know what groundwork will be needed to understand intelligence, nor how long it may take to get there.
Worse, it’s easy to think we do know. Studies show that except for weather forecasters (Murphy and Winkler 1984), nearly all of us give inaccurate probability estimates when we try, and in particular we are overconfident in our predictions (Lichtenstein, Fischhoff, and Phillips 1982; Griffin and Tversky 1992; Yates et al. 2002). Experts, too, often do little better than chance (Tetlock 2005), and are outperformed by crude computer algorithms (Grove and Meehl 1996; Grove et al. 2000; Tetlock 2005). So if you have a gut feeling about when digital intelligence will arrive, it is probably wrong.
But uncertainty is not a “get out of prediction free” card. You either will or will not save for retirement or support AI risk mitigation. The outcomes of these choices will depend, among other things, on whether digital intelligence arrives in the near future. Should you plan as though there are 50/50 odds of reaching digital intelligence in the next 30 years? Are you 99% confident that digital intelligence won’t arrive in the next 30 years? Or is it somewhere in between?
Other than trusting one’s gut or deferring to an expert, how might one estimate the time until digital intelligence? We consider several strategies below.
Time since Dartmouth. We have now seen 60 years of work toward digital intelligence since the seminal Dartmouth conference on AI, but digital intelligence has not yet arrived. This seems, intuitively, like strong evidence that digital intelligence won’t arrive in the next minute, good evidence it won’t arrive in the next year, and significant but far from airtight evidence that it won’t arrive in the next few decades. Such intuitions can be formalized into models that, while simplistic, can form a useful starting point for estimating the time to digital intelligence.1
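For concreteness, here is a toy version of one such model, treating each year since Dartmouth as an independent trial and applying Laplace’s rule of succession. The numbers are illustrative, not considered estimates:

```python
# Toy model: treat each year since the 1956 Dartmouth conference as an
# independent trial for "digital intelligence arrives this year." Under
# Laplace's rule of succession, after n consecutive failures the
# probability of success on the next trial is 1 / (n + 2).

def p_arrival_within(n_failures, horizon_years):
    """Probability of at least one arrival in the next `horizon_years`
    trials, given `n_failures` consecutive failures so far."""
    p_none = 1.0
    for k in range(horizon_years):
        p_next = 1.0 / (n_failures + k + 2)  # success chance on trial n+k+1
        p_none *= 1.0 - p_next
    return 1.0 - p_none

# Roughly 60 failed "trials" since Dartmouth:
print(p_arrival_within(60, 1))   # next year: ~0.016
print(p_arrival_within(60, 30))  # next 30 years: ~0.33
```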
Simple hardware extrapolation. Vinge (1993) wrote: “Based on [hardware trends], I believe that the creation of greater-than-human intelligence will occur [between 2005 and 2030].” Vinge seems to base this prediction on estimates of the “raw hardware power that is present in organic brains.” In a 2003 reprint of his article, Vinge notes the insufficiency of this reasoning: even if we have hardware sufficient for AI, the software problem may remain unsolved.
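The style of calculation behind such hardware arguments can be sketched as follows; every number here (brain-equivalent compute, starting compute, doubling time) is a placeholder assumption, not a figure from Vinge:

```python
import math

# Illustrative hardware extrapolation. All numbers are placeholders:
#   brain_flops  - assumed "raw hardware power" of an organic brain
#   start_flops  - assumed compute available in the start year
#   doubling_yrs - assumed doubling time for available compute
brain_flops = 1e16
start_flops = 1e12
start_year = 2000
doubling_yrs = 1.5

# Years until available compute crosses the brain-equivalent threshold.
doublings_needed = math.log2(brain_flops / start_flops)
crossover_year = start_year + doublings_needed * doubling_yrs
print(round(crossover_year))  # ~2020 under these assumptions
```

As Vinge himself notes, a crossover year like this bounds only the hardware side of the problem.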
Extrapolating the requirements for whole brain emulation. One way to solve the software problem is to scan and emulate the human brain. Thus Ray Kurzweil (2005) extrapolates our progress in hardware, brain scanning, and our understanding of the brain to predict that (low resolution) whole brain emulation can be achieved by 2029. Many neuroscientists think this estimate is too optimistic, but the basic approach has promise.
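A sketch of this style of multi-trend extrapolation appears below. The thresholds and growth rates are illustrative placeholders; the point is only that WBE waits on the slowest prerequisite:

```python
import math

# Sketch of a Kurzweil-style extrapolation: whole brain emulation is
# ready only once hardware, scanning, and brain modeling have all
# crossed their thresholds. Every number is an illustrative placeholder.
start_year = 2000
trends = {
    # name: (level in start_year, required level, annual growth factor)
    "hardware": (1e12, 1e16, 1.6),
    "scanning": (1e3, 1e6, 1.4),
    "modeling": (1e1, 1e3, 1.3),
}

def crossover_year(current, required, growth):
    """Year an exponentially growing trend reaches its required level."""
    return start_year + math.log(required / current) / math.log(growth)

# WBE arrives when the slowest prerequisite is met.
print(round(max(crossover_year(*t) for t in trends.values())))  # ~2021
```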
Tracking progress in machine intelligence. Many people intuitively estimate the time until digital intelligence by asking what proportion of human abilities today’s software can match, and how quickly machines are catching up. However, it is not clear how to divide up the space of “human abilities,” nor how much each one matters. We also don’t know whether machine progress will be linear or will include sudden jumps. Watching a child’s progress toward learning calculus might lead one to conclude the child will not learn it until the year 3000, until suddenly the child learns it in a spurt at age 17. Still, machine progress in chess performance has been regular,2 and it may be worth checking whether a measure can be found for which both: (a) progress is smooth enough to extrapolate; and (b) when performance rises to a certain level, we can expect digital intelligence.3
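As a sketch of what (a) and (b) might look like in practice, the following fits a linear trend to machine chess ratings and solves for when the trend crosses a “sufficient” level. Both the data points and the target level are invented for illustration:

```python
# Strategy sketch: fit a smooth trend to a performance measure (a),
# then solve for when it crosses a target level (b). The chess-rating
# data below is invented for illustration, not real measurements.
years = [1985, 1990, 1995, 2000, 2005]
elo = [2300, 2450, 2600, 2750, 2900]  # hypothetical machine Elo ratings
target = 3500                         # hypothetical "sufficient" level

# Ordinary least-squares fit of elo = a * year + b, done by hand.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(elo) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, elo))
den = sum((x - mean_x) ** 2 for x in years)
a = num / den
b = mean_y - a * mean_x

print(round((target - b) / a))  # year the fitted trend hits the target: 2025
```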
Estimating progress in scientific research output. Imagine a man digging a ten-kilometer ditch. If he digs 100 meters in one day, you might predict the ditch will be finished in 100 days. But what if 20 more diggers join him, and they are all given steroids? Now the ditch might not take so long. Analogously, when predicting progress toward digital intelligence it may be useful to consider not how much progress is made per year, but how much progress is made per unit of research effort. Thus, if we expect jumps in the amount of effective research effort (for reasons given in section 2.2), we should expect analogous jumps in progress toward digital intelligence.
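A worked version of the ditch arithmetic shows how sharply a jump in effective effort compresses the remaining time:

```python
# Worked version of the ditch analogy: measure progress per unit of
# effort, then see how completion time shifts when effort jumps.
total_work = 10_000       # meters of ditch
rate_per_digger = 100     # meters per digger-day

# One digger alone: 10,000 / 100 = 100 days to finish.
print(total_work / (1 * rate_per_digger))   # 100.0

# Suppose that after day 10, twenty more diggers join and steroids
# triple each digger's output: remaining work / new effective rate.
done = 10 * rate_per_digger
new_rate = 21 * rate_per_digger * 3
print(10 + (total_work - done) / new_rate)  # ~11.4 days total
```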
Given the long history of confident false predictions within AI, and the human tendency toward overconfidence in general, it would seem misguided to be 90% confident that AI will succeed in the coming decade.4 But 90% confidence that digital intelligence will not arrive before the end of the century also seems wrong, given that (a) many seemingly difficult AI benchmarks have been reached, (b) many factors, such as more hardware and automated science, may well accelerate progress toward digital intelligence, and (c) whole brain emulation may well be a relatively straightforward engineering problem that could succeed by 2070, if not 2030. There is a significant probability that digital intelligence will arrive within a century, and additional research can improve our estimates (as we discuss in section 5).
It would be nice if the Time Since Dartmouth analysis weren't so simple. Instead of statistically independent weighted trials, maybe take some inspiration from the hope function discussion here.
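For concreteness, a minimal sketch of the hope-function alternative: place a prior over how many years of effort digital intelligence requires, condition on roughly 60 years of failure so far, and read off the updated probability. The uniform prior below is purely illustrative:

```python
# Hope-function sketch: instead of independent trials, put a prior over
# "years of effort required," condition on ~60 years of failure, and
# read off the updated probability of arrival in the next 30 years.
# The uniform-over-200-years prior is purely illustrative.
prior = {years_needed: 1 / 200 for years_needed in range(1, 201)}

elapsed = 60
# Condition on not having arrived in the first `elapsed` years.
posterior = {t: p for t, p in prior.items() if t > elapsed}
norm = sum(posterior.values())
posterior = {t: p / norm for t, p in posterior.items()}

# Updated probability of arrival within the next 30 years.
print(sum(p for t, p in posterior.items() if t <= elapsed + 30))  # ~0.21
```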