Again, I invite your feedback on this snippet from an intelligence explosion analysis that Anna Salamon and I have been working on. This section is less complete than the others; missing text is indicated with brackets: [].

_____



From digital intelligence to intelligence explosion

Humans are the first terrestrial intelligences sophisticated enough to produce a technological civilization. It seems unlikely that we are near the ceiling of possible intelligences, rather than simply being the first such intelligence that happened to evolve. Computers far outperform humans in many narrow niches, and there is reason to believe that similar large improvements over human performance are possible for general reasoning, technology design, and other tasks of interest.

Below, we discuss several advantages of digital intelligence that make an intelligence explosion likely, along with the nature and speed of this “takeoff.” The argument does not depend on every one of these factors holding; a few of them alone would likely suffice.


Advantages from mere digitality

One might think the first human-level digital intelligence would not affect the world much. After all, there are seven billion human-level intelligences right now, and no single one of them has prompted massive, sudden AI breakthroughs. A single added intelligence might seem to be a tiny portion of the total drivers of technological innovation.

But while additional humans do not suddenly lead to smarter AI, there are reasons to expect human-level digital intelligence to enable faster change. Digital intelligences can be ported across hardware platforms, and this has at least three significant consequences: speed, copyability, and goal coordination.


Speed

Axons carry spike signals at 75 meters per second or less (Kandel et al. 2000). That speed is a fixed consequence of the type of physiology we humans run on. In contrast, software minds could be ported to any available hardware, and could therefore think more rapidly as faster hardware became available. This is analogous to the way in which older video game systems can be emulated on (much faster) modern computers. 

Thus, if digital intelligence is invented that can think as fast as a human can, faster hardware would enable that same digital intelligence to think faster than a human. The speed of human thought would not be a stable resting place; it would be just one speed among many at which the digital intelligence could be run.
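To get a rough feel for the gap, consider a back-of-envelope comparison. The electronic figures below (signal propagation at about two-thirds light speed, a 2 GHz clock, a roughly 200 Hz peak neural firing rate) are illustrative assumptions of ours, not numbers from the sources cited above:

```python
# Back-of-envelope comparison of biological vs. electronic signalling.
# Electronic figures are rough illustrative assumptions.

AXON_SPEED_M_PER_S = 75.0          # upper end for axons (Kandel et al. 2000)
ELECTRONIC_SPEED_M_PER_S = 2.0e8   # signals in wires, ~2/3 light speed (assumed)
NEURON_FIRE_RATE_HZ = 200.0        # rough peak neural firing rate (assumed)
TRANSISTOR_SWITCH_RATE_HZ = 2.0e9  # a modest 2 GHz clock (assumed)

print(f"Signal speed ratio:   {ELECTRONIC_SPEED_M_PER_S / AXON_SPEED_M_PER_S:,.0f}x")
print(f"Switching rate ratio: {TRANSISTOR_SWITCH_RATE_HZ / NEURON_FIRE_RATE_HZ:,.0f}x")
```

Under these assumptions the hardware substrate is faster by a factor of millions, though of course the achievable speedup of a whole mind depends on the algorithms, not just the substrate.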


Copyability

Our colleague Steve Rayhawk likes to call digital intelligence “instant intelligence; just add hardware!” What Steve means is that while it will require extensive research to design the first digital intelligence, creating additional digital intelligences is just a matter of copying software. The population of digital minds can thus expand to fill the available hardware base, either through purchase (until the economic product of a new AI is less than the cost of the necessary computation) or through other means, for example hacking.
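The purchase condition above amounts to a simple stopping rule: keep adding copies while the marginal economic product of one more copy exceeds its hardware cost. In the sketch below, the declining demand curve and all dollar figures are illustrative assumptions; only the stopping rule itself comes from the text:

```python
# Toy model of the expansion condition: keep adding copies while the
# marginal economic product of one more copy exceeds its hardware cost.
# The demand curve and all dollar figures are illustrative assumptions.

HARDWARE_COST_PER_COPY = 10_000.0  # $/year to rent hardware for one copy (assumed)

def marginal_product(n_copies: int) -> float:
    """Assumed revenue of one more copy, declining as copies saturate the market."""
    return 1_000_000.0 / (1.0 + 0.001 * n_copies)

n = 0
while marginal_product(n) > HARDWARE_COST_PER_COPY:
    n += 1

print(f"Expansion stops at roughly {n:,} copies")
```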

Depending on the hardware demands and capabilities of the initial digital intelligences, we might move fairly rapidly from the first human-level digital intelligence to a population of digital minds that outnumbers biological humans.

Copying also allows potentially rapid shifts in the composition of the digital intelligence population, and consequently in its aggregate skills. Since a digital intelligence’s skills are stored digitally, its exact current state can be copied, including memories and acquired skills. Thus if one digital intelligence becomes, say, 10% better at earning money per dollar of rentable hardware than other digital intelligences, it could replace the others across the hardware base, for about a 10% gain in the economic productivity of those resources.
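A minimal sketch of the replacement arithmetic, with assumed numbers:

```python
# The replacement arithmetic above, with assumed numbers.

HARDWARE_BUDGET = 1_000_000.0  # $/year of rentable hardware (assumed)
OLD_PRODUCTIVITY = 1.00        # $ earned per $ of hardware, baseline (assumed)
NEW_PRODUCTIVITY = 1.10        # the copy that is 10% better

print(f"Before replacement: ${HARDWARE_BUDGET * OLD_PRODUCTIVITY:,.0f}/year")
print(f"After replacement:  ${HARDWARE_BUDGET * NEW_PRODUCTIVITY:,.0f}/year (+10%)")
```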

Digitality also opens up more parameters for variation. We can put humans through job-training programs, but we can’t do precise, replicable brain surgeries on them. Digital workers would probably be more editable than human workers are. In the case of whole brain emulation, we know that transcranial magnetic stimulation (TMS) applied to an area of the prefrontal cortex can improve working memory performance (Fregni et al. 2005). Since TMS works by temporarily decreasing or increasing the excitability of populations of neurons, it seems plausible that decreasing or increasing the “excitability” parameter of a certain population of (virtual) neurons in a digital mind could improve performance. In brain emulations, we could also experimentally modify scores of other parameters, e.g. simulated glucose levels, (virtual) fetal brain cells grafted onto particular brain modules, and rapid connections across different parts of the brain.1 A modular, transparent AI could be even more directly editable than a whole brain emulation — for example via its source code.

Copyability thus dramatically changes the payoff from job training or other forms of innovation. If a human spends his summer at a job training program, his own future productivity is slightly boosted. But now, consider a “copy clan” — a set of copies of a single digital intelligence. If a copy clan of a million identical workers allocates one copy to such training, the learned skills can be copied to the rest of the copy clan, for a return on investment roughly a million times larger.
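Here is the same payoff comparison as a sketch; the clan size comes from the text, while the training cost and per-copy gain are assumed for illustration:

```python
# Payoff from one round of job training, for a lone human vs. a copy clan.
# Clan size is from the text; cost and per-copy gain are assumed.

CLAN_SIZE = 1_000_000     # copies in the clan
TRAINING_COST = 5_000.0   # cost of training one copy (assumed)
GAIN_PER_COPY = 100.0     # value of the learned skill to one worker (assumed)

human_roi = GAIN_PER_COPY / TRAINING_COST
clan_roi = (GAIN_PER_COPY * CLAN_SIZE) / TRAINING_COST  # skill copied to every member

print(f"Single human ROI: {human_roi:.2f}x")
print(f"Copy-clan ROI:    {clan_roi:,.0f}x")
```

Whatever the assumed figures, the clan’s return on the same training investment exceeds the individual’s by exactly the clan size.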


Goal coordination

Depending on the construction of the AIs, each instance within a copy clan could either: (a) have separate goals, heedless of the (indexically distinct) goals of its “copy siblings”; or (b) have a shared external goal that all instances of the AI care about, independently of what happens to their own particular copy. From the point of view of the AIs' creators, option (b) has obvious advantages: the copy clan would not face the politics, principal-agent problems, and other goal coordination problems that limit human effectiveness (Friedman 1993). A human who suddenly makes 500 times a subsistence income cannot use this to acquire 500 times as many productive hours per day. An AI of this sort, if its tasks are parallelizable, can.
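The asymmetry in the last two sentences is simple arithmetic. In the sketch below, the subsistence cost, the assumption that one copy’s hardware rents for about one subsistence income, and the around-the-clock operation of copies are all illustrative assumptions:

```python
# Income vs. productive hours, for a human and for a goal-coordinated copy clan.
# All figures are illustrative assumptions.

SUBSISTENCE_INCOME = 20_000.0  # $/year (assumed)
INCOME_MULTIPLE = 500          # from the text
HUMAN_HOURS_PER_DAY = 8.0      # a human's productive hours (assumed)

income = INCOME_MULTIPLE * SUBSISTENCE_INCOME

# A human's extra income buys goods and services, not extra hours in the day.
human_hours = HUMAN_HOURS_PER_DAY

# A copy clan spends the same income renting hardware for more copies, assuming
# one copy's hardware rents for about one subsistence income per year.
clan_copies = income / SUBSISTENCE_INCOME
clan_hours = clan_copies * 24.0  # copies need no sleep (assumed)

print(f"Human:     {human_hours:.0f} productive hours/day")
print(f"Copy clan: {clan_hours:,.0f} productive hours/day across {clan_copies:.0f} copies")
```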

Any gains made by such a copy clan, or by a human or human organization controlling that clan, could potentially be invested in further AI development, allowing initial advantages to compound. The ease of copying skills and goals across digital media thus seems to lead to a world in which agents' intelligence, productivity levels, and goals are unstable and prone to monoculture.


Further advantages to digital intelligence

How much room is there for design improvements to the human brain? Likely, quite a bit. As noted above, AI designers could seek improved hardware or more efficient algorithms to enable increased speed. They could also search for “qualitative” improvements — analogs of the difference between humans and chimpanzees or mice that enable humans to do tasks that are likely impossible for any number of chimpanzees or mice in any amount of time, such as mastering differential equations or engineering our way to the moon. AI designers can make use of several resources that were less accessible to evolution:

Increased serial depth. Due to neurons’ slow firing speed, the human brain relies on massive parallelization and is incapable of rapidly performing any computation that requires more than about 100 sequential operations (Feldman and Ballard 1982). Perhaps there are cognitive tasks that could be performed more efficiently and precisely if the organic brain’s ability to support parallelizable pattern-matching algorithms were supplemented by and integrated with support for fast sequential processes?
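The “about 100 sequential operations” figure can be recovered from rough timing assumptions (ours, chosen to match the usual statement of Feldman and Ballard’s rule): a neural spike-and-propagate step takes on the order of 10 ms, and fast recognition tasks complete in about a second:

```python
# Recovering the ~100 serial steps figure from rough timing assumptions.

TIME_PER_NEURAL_STEP_S = 0.010  # ~10 ms per spike-and-propagate step (assumed)
TASK_TIME_S = 1.0               # fast recognition tasks take ~1 s (assumed)

max_serial_steps = TASK_TIME_S / TIME_PER_NEURAL_STEP_S
print(f"Brain's maximum serial depth: about {max_serial_steps:.0f} steps")

# A 2 GHz serial processor performs ~2e9 elementary steps in the same second,
# which is why fast sequential processing is a candidate advantage.
print(f"2 GHz processor in the same second: {2e9:,.0f} steps")
```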

Increased real-time introspective access, high-bandwidth communication, or consciously editable algorithms. We humans access and revise our cognitive processes largely by fixed and limited pathways. Perhaps digitally recording states, consciously revising a larger portion of one’s thought patterns, or sharing high-bandwidth states with other agents would increase cognitive capacity?

Increased computational resources. The human brain is small relative to the systems that could be built. Its size of approximately 85–100 billion neurons is limited not only by constraints on head size and metabolism, but also by the difficulty of maintaining integration between distant parts of the brain, given the slow speed at which impulses travel along neurons (Fox 2011). While algorithms would need to be changed in order to be usefully scaled up, one can perhaps get a rough feel for the potential impact here by noting that humans have about [number] times the brain mass of chimps [citation], and that brain mass and cognitive ability correlate positively, with a correlation coefficient of about 0.4, in both humans and rats. [cite humans study, cite rats study]

Improved rationality. Some economists model humans as Homo economicus: self-interested rational agents who do what they believe will maximize the fulfillment of their goals (Friedman 1953). Behavioral studies, in contrast, suggest that we are more like Homer Simpson (Schneider 2010): we are irrational and non-maximizing beings that lack consistent, stable goals (Stanovich 2010; Cartwright 2011). But imagine if you were an instance of Homo economicus. You could stay on that diet, spend all your time learning which practices will make you wealthy, and then engage in precisely those practices, no matter how tedious or irritating they are. Some types of digital intelligence, especially transparent AI, could be written to be vastly more rational than humans, and so accrue the benefits of rational thought and action. Indeed, the rational agent model (using Bayesian probability theory and expected utility theory) is a mature paradigm in current AI design (Russell and Norvig 2010, ch. 2).
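The rational agent model is compact enough to sketch in a few lines: the agent simply chooses the action with the highest expected utility under its probabilistic beliefs. The actions, outcomes, probabilities, and utilities below are invented for illustration:

```python
# Minimal sketch of the rational-agent model: pick the action that maximizes
# expected utility. Actions, outcomes, probabilities, and utilities are invented.

from typing import Dict

# P(outcome | action): the agent's (Bayesian) beliefs
beliefs: Dict[str, Dict[str, float]] = {
    "stick_to_diet": {"healthy": 0.8, "unhealthy": 0.2},
    "abandon_diet":  {"healthy": 0.3, "unhealthy": 0.7},
}

# U(outcome): the agent's utility function
utility = {"healthy": 10.0, "unhealthy": -5.0}

def expected_utility(action: str) -> float:
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())

best = max(beliefs, key=expected_utility)
print(f"Chosen action: {best} (expected utility {expected_utility(best):.1f})")
```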


Recursive “self”-improvement

Once a digital intelligence becomes better at AI design work than the team of programmers that brought it to that point, a positive feedback loop may ensue. Now whenever the digital intelligence improves itself, it improves the intelligence that does the improving. Thus, if mere human efforts suffice to produce digital intelligence this century, a large population of sped-up digital intelligences may be able to create a cascade of self-improvement cycles, enabling a rapid transition. Chalmers (this volume) discusses this process in some detail, so here we will make only a few additional points.
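A toy model shows why improving the improver matters. Compare a fixed-ability design team, which adds a constant increment of capability each cycle, with a design process whose per-cycle gain scales with its own current ability. The 10%-per-cycle figure is an arbitrary assumption; only the qualitative contrast between linear and compounding growth is the point:

```python
# Toy contrast between a fixed-ability improver and a self-improved improver.
# The 10% gain per cycle is an arbitrary assumption.

INCREMENT = 0.10
fixed, recursive = 1.0, 1.0

for cycle in range(1, 11):
    fixed += INCREMENT                  # human designers: constant design ability
    recursive += INCREMENT * recursive  # AI designers: improved by their own output
    print(f"cycle {cycle:2d}: fixed {fixed:.2f}x, recursive {recursive:.2f}x")
```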

The term “self,” in phrases like “recursive self-improvement” or “when the digital intelligence improves itself,” is something of a misnomer. The digital intelligence could conceivably edit its own code while it is running, but it could also create a new intelligence that runs independently. These “other” digital minds could perhaps be designed to have the same goals as the original, or to otherwise further its goals. In any case, the distinction between “self”-improvement and other AI improvement does not matter from the perspective of creating an intelligence explosion. The significant part is only that: (a) within a certain range, many digital intelligences probably can design intelligences smarter than themselves; and (b) given the possibility of shared goals, many digital intelligences will probably find greater intelligence useful (as discussed in more detail in section 4.1).

Depending on the abilities of the first digital intelligences, recursive self-improvement could occur as soon as digital intelligence arrives, or it could occur only after human design efforts, augmented by advantages from digitality, create a level of AI design competence that exceeds the summed research power of non-digital human AI designers. Once self-improvement kicks in, if not sooner, AI development need not proceed on the timescale we are used to in human technological innovation. In fact, as discussed in the next section, the range of scenarios in which takeoff isn’t fairly rapid appears to be small, although non-negligible.


[the next section under 'from digital intelligence to explosion' is very large, and not included here]


1 This third possibility is particularly interesting, in that many suspect that the slowness of cross-brain connections has been a major factor limiting the usefulness of large brains (Fox 2011).

_____

5 comments

I note that there's quite a lot of overlap between this section of the paper and my digital advantages paper - would it be too shameless to ask for a cite?

Not shameless. Telling researchers about more literature they can cite, especially in a new field with a dearth of literature, is called "welcome assistance." :)

This whole discussion needs to be preceded by a disclaimer section that explains that this isn't a list of predictions, but a list of lower-bound estimates, a form of Drexler's exploratory engineering.

rather than simply being the first such intelligence that happened to evolve.

Other species might have evolved sufficiently to have a technological civilization and merely lack appropriate circumstances/natural "infrastructure". I'm thinking of dolphins in particular. They can't use fire because they live in water, etc.

the advantages of digital intelligence that will likely produce an intelligence explosion

Better to frame it as factors a few of which would be sufficient if you are correct about them. A critic might think the argument depended on each factor.

Copying also allows potentially rapid shifts in which digital intelligences, with which skills, fill the population.

"Copying also allows potentially rapid shifts in the composition of the digital intelligence population, and consequently its aggregate skills."

such as engineering our way to the moon.

Add a mental achievement. Differential equations?

Perhaps digitally recording states, consciously revising a larger portion of one’s thought patterns, or sharing high-bandwidth states with other agents would increase capacity?

"Cognitive capacity" doesn't sound good. The first two gains are indirect gains of fungible cognitive resources by compressing things. The last isn't within just one mind. This is if I understand your meaning correctly, but I'm not confident of that.

Now when

Now whenever

Chalmers (this volume) discusses this process in some detail

This implies he is telling a story with conjunctions rather than describing how different possible paths lead to approximately the same place.

In any case, at least once self-improvement kicks in,

This is awkward because "In any case," is often followed by "at least," i.e. a phrase also meaning "at any rate."

"Once self-improvement kicks in, if not sooner, AI development need not proceed on the timescale we are used to in human technological innovation."

We could also experimentally modify scores of other whole brain emulation parameters, e.g. simulated glucose levels, (virtual) fetal brain cells grafted onto particular brain modules, and rapid connections across different parts of the brain.

This part (suddenly) assumes that we are talking about WBEs, which isn't true in other parts of the section. This way might be better:

In brain emulations, we could also experimentally modify scores of other parameters, e.g. simulated glucose levels, (virtual) fetal brain cells grafted onto particular brain modules, and rapid connections across different parts of the brain.