By “disruptions to scientific progress” we have in mind “external” disruptions like catastrophe or a global totalitarianism that prevents the further progress of science (Caplan, 2008). We do not mean to include, for example, Horgan’s (1997) hypothesis that scientific progress may soon stop because there will be nothing left to discover that can be discovered, which we find unlikely.
This sounds strange, as it seems to suggest that you find "global totalitarianism that prevents the further progress of science" more likely than "that scientific progress may soon stop because there will be nothing left to discover", both of which seem extremely improbable and thus hard to compare.
Maybe cite a deadly engineered pandemic as a better example with a short inferential distance (or even a civilization-destroying nuclear war, which seems unlikely, but more plausible than air-tight totalitarianism).
This sounds strange, as it seems to suggest that you find "global totalitarianism that prevents the further progress of science" more likely than "that scientific progress may soon stop because there will be nothing left to discover", both of which seem extremely improbable and thus hard to compare.
I don't know that it's their improbability that makes them hard to compare so much as how far from "today" either event may be.
Legg (2008) argues that many definitions of intelligence converge on this idea. We mean to endorse this informal definition, not Legg’s attempt to formalize intelligence in a later section of his manuscript.
Curious: is your lack of endorsement for Legg's formalization because you don't think that most readers would accept it, or because you find it flawed? (I always thought that his formalization was a pretty good one, and would like to hear about serious flaws if you think that such exist.)
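(For reference, and as best I recall it, the formalization in question is Legg and Hutter's "universal intelligence" measure, which scores a policy by its expected reward across all computable environments, weighted by simplicity:

```latex
% Legg & Hutter's universal intelligence measure (as I recall it):
% E is the class of computable, reward-bounded environments, K(\mu) is the
% Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the expected
% cumulative reward of policy \pi in \mu.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

The "informal definition" endorsed in the footnote is the verbal version of this idea; the formula above is the formalization that the footnote declines to endorse.)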
You can substitute "utility" for "reward", if you prefer. Reinforcement learning is a fairly general framework, except for its insistence on a scalar reward signal. If you talk to RL folk about the need for multiple reward signals, they say that sticking that information in the sensory channels is mathematically equivalent - which is kinda true.
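To make the "stick it in the sensory channels" move concrete, here is a minimal sketch; the environment, the two reward components, and the weighting are all made-up illustrations, not anything from the draft:

```python
# A toy environment that "wants" to emit two reward signals, plus a wrapper
# that folds the extra signal into the observation and exposes only a scalar
# reward, which is the standard RL interface.

import random

class TwoRewardEnv:
    """Toy environment emitting two reward components per step."""
    def step(self, action):
        task_reward = 1.0 if action == 1 else 0.0      # progress on the task
        safety_reward = -1.0 if action == 2 else 0.0   # a separate penalty channel
        observation = random.random()                  # some sensory reading
        return observation, (task_reward, safety_reward)

class ScalarRewardWrapper:
    """Scalar-reward view: the second signal is appended to the observation
    so the agent can still condition on it, while the reward stays scalar."""
    def __init__(self, env, weights=(1.0, 1.0)):  # weighting is an arbitrary choice
        self.env, self.weights = env, weights
    def step(self, action):
        obs, (r1, r2) = self.env.step(action)
        scalar_reward = self.weights[0] * r1 + self.weights[1] * r2
        return (obs, r2), scalar_reward   # r2 now lives in the "sensory channel"

env = ScalarRewardWrapper(TwoRewardEnv())
print(env.step(1))
```

Whether the weighted collapse into a single scalar loses anything important is, of course, the contentious part.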
A digital intelligence need not be sentient, though, so long as it has a human-level capacity to achieve goals in a wide variety of environments.
This feels like presupposing that the idea of "being sentient" makes sense in the context of this discussion, which is a can of worms that I think shouldn't be opened. Better to set the distinction aside as irrelevant (which it is, for this purpose) if it's mentioned at all.
"Digital intelligence" seems like an odd choice of terms. Nothing in what you are talking about needs to be digital per se by any of the usual meanings of digital. It would certainly be strange if humans made such an object that wasn't digital but nothing in the definition requires it to be digital.
What about:
digital intelligence has certain advantages (e.g. copyability)
No degradation with iterative copying is an advantage digital media is often thought to have over analog media. What I think they are trying to convey is that perfect reproduction is possible, and that this is a large advantage.
Nothing in what you are talking about needs to be digital per se by any of the usual meanings of digital.
Even in the sense of "running on a digital computer"?
(Historical note: when Alan Turing proposed the Turing Test thought experiment to answer the question "Can machines think?", he defined "machine" to mean what he called a "digital computer", i.e. an implementation of a Turing machine.)
There are many types of digital intelligence. To name just four:
Readers might like to know what the others are and why you chose those four.
You might want to spell out what "digital intelligence" means earlier in the piece. E.g., "by digital intelligence we mean something that, like humans, can solve a wide range of problems (not just in one narrow domain like Watson), and that, unlike humans, can be run on a computer as software". In fact, "software intelligence" might be a more transparent term.
This paragraph:
To count as a "digital intelligence," an artificial agent must have at least a human-level general capacity[3] to achieve its goals in a wide range of environments, including novel ones.
threw me off at first because the word "artificial" suggested it meant artificial intelligence in the sense of footnote 2.
The split into transparent and opaque as top-level "kinds of digital intelligence" seems unmotivated at this point, as if you said, "there are three kinds of higher animal: mammals, birds with light feathers, and birds with dark feathers".
To count as a "digital intelligence," an artificial agent must have at least a human-level general capacity to achieve its goals in a wide range of environments, including novel ones.
Yuck. There's nothing special about "human-level" that merits the use of such terminology. IMHO, "digital intelligence" should mean what it says, and no more.
We need not understand the detailed operation of a brain to reproduce it functionally on a computing substrate.
We don't necessarily need to understand the detailed operation of a brain to reproduce it functionally on a computing substrate.
We do not mean to include, for example, Horgan’s (1997) hypothesis that scientific progress may soon stop because there will be nothing left to discover that can be discovered, which we find unlikely.
What? Regardless of how likely you think it is, if it did happen, would it be a "disruption"? Would it be a circumstance preventing AI from developing within a century?
above the human level (a human brain running at 1000 times its normal speed).
Arguably not. At minimum, emphasize that it would have to be a very smart human.
An opaque AI is not transparent to its creators.
Cue Brooks's classic "Intelligence without representation".
digital intelligence has certain advantages (e.g. copyability)
It occurs to me that this isn't necessarily as obvious as it appears. With the advent of finely-granular bioprinting, it may very well turn out (depending on your priors) that humanity will achieve the first printed human brain before the first human-equivalent program is compiled for a binary digital computer.
I also would not be at all surprised if, at some point between "here" and "there", we see something akin to cloned rat brains (printed or otherwise grown) that are augmented through direct mechanical/"cybernetic" interfaces in selectively tailored fashions. After all, if we can train rats to puzzle their way through mazes, then, given sufficient external memory and narrow AI to reduce the complexity of a given function to something the rat can handle, a rat's level of general intelligence can achieve significant things.
(If "cyber-rat AGI" seems insufficient to being useful, consider the degrees of separation between a partially-uplifted horse and a janitor.)
It occurs to me that this isn't necessarily as obvious as it appears. With the advent of finely-granular bioprinting, it may very well turn out (depending on your priors) that humanity will achieve the first printed human brain before the first human-equivalent program is compiled for a binary digital computer.
The notion that you could scan in a living brain well enough to bioprint it, but not well enough to run it on a computer, seems far-fetched.
The notion that you could scan in a living brain well enough to bioprint it, but not well enough to run it on a computer, seems far-fetched.
Depends on a number of variables.

First, it might be that printing is easier than emulating in terms of resources used for similar structures. Native implementations certainly run "faster" than virtualizations even when discussing operating systems (though that difference is getting smaller every day) -- but even so, the current track for operational "human-emulation" AGI puts us decades away from having massive, power-gobbling supercomputers that can successfully emulate a human mind at anything near human speeds of cognition.

Second, if all that's being printed is the functionality but not the specific structures -- that is, the 'blank anatomy' -- which can then be trained through more "narrow" systems into a specific goal-state, then, while this wouldn't be a blanket "I can print up as many Logos01's as you've got bullets to shoot at him", it would certainly be a highly useful thing to be able to do. (Especially in the case of 'uplifted' animals, where moral quandaries regarding personhood are lesser; we already use work-animals and food-animals. This is just an extrapolation of that same process.)

Third, my assertion was "printed human brain", not "bio-printed duplicate of a person" -- and none of this even begins to touch on the problem of scanning a living brain.
I will say that if we can non-destructively scan a living brain with enough fidelity to capture the person whose neuronal activity resides within it, we will likely also be able to control bio-printing well enough to produce viable biological replications. The interesting question here seems to be: "Where will emulation efforts be at that point in time?"
I could easily see something akin to a layer-by-layer printing process that included bioscaffolding and temporary synaptic substitutes being integrated with a bioprinting process sometime within thirty years. (Though I strongly doubt it'll be in widespread use for humans, and it almost definitely won't be used to replicate living humans.) But I just don't have sufficient faith in the brute-forcing ("emulation") approach being capable of producing human-equivalent AIs within the same window, in a space and power budget equivalent to that of a biological brain.
Perhaps I'm too jaded; perhaps the coming diamondoid-substrate electronics revolution (hydrogen-doped graphene in labs has shown massive computing potential) will truly change the game. But right now, if I had to put money on which 'strategy' would produce "intelligence on demand" sooner, I'd say the bio-printing approach.
Especially if it is hybridized a la animats -- since that's already something we're doing in laboratories.
Second, if all that's being printed is the functionality but not the specific structures -- that is, the 'blank anatomy' -- which can then be trained through more "narrow" systems into a specific goal-state... [...] Third, my assertion was "printed human brain", not "bio-printed duplicate of a person"
Okay, I'll grant that it might be used to create rough blank anatomies. But in that case, I doubt that the end result would be all that much different from that of a newborn's brain. Maybe you could put some of the coarse-level structure already in place so that the childhood would go faster, but extrapolating the consequences of something like that would require either lots of trial and error, or hugely advanced neuroscience, far better than what would be required for uploading.
I expect that the legal hurdles would also be quite immense. For uploading, the approaches that currently look realistic are either preserving and destructively scanning the brain of a recently-deceased person, or getting an exocortex and gradually having your mind transition over. Both can be done with the consent of the person in question. But if you're printing a new brain, that's an experimental process creating a new kind of mind that might very well end up insane or mentally disabled. I don't think that any ethics committee would approve that. Seeing that even human reproductive cloning, which is far less freaky than this, has already been banned by e.g. the European Union and several US states, I expect that this would pretty quickly end up legislated into non-existence.
even so, the current track for operational "human-emulation" AGI puts us decades away from having massive, power-gobbling supercomputers [...] I could easily see something akin to a layer-by-layer printing process that included bioscaffolding and temporary synaptic substitutes being integrated with a bioprinting process sometime within thirty years.
Data points:
Thirty years from now would be 2041. The people participating in the whole brain emulation roadmap workshop thought that the necessary level of abstraction needed to emulate a human brain in software would be somewhere between their levels 4 and 6 (see page 13). The same roadmap estimates (on page 80) that for people willing to purchase a $1 million supercomputer, the necessary computing power would become available in 2019 (level 4), 2033 (level 5), or 2044 (level 6). (Presuming that current hardware trends keep up.)
You might very well be right that there still won't be enough computing power in thirty years to run emulations - at least not very many of them. An exocortex approach also requires that a considerable fraction of the necessary computing power be carried inside a human body. It might very well take up to 2100 or so before that level of computing power is available.
On the other hand, it could go the other way around. All extrapolations to such a long time away are basically wild guesses anyway.
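For what it's worth, dates like these are basically just compounding-trend arithmetic. Here is a rough sketch of the calculation; the FLOPS requirement, price-performance, and doubling time below are illustrative assumptions, not the roadmap's own figures:

```python
# Back-of-the-envelope: given a compute requirement, a current price-performance
# level, a budget, and an assumed doubling time, when does the budget suffice?
# All numbers here are made up for illustration.

import math

def year_available(required_flops, flops_per_dollar_now, budget_dollars,
                   doubling_time_years, now=2011):
    """Year at which `budget_dollars` buys `required_flops`, assuming
    price-performance keeps doubling every `doubling_time_years`."""
    affordable_now = flops_per_dollar_now * budget_dollars
    if affordable_now >= required_flops:
        return now
    doublings_needed = math.log2(required_flops / affordable_now)
    return now + doublings_needed * doubling_time_years

# Hypothetical inputs: 10^22 FLOPS needed, 10^9 FLOPS per dollar today,
# a $1 million budget, price-performance doubling every 1.5 years.
print(round(year_available(1e22, 1e9, 1e6, 1.5)))  # -> roughly mid-2040s
```

Small changes to the doubling time or to the required level of detail shift the answer by decades, which is presumably why the level 4/5/6 estimates span 25 years.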
The same roadmap estimates (on page 80) that for people willing to purchase a $1 million supercomputer, the necessary computing power would become available in 2019 (level 4), 2033 (level 5), or 2044 (level 6). (Presuming that current hardware trends keep up.)
I cannot help but note that if these predictions are meant to represent "real-time" human-thought-speed equivalence... then I find them to be somewhat... optimistic. The Blue Brain project's neural emulation is ... what, 10% the speed of a human's neurons? I recall the 'cat brain' fiasco had it at 1/83rd the equivalent processing speed.
-- side note: In reading your responses it seems to me that you are somewhat selectively focused on human minds/brains. Why is that? Do you consider the notion of 'uplifted' bioprinted animals unworthy of discussion? It seems the far more interesting topic to me. (For example, in laboratories we have already "emulated" memories and recall capabilities for rats. In ten years' time I have a strong suspicion that it should be 'feasible' to develop smaller animals on this order that are integrated with devices meant to augment their memory capabilities and direct their cognition towards specific tasks. These could then be used as superagents governing 'narrow' AI functionality -- a sort of hybrid between today's commercial AI offerings (think "Siri") and truly general artificial intelligence.)
I could imagine, furthermore, scenarios where emulated lower-mammal brains (or birds -- they seem to do more with fewer neurons already, given the level of intelligence they evince despite the absence of a cerebral cortex) are hooked into task-specific equivalents of iOS's 'Siri' or Babelfish or Watson. While not sufficient for human uploads, it still occurs to me that an emulated parakeet hooked up to Watson would make a rather useful receptionist.
I don't doubt that animals could be used the way you describe, but they sound to me like they'd be just another form of narrow AI. Yes, possibly much more advanced and useful than what we have today, but then the "conventional" narrow AI at the time when we have such technology will also be much more advanced and useful. If we're still talking about the 30 year timeframe, I'm not sure if there's a reason to presume that animal brains can still do things in 30 years that computers can't.
Remember that emulation involves replicating many fine details instead of abstracting them away - if we just want to make a computer that's functionally equivalent, the demands on processing power are much lower. We'd have the hardware for that already; we just need to figure out the software. And we're already starting to have e.g. neural prostheses for hippocampal and cerebellum function in rats, so the idea of the software being on the needed level in 30 years doesn't seem like that much of a stretch.
Remember that emulation involves replicating many fine details instead of abstracting them away - if we just want to make a computer that's functionally equivalent,
While thirty years is in fact a long time away, I am not at all confident that we will be able to emulate cognition in a non-neural environment within that window. (The early steel crafters could make high-quality steel quite reliably... by just throwing more raw resources at the problem.)
What I'm saying is that the approach of emulating function rather than anatomy has been around since the early days of Minsky and is responsible for sixty years' worth of "AI is twenty years away" predictions. I admit the chances are high that this sentiment is biasing me in favor of the animat approach being successful earlier than full-consciousness emulation.
I don't doubt that animals could be used the way you describe, but they sound to me like they'd be just another form of narrow AI.
I tend to follow the notion of the "Society of Mind": consciousness as a phenomenon that arises from secondary agents within the brain. (Note: this is not a claim that consciousness is an 'illusion', but rather that there are steps of intermediate explanation between consciousness and brain anatomy, much as there are steps of intermediate explanation between atoms and bacteria.)
While horses and parrots might be only weakly generally intelligent, they are generally intelligent; they possess a non-zero "g" factor. Exploiting this characteristic by integrating various narrow AIs with it would retain that g while providing socioeconomic utility. And that's an important point: narrow AI is not merely poor at differentiating which inputs are applicable to it from which are not -- it is incapable of discerning when its algorithms are useful. A weak possession of g in combination with powerful narrow AIs would provide a very powerful intermediary step between "here" and "there" (where "there" is fully synthetic AGI in which g is mechanically understood).
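A (very) toy sketch of that division of labor, with everything made up for illustration: two narrow modules that are competent in their domains but cannot tell when they apply, and a weak "general" controller whose only contribution is deciding which one to invoke:

```python
# Narrow modules: competent within a domain, but with no notion of when
# they are applicable. The routing judgment is left to the controller.

def translate(text):        # stand-in for a narrow translation system
    return f"<translation of: {text}>"

def answer_question(text):  # stand-in for a Watson-like QA system
    return f"<answer to: {text}>"

NARROW_TOOLS = {"translate": translate, "question": answer_question}

def weak_general_controller(task):
    """The weakly general part: a crude judgment about *which* tool applies.
    This is the capability the narrow tools themselves lack."""
    if task.strip().endswith("?"):
        return NARROW_TOOLS["question"](task)
    return NARROW_TOOLS["translate"](task)

print(weak_general_controller("please translate this memo"))
print(weak_general_controller("What is the capital of France?"))
```

The routing judgment is the bit being attributed to a weak g; everything else is today's narrow AI.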
Side note:
And we're already starting to have e.g. neural prostheses for hippocampal and cerebellum function in rats,
-- I'm confused as to how we could be using the same fact to support opposing conclusions.
Again, I invite your feedback on this snippet from an intelligence explosion analysis Anna Salamon and I have been working on.
_____
From here to digital intelligence
Our first step is to survey the evidence suggesting that, barring global catastrophe and other disruptions to scientific progress,[1] there is a significant probability we will see the creation of digital intelligence[2] within a century.
Why focus on digital intelligence instead of, say, the cognitive enhancement of biological humans? As we discuss in a later section, digital intelligence has certain advantages (e.g. copyability) that make it likely to lead to intelligence explosion.
Below, we discuss the different types of digital intelligence, what kinds of progress are likely to push us closer to digital intelligence, and how to estimate the time at which digital intelligence will arrive.
Types of digital intelligence
To count as a "digital intelligence," an artificial agent must have at least a human-level general capacity[3] to achieve its goals in a wide range of environments, including novel ones.[4]
IBM's Jeopardy!-playing Watson computer is not a digital intelligence in this sense because it can only solve a narrow problem. Imagine instead a machine that can invent new technologies, manipulate humans with acquired social skills, and otherwise learn to navigate new environments on the fly. A digital intelligence need not be sentient, though, so long as it has a human-level capacity to achieve goals in a wide variety of environments.
There are many types of digital intelligence. To name just four:
Notes for this snippet
1 By “disruptions to scientific progress” we have in mind “external” disruptions like catastrophe or a global totalitarianism that prevents the further progress of science (Caplan, 2008). We do not mean to include, for example, Horgan’s (1997) hypothesis that scientific progress may soon stop because there will be nothing left to discover that can be discovered, which we find unlikely.
2 We introduce the term “digital intelligence” because we want a new term that refers to both human-level AI and whole brain emulations, and we don’t wish to expand the meaning of the common term "AI."
3 The notion of "human-level intelligence" is fuzzy, but nevertheless we can identify clear examples of intelligences below the human level (rhesus monkeys) and above the human level (a human brain running at 1000 times its normal speed). A human-level intelligence is any intelligent system not clearly below or above the human level.
4 Legg (2008) argues that many definitions of intelligence converge on this idea. We mean to endorse this informal definition, not Legg’s attempt to formalize intelligence in a later section of his manuscript.
5 Examples include many of today’s reinforcement learners (Sutton and Barto 1998), and also many abstract models such as AIXI (Hutter 2004), Gödel machines (Schmidhuber 2007), and Dewey’s (2011) “implemented agents.”
References for this snippet