Through a series of diagrams, this article will walk through key concepts in Nick Bostrom’s Superintelligence. The book is full of heavy content, and though well written, its scope and depth can make it difficult to grasp the concepts and mentally hold them together. The motivation behind making these diagrams is not to repeat an explanation of the content, but rather to present the content in such a way that the connections become clear. Thus, this article is best read and used as a supplement to Superintelligence.

 

Note: Superintelligence is now available in the UK. The hardcover is coming out in the US on September 3. The Kindle version is already available in the US as well as the UK.


Roadmap: there are two diagrams, both presented with an accompanying description. The two diagrams are combined into one mega-diagram at the end.

 

 

 

Figure 1: Pathways to Superintelligence

 

 

Figure 1 displays the five pathways toward superintelligence that Bostrom describes in chapter 2 and returns to in chapter 14 of the text. According to Bostrom, brain-computer interfaces are unlikely to yield superintelligence. Biological cognition, i.e., the enhancement of human intelligence, may yield a weak form of superintelligence on its own. Additionally, improvements to biological cognition could feed back into driving the progress of artificial intelligence or whole brain emulation. The arrows from networks and organizations likewise indicate technologies feeding back into AI and whole brain emulation development.

 

Artificial intelligence and whole brain emulation are two pathways that can lead to fully realized superintelligence. Note that neuromorphic is listed under artificial intelligence, but an arrow connects from whole brain emulation to neuromorphic. In chapter 14, Bostrom suggests that neuromorphic is a potential outcome of incomplete or improper whole brain emulation. Synthetic AI includes all the approaches to AI that are not neuromorphic; other terms that have been used are algorithmic or de novo AI.

 

Figure 1 also includes some properties of superintelligence. In regard to its capabilities, Bostrom discusses software and hardware advantages of a superintelligence in chapter 3, when describing possible forms of superintelligence. In chapter 6, Bostrom discusses the superpowers a superintelligence may have. The term “task-specific superpowers” refers to Table 8, which contains tasks (e.g., strategizing or technology research), and corresponding skill sets (e.g., forecasting, planning, or designing) which a superintelligence may have. Capability control, discussed in chapter 9, is the limitation of a superintelligence’s abilities. It is a response to the problem of preventing undesirable outcomes. As the problem is one for human programmers to analyze and address, capability control appears in green.


In addition to what a superintelligence might do, Bostrom discusses why it would do those things, i.e., what its motives will be. There are two main theses—the orthogonality thesis and the instrumental convergence thesis—both of which are expanded upon in chapter 7. Motivation selection, found in chapter 9, is another method to avoid undesirable outcomes. Motivation selection is the loading of desirable goals and purposes into the superintelligence, which would potentially render capability control unnecessary. As motivation selection is another problem for human programmers, it also appears in green.

 

 

 

Figure 2: Outcomes of Superintelligence

 

 

Figure 2 maps the types of superintelligence to their outcomes. It also introduces terminology that goes beyond the general properties of superintelligence, and further subdivides the types. Two axes divide superintelligence. One is polarity, i.e., whether a singleton or a multipolar scenario arises. The other is the difference between friendly and unfriendly superintelligence. Polarity sits between superintelligence properties and outcomes: it depends on human actions and the design of a superintelligence, as well as on the actions of the superintelligence itself. Thus, polarity terms appear in both the superintelligence and the outcomes areas of Figure 2. Since safety profiles are a consequence of many components of superintelligence, those terms appear in the outcomes area.

 

Bostrom describes singletons in the most detail. An unfriendly singleton leads to existential risks, including scenarios which Bostrom describes in chapter 8. In contrast, a friendly superintelligence leads to acceptable outcomes. Acceptable outcomes are not envisioned in as great detail as existential risks; however, chapter 13 discusses how a superintelligence operating under coherent extrapolated volition or one of the morality models would behave. This could be seen as an illustration of what a successful attempt at friendly superintelligence would yield. A multipolar scenario of superintelligence is more difficult to predict; Bostrom puts forth various visions found in chapter 11. The one which receives the most attention is the algorithmic economy, based on Robin Hanson’s ideas.


 

 

Figure 3: A Visualization of Nick Bostrom’s Superintelligence

 

 

Finally, figures 1 and 2 are put together for the full diagram in Figure 3. As Figure 3 is an overview of the book's contents, it includes chapter numbers for parts of the diagram. This allows Figure 3 to act as a quick reference and guide readers to the right part of Superintelligence for more information.


Acknowledgements

 

Thanks to Steven Greidinger for co-creating the first versions of the diagrams, to Robby Bensinger for his insights on the final revisions, to Alex Vermeer for his suggestions on early drafts, and to Luke Muehlhauser for the time to work on this project during an internship at the Machine Intelligence Research Institute.


Moved to Main and Promoted.

Good work!

Not sure if you're planning to make further clarifications to the visualizations and the post, but one suggestion would be to introduce a new arrow (or arrows) showing that multipolar scenarios may very well resolve into a unipolar outcome after not much time (decades or centuries). This provides one major justification for the book's focus on singleton scenarios, another justification being that singleton scenarios are easier to analyze.

According to Bostrom, brain-computer interfaces are unlikely to yield superintelligence.

This strikes me as slightly surprising. From a technological standpoint, I would have thought that a hybrid of biological and machine intelligence would be likely to have the best aspects of each, by which I mean the best aspects at the time at which the BCI is created, rather than trying to posit that biological intelligence has any fundamental advantage that sufficiently advanced computers can never overcome. A fairly close analogy is how teams of a competent chessplayer and a laptop chess program can beat both the best humans and computers with far more processing power.

Admittedly I don't know much about BCI technology, but I have heard promising things about optogenetics. Having to undergo brain surgery is a problem, but the extent of this problem seems to depend upon to what extent the interface needs to penetrate into the brain rather than just overlying the surface. If bootstrapping to greater levels of intelligence required repeated surgery to install better BCIs then this might be problematic, but intelligence gains could also be realised by working on the software, or adding more hardware, or adding more people to a swarm intelligence.

Of course, in the end it would transition to a fully or mostly machine intelligence, either through offloading increasingly more cognition to the machine components until the organic brains were only a tiny fraction of the mind, or through using the increased intelligence to develop FAI/WBE. But that doesn't make BCI a dead end, so much as a transitional stage.

Finally, in the last few years, Moore's law has started to show signs of slowing, and this should cause one to update in favour of BCI coming first, as it is probably the path least dependent upon raw computing power (unless de novo AI turns out to be far more computationally efficient than the brain).

As far as social constraints go, I don't think it would be all that hard to find volunteers, and in fact there is a natural progression from treatment of blindness, mental illnesses and so forth through to transhumanism. Legal challenges are perhaps a more likely problem, but as previously mentioned, medical use will likely provide the precedent to grandfather it in.

Note I'm not saying that this is necessarily a desirable path - FAI is preferable - I'm arguing it seems at least somewhat plausible to come first. Having said that, in the event that progress on FAI is slow and other existential threats loom, BCI could perhaps be a sensible backup plan.


Here are some relevant blockquotes of Bostrom's reasoning on brain-computer interfaces, from Superintelligence chapter 2:

It is sometimes proposed that direct brain–computer interfaces, particularly implants, could enable humans to exploit the fortes of digital computing—perfect recall, speedy and accurate arithmetic calculation, and high-bandwidth data transmission—enabling the resulting hybrid system to radically outperform the unaugmented brain.64 But although the possibility of direct connections between human brains and computers has been demonstrated, it seems unlikely that such interfaces will be widely used as enhancements any time soon.65

To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain. ... One study of Parkinson patients who had received deep brain implants showed reductions in verbal fluency, selective attention, color naming, and verbal memory compared with controls. Treated subjects also reported more cognitive complaints.66 Such risks and side effects might be tolerable if the procedure is used to alleviate severe disability. But in order for healthy subjects to volunteer themselves for neurosurgery, there would have to be some very substantial enhancement of normal functionality to be gained.

Furthermore:

enhancement is likely to be far more difficult than therapy. Patients who suffer from paralysis might benefit from an implant that replaces their severed nerves or activates spinal motion pattern generators.67 Patients who are deaf or blind might benefit from artificial cochleae and retinas.68 Patients with Parkinson’s disease or chronic pain might benefit from deep brain stimulation that excites or inhibits activity in a particular area of the brain.69 What seems far more difficult to achieve is a high-bandwidth direct interaction between brain and computer to provide substantial increases in intelligence of a form that could not be more readily attained by other means. Most of the potential benefits that brain implants could provide in healthy subjects could be obtained at far less risk, expense, and inconvenience by using our regular motor and sensory organs to interact with computers located outside of our bodies. We do not need to plug a fiber optic cable into our brains in order to access the Internet.

Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.70 Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. Since this includes almost all of the brain, what would really be needed is a “whole brain prosthesis”—which is just another way of saying artificial general intelligence. Yet if one had a human-level AI, one could dispense with neurosurgery: a computer might as well have a metal casing as one of bone. So this limiting case just takes us back to the AI path, which we have already examined.

Thanks for the quotes.

To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain. ... One study of Parkinson patients who had received deep brain implants showed reductions in verbal fluency, selective attention, color naming, and verbal memory compared with controls.

I was aware that BCI would be dangerous, but I wasn't aware that current BCI, with very limited bandwidth, was already so dangerous. As I said, one could try only interfacing with the surface of the brain - the exact opposite of deep brain stimulation - which is less invasive but does massively reduce options.

Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.

Outgoing bandwidth, OTOH, is only a few bits per second. Better to pick the low-hanging fruit.

Coincidentally, I ran into a paper Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies which seems to claim to extract large amounts of information from the visual cortex via fMRI:

The decoder provides remarkable reconstructions of the viewed movies.

Although I don't know the specifics because of the paywall.

Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. Since this includes almost all of the brain, what would really be needed is a “whole brain prosthesis”—which is just another way of saying artificial general intelligence.

As I said, I think taking information out of the brain would happen long before this scenario. But in the more futuristic case of an exocortex, there would still be a period where some parts of the brain can be emulated, but others can't, and so a hybrid system would still be superior.

I noticed that you don't have a green arrow pointing from BCI to WBE or AI in your diagram. It seems like if BCIs make people smarter, that should allow them to do WBE/AI research more effectively. Thoughts?

To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain.

This is all going to change over time. (I don't know how quickly, but there is already work on trans-cranial methods that is showing promise.) Even if we can't get the bandwidth quickly enough that way, we can control infections, and electrodes will get smaller and more adaptive.

enhancement is likely to be far more difficult than therapy.

Admittedly, therapy will come first. That also means that therapy will drive development of techniques that will also be helpful for enhancement. The boundary between the two is blurry, and therapies that shade into enhancement will definitely be developed before pure enhancement, and will be easier to sell to end users. For example, for some people, treatment of ADHD-spectrum disorders will definitely be therapeutic, while for others it will be seen as an attractive enhancement.

Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.70 Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded.

The visual pathway is impressive, but it's very limited in the kinds of information it transmits. It's a poor way of encoding bulk text, for instance. Even questions and answers can be sent far more densely with a much narrower channel. A tool like Google Now that tries to anticipate areas of interest and pre-fetch data before questions arise to consciousness could provide a valuable backchannel, and it wouldn't need near the bandwidth, so ought to be doable with non-invasive trans-cranial techniques.


Let's not forget that this is fundamentally an economic question and not just a technological one. "The vast majority of R&D has been conducted by private industry, which performed 70.5 percent ($282.4 billion) of all R&D in 2009." -http://bit.ly/1meroFB (a great study of R&D since WW2). It's true that any of the channels towards strong AI would have abundant applications to sustain them in the marketplace, but BCI is special because it can ride the wave of virtualization technologies that humans are virtually guaranteed to adopt (see what I did there :). I'm talking about fully immersive virtual reality. The applications for military, business, educational training and entertainment of a high efficacy BCI are truly awe inspiring and could create a substantial economic engine.

And then there are the research benefits. You've already put BCI on the spectrum of interfacing technologies which arguably started with the printing press, but BCI could actually be conceived as the upper limit of this spectrum. As high-bandwidth BCI is approached, a concurrent task is pre-processing information to improve signal; expert systems are one way of achieving this. The dawn of "Big Data" is spurring more intensive machine learning research, and companies like Ayasdi are figuring out techniques like topological data analysis to not only extract meaning from high-dimensional data sets, but to render them visually intuitive - this is where the crux of BMI lies.

Imagine full virtual realities in which all of the sensory data being fed into your brain is actually real-world data which has been algorithmically pre-processed to represent some real-world problem. For example, novel models could be extracted in real time from a physicist's brain as she thinks of them (even before awareness). These models would be immediately simulated all around her, projected through time, and compared to previous models. It is even possible that the abstract symbology of mathematics and language could be made obsolete, though I doubt it.

Betting on such a scenario requires no real paradigm shift, only a continuation of current trends. Thus I am in favor of the "BCI as a transitional technology" hypothesis.


We have already entered the transitional phase of BCI via the keyboard and mouse, and now touchscreen.

I'm not just being a smartass. The momentum is on BCI's side; it's not hard to imagine an externally wearable device that you could query with a thought, which would then return an answer to you of a higher quality than the best search or question answering today. Tightening information and feedback loops provides a large cognitive boost; surgical methods would just be a bonus.

True - in fact, this has been going on since the invention of the printing press. But I think we've exhausted all the low-hanging fruit here, in that we already have access to all the public-domain knowledge of humanity at our fingertips, including crude automatic translations of other languages and tools like Siri or Wolfram Alpha.

But it's not easily usable, and really it's only for general-domain knowledge or certain types of broadly available statistics. Consider the difference between having to search on a given topic and having a subject-matter expert on that topic on the phone (especially for fairly academic or locally specific topics that have poorer search results). That's a gap yet to be bridged just by conventional search technology.

Ahh, you're talking about expert systems. I agree that this does hold a lot of potential - in fact, on a related tangent, I've been spending a lot of time coding some machine learning algorithms, and I can safely say that in their target domain not only are these algos a lot better at inference than I am, but given certain shortcomings that I have not (yet) been able to tackle, the combination of myself and the algos is significantly better than either of us in isolation.

So in a way, I'm already a cyborg, and in this specific case I don't think a simple BCI would improve matters much. A full coding cortex, OTOH...

"Expert systems" suggests a particular set of ideas and functions, and brings to mind software made in the 1980s that often failed to live up to expectations. I do mean something similar to that, admittedly, but bringing in the best design and information retrieval ideas developed in the 30 years since then.

And yes, when predictions are being made, combining different predictors almost always yields superior results. Another natural "cyborg" area.
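The claim that combining predictors almost always helps can be checked with a toy simulation (purely illustrative, not from the book or this thread): when several predictors are unbiased but carry independent noise, their average has smaller expected error than any single one of them.

```python
import random

random.seed(0)

TRUTH = 10.0        # the quantity all predictors are estimating
N_PREDICTORS = 5
TRIALS = 10_000

single_err = 0.0
ensemble_err = 0.0
for _ in range(TRIALS):
    # Each predictor is unbiased but carries its own independent Gaussian noise.
    preds = [TRUTH + random.gauss(0, 1.0) for _ in range(N_PREDICTORS)]
    single_err += abs(preds[0] - TRUTH)
    ensemble_err += abs(sum(preds) / N_PREDICTORS - TRUTH)

single_err /= TRIALS
ensemble_err /= TRIALS
print(f"single predictor mean error:  {single_err:.3f}")
print(f"averaged ensemble mean error: {ensemble_err:.3f}")
```

Averaging n independent noise terms shrinks their standard deviation by a factor of roughly 1/sqrt(n), so the ensemble's mean error comes out well under half of the single predictor's here. The caveat, of course, is independence: predictors that share the same blind spots gain much less from being combined.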


From a technological standpoint, I would have thought that a hybrid of biological and machine intelligence would be likely to have the best aspects of each

Unfortunately the 'biological' part is not too great at recursive self improvement (at the fundamental level). It's a mess. If we merely wanted to create cyborgs with cognitive advantages then this strategy is a no-brainer (except literally). If we are trying to create a superintelligence then the recursive self improvement feature is more or less obligatory.

Indeed, the mechanical component is far better at recursive self improvement, which is why I wrote:

Of course, in the end it would transition to a fully or mostly machine intelligence, either through offloading increasingly more cognition to the machine components until the organic brains were only a tiny fraction of the mind, or through using the increased intelligence to develop FAI/WBE.

It also occurs to me that for a 'hive mind', using the enhanced intelligence to acquire the resources to connect more people to the hive mind counts as quantitative, if not qualitative, biological self-improvement.

A fairly close analogy is how teams of a competent chessplayer and a laptop chess program can beat both the best humans and computers with far more processing power.

The question is whether that team would be more efficient if you gave them a high-functioning BCI. I'm not sure that's true.

I'd guess it'd make very little difference in ‘regular’ chess but it would help somewhat in bullet chess.

I read somewhere that Kasparov was considering three moves per second while Deep Blue considered billions. If you consider a move and it takes a few seconds to enter it into a computer, as opposed to having it read from your brain, analysed, and a preliminary evaluation (enough to check that there are no obvious flaws) returned to your brain within milliseconds, then this seems like a several-fold speedup. True, it's a quantitative, not qualitative, speedup, but then this only requires a BCI capable of transmitting thoughts consisting of a few bytes.

If you're contemplating picking the book up, do, it's really excellent. Conceptually very dense but worth taking it nice and slowly.

With what software was this done?


I used ClickCharts to make the diagrams.


So, basically, how are the interventions going so far? Are we winning or losing?