Note from Malo
The Singularity Institute is always on the lookout for interested and passionate individuals to contribute to our research. As Luke frequently reminds everyone, we've got 2–3 years of papers waiting to be written (see “Forthcoming and Desired Articles on AI Risk”). If you are interested in contributing, I want to hear from you! Get in touch with me at malo@intelligence.org.

We wish we could work with everyone who expresses an interest in contributing, but that isn't feasible. To provide a path to becoming a contributor, we encourage individuals to read up on the field, identify an article they think they could work on, and post a ~1000-word outline/preview to the LW community for feedback. If the community reacts positively (based on karma and comments), we'll support the potential contributor's effort to complete the paper and, if all goes well, move forward with an official research relationship (e.g., Visiting Fellow, Research Fellow, or Research Associate).


Hello,

This is my first posting here, so please forgive me if I make any missteps.

The outline draft below draws heavily on Intelligence Explosion: Evidence and Import (Muehlhauser and Salamon 2012). I will review Stuart Armstrong and Kaj Sotala's How We're Predicting AI... or Failing To (Armstrong and Sotala 2012) for additional content and research areas.

I'm not familiar with the tone and tenor of this community, so I want to be clear about feedback. This is an early draft, and as such nearly all of the content is subject to change in future edits. All constructive feedback is welcome. Subjective opinion is interesting, but unlikely to have an impact unless it opens lines of thought not previously considered.

I'm looking forward to a potentially lively exchange.

Jay

Predicting Machine Super Intelligence

Jacque Swartz

Most Certainly Not Affiliated with Singularity Institute

jaywswartz@gmail.com

Abstract

This paper examines the disciplines, domains, and dimensional aspects of Machine Super Intelligence (MSI) and considers multiple techniques with the potential to predict the appearance of MSI. Factors that can affect the speed of discovery are reviewed, and potential prediction techniques are then considered. The concept of MSI is dissected into its currently understood components, which are evaluated to indicate their respective states of maturation and the additional behaviors required for MSI. Based on the evaluation of each component, a gap analysis is conducted. The analyses are then assembled in an approximate order of difficulty, based on our current understanding of the complexity of each component. Using this ordering, a collection of indicators is constructed to identify an approximate progression of discoveries that would ultimately yield MSI. Finally, a model is constructed that can be updated over time to steadily improve the accuracy of the predicted events, followed by conclusions.

I. Introduction

Predicting the emergence of MSI may be among the most important pursuits of humanity. The distinct possibility of an MSI emerging that could harm or exterminate the human race (citation) demands that we create an early warning system. This would give us the opportunity to ensure that the MSI that emerges continues to advance human civilization (citation).

We currently appear to be at some temporal distance from witnessing the creation of MSI (multiple citations). However, factors such as a rapidly increasing number of research efforts (citation) and motivations for economic gain (citation) indicate that MSI could appear unexpectedly or even unintentionally (citation).

Some of the indicators that could serve as an early warning tool are defined in this paper. The model described at the end of the paper is a potentially viable framework for instrumentation. It should be refined and regularly updated until a more effective tool is created or MSI appears.

This paper draws heavily upon Intelligence Explosion: Evidence and Import (Muehlhauser and Salamon 2012) and How We're Predicting AI... or Failing To (Armstrong and Sotala 2012).

This paper treats MSI as equivalent to an Artificial General Intelligence (AGI) that has developed the ability to function at levels substantially beyond current human abilities. The term AGI will be used throughout the remainder of this paper.

II. Overview

In addition to the fundamental challenge of creating AGI, there are many theories as to the composition and functionality of a viable AGI. The third section explores the factors that can affect the speed of discovery in general, and individual indicators are explored for unique factors to consider. The factors identified in that section can radically change the pace of discovery.

The fourth section considers potential prediction techniques. Data points and other indicators are identified for each prediction model. The efficacy of the models is examined and developments that increase a model’s accuracy are discussed.

The high degree of complexity of AGI indicates the need to subdivide it into component parts. In the fifth section, the core components and functionality required for a potential AGI are established. Each component is then examined to determine its current state of development. The functionality required for an AGI is then estimated, and any identifiable dependencies are recorded. Finally, a gap analysis is performed on the findings to quantify the discoveries required to close the gap.

This approach does increase the likelihood of prediction error due to the conjunction fallacy, exemplified by research such as the dice selection study (Tversky and Kahneman 1983) and covered in greater detail in Eliezer Yudkowsky's work on bias (Yudkowsky 2008). Fortunately, exposure to this bias diminishes as each component matures to its respective usability point, reducing the number of unknown factors.
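To make this exposure concrete, here is a minimal sketch in Python (the probability estimates are hypothetical, chosen only for illustration): a forecast that requires every component discovery to succeed is a conjunction, so its probability is bounded above by, and typically far below, that of any single component.

    # Illustrative only: conjunctive forecasts shrink as components are added.
    # Each entry is a hypothetical probability that one required component
    # reaches its usability point within some fixed horizon.
    component_probabilities = [0.9, 0.85, 0.8, 0.75, 0.7]

    joint = 1.0
    for p in component_probabilities:
        joint *= p  # P(all components) = product, assuming independence

    print(f"Most optimistic single component: {max(component_probabilities)}")
    print(f"Joint probability of all five:    {joint:.3f}")  # about 0.321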

The sixth section examines the output of the gap analyses for additional dependencies. Then the outputs are assembled in an approximate order of difficulty, based on our current understanding of the complexity of each output. Using this ordering, combined with the dependencies, a collection of indicators with weighting factors is constructed to identify an approximate progression of discoveries that ultimately yield AGI.

Representing the indicators, dependencies, and rate factors in a model as variables provides a means, however crude, to reflect their impact when they occur.

In the seventh section, a model is constructed that uses the indicators and other inputs to estimate the occurrence of AGI. The model is examined for strengths and weaknesses, and additional enhancements are suggested for future exploration.

The eighth and final section includes conclusions and considerations for future research.

III. Rate Modifiers

This section explores the factors that can affect the speed of discovery. Individual indicators are explored for unique factors to consider. While the factors identified in this section can radically change the pace of discovery, representing them in the model as variables provides a means to reflect their impact when they occur; a minimal sketch follows the lists below.

Decelerators

    Discovery Difficulty

    Disinclination

    Lower Probability Events

       Societal Collapse
       Fraud

    ++

Accelerators

    Improved Hardware

    Better Algorithms

    Massive Datasets

    Progress in Psychology and Neuroscience

    Accelerated Science

    Collaboration

    Crossover

    Economic Pressure

    Final Sprint

    Outliers

    Existing Candidate Maturation

    ++
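As a minimal sketch of how these rate modifiers might be held as variables (all names and multiplier values below are hypothetical assumptions, not estimates from the literature), accelerators and decelerators can be treated as multipliers on a baseline discovery rate:

    # Hypothetical multipliers: >1.0 accelerates discovery, <1.0 decelerates it.
    accelerators = {"improved_hardware": 1.3, "better_algorithms": 1.2}
    decelerators = {"discovery_difficulty": 0.8, "disinclination": 0.9}

    baseline_rate = 1.0  # arbitrary units of discovery progress per year
    effective_rate = baseline_rate
    for multiplier in {**accelerators, **decelerators}.values():
        effective_rate *= multiplier

    print(f"Effective discovery rate: {effective_rate:.2f}x baseline")  # 1.12x here

When an accelerator or decelerator is observed, its multiplier is updated and the model is re-run, which is the sense in which the model reflects their impact when they occur.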

IV. Prediction Techniques

This section considers potential prediction techniques. Some techniques do not require the indicators above; most will benefit from considering some or all of them. It is very important not to lose sight of the fact that humans are inclined to inaccurate probability estimates and overconfidence (Lichtenstein et al. 1992; Yates et al. 2002). A minimal sketch of combining these techniques follows the list below.

Factors Impacting Accurate Prediction

Prediction Models

    Wisdom of Crowds

    Hardware Extrapolation

    Breakthrough Curve

    Evolutionary Extrapolation

    Machine Intelligence Improvement Curve

    ++
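One simple way to use several of these techniques together, sketched below with entirely hypothetical forecast years and weights, is a weighted ensemble in which each technique contributes an estimate:

    # Hypothetical forecast years and confidence weights, for illustration only.
    forecasts = {
        "wisdom_of_crowds":           2045,
        "hardware_extrapolation":     2040,
        "evolutionary_extrapolation": 2060,
    }
    weights = {
        "wisdom_of_crowds":           0.5,
        "hardware_extrapolation":     0.3,
        "evolutionary_extrapolation": 0.2,
    }

    estimate = sum(forecasts[t] * weights[t] for t in forecasts) / sum(weights.values())
    print(f"Weighted ensemble estimate: {estimate:.1f}")  # 2046.5 here

The weights themselves would need to be justified, which is one place the overconfidence findings cited above apply.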

V. Potential AGI Componentry

This section establishes a set of core components and functionality required for a potential AGI. Each component is then examined to determine its current state of development as well as any identifiable dependencies. An estimate of the functionality required for an AGI is then created, followed by a gap analysis to quantify the discoveries required to close the gap.

There are various existing AI implementations as well as AGI concepts currently being investigated, and each brings in unique elements. The common elements across most include decision processing, expert systems, pattern recognition, and speech/writing recognition. Each of these would include discipline-specific machine learning and search/pre-processing functionality. There also needs to be a general learning function for the addition of new disciplines.

Within each discipline there are collections of utility functions: the component technologies required to make the higher-order discipline efficient and useful. Each of the elements mentioned is an area of specialized study being pursued around the world, drawing from an even larger set of specializations. Due to this complexity, in most cases there are second-order (and deeper) specializations.
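As a rough illustration of how this componentry and the gap analysis might be captured (the maturity scores below are hypothetical placeholders on an assumed 0-10 scale, not survey results):

    # Hypothetical maturity scores: (current state, target state for AGI), 0-10.
    components = {
        "decision_processing":        (6, 9),
        "expert_systems":             (7, 8),
        "pattern_recognition":        (5, 9),
        "speech_writing_recognition": (5, 8),
    }

    # The gap is the distance each component must still travel.
    gaps = {name: target - current for name, (current, target) in components.items()}
    for name, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: gap of {gap}")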

Alternative Componentry

There are areas of research with high potential to insert new components or to substantially modify our understanding of the components described.

Specialized Componentry

Robotics and other elements.

Current State

    Decision Processing

    Expert Systems

    Pattern Recognition

    Speech/Writing Recognition

    Machine Learning

       Decision Processing
       Expert Systems
       Pattern Recognition
       Speech/Writing Recognition

    Search/Pre-Processing

       Decision Processing
       Expert Systems
       Pattern Recognition
       Speech/Writing Recognition

Target State

The behaviors required for an AGI to function with acceptable speed and accuracy are not precisely defined. The results of this section are based on a survey of definitions from the available research.

    Decision Processing

    Expert Systems

    Pattern Recognition

    Speech/Writing Recognition

    Machine Learning

       Decision Processing
       Expert Systems
       Pattern Recognition
       Speech/Writing Recognition

    Search/Pre-Processing

       Decision Processing
       Expert Systems
       Pattern Recognition
       Speech/Writing Recognition

Dependencies

Gap Analysis

VI. Indicators

This section examines the output of the gap analyses for additional dependencies. The outputs are then assembled in an approximate order of difficulty, based on our current understanding of the complexity of each output. Using this ordering, combined with the dependencies, a collection of indicators is constructed to identify an approximate progression of discoveries that would ultimately yield an AGI.
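A minimal sketch of such an indicator collection follows (the indicator names, difficulty ranks, and weights are hypothetical): observed discoveries accumulate weighted progress along the assumed ordering.

    # Hypothetical indicators, ordered by estimated difficulty (1 = easiest).
    # Each tuple: (name, difficulty_rank, weight, observed_so_far).
    indicators = [
        ("robust_speech_writing_recognition", 1, 0.2, True),
        ("cross_domain_pattern_recognition",  2, 0.3, True),
        ("general_learning_function",         3, 0.5, False),
    ]

    progress = sum(weight for _, _, weight, seen in indicators if seen)
    print(f"Weighted progress toward AGI: {progress:.1f} of 1.0")  # 0.5 here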

Additional Dependencies

Complexity Ranking

Itemized Indicators

VII. Predictive Model

In this section, a model is constructed that uses the indicators and other inputs to estimate the occurrence of AGI. The model is examined for strengths and weaknesses, and additional enhancements are suggested for future exploration.
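A deliberately crude sketch of such a model is given below (every value is a hypothetical placeholder): the remaining weighted gap from the indicators is divided by the effective discovery rate from the rate modifiers, and the calculation is re-run whenever either input changes.

    # Hypothetical inputs, meant to be updated as discoveries occur.
    weighted_progress = 0.5   # from the indicator collection above
    closure_per_year = 0.02   # assumed fraction of the gap closed per year
    rate_multiplier = 1.12    # effective rate from the rate-modifier sketch

    years_remaining = (1.0 - weighted_progress) / (closure_per_year * rate_multiplier)
    print(f"Estimated years until AGI: {years_remaining:.0f}")  # ~22 here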

The Model

Strengths & Weaknesses

Enhancements

VIII. Conclusions

Based on the data and model created above, the estimated time frame for the appearance of AGI is from x to y. As noted throughout this paper, given the complex nature of AGI and the large number of discoveries and events that must be quantified using imperfect methodologies, a precise prediction of when AGI will appear is currently impossible.

The model developed in this paper does establish a quantifiable starting point for the creation of an increasingly accurate tool that can be used to continually narrow the margin of error. It also provides a starting set of indicators that can serve as an early warning of AGI as discoveries and events occur.

Comments

Just wanted to provide some context for this post.

Jay got in touch with SI about a month ago looking to get involved with our research, with the goal of becoming a Research Fellow. I have spent the last month corresponding with him and helping him get up to speed with our research agenda. To demonstrate his research chops, Jay is working on a publication from the “Forthcoming and Desired Articles on AI Risk” list. I asked him to post a ~1000-word preview/outline as a first step in the process, so that he could get some feedback from the community and an idea of whether he's on the right track.

SI is always on the lookout for people who are willing and able to contribute to our research efforts. Working on one of our desired publications is a great way to get started. If you are interested in doing something similar, please get in touch with me!

[This comment is no longer endorsed by its author]

This comment is no longer relevant now that the article has been prefaced with my note.

From the post:

If the community reacts positively (based on karma and comments) we'll support the potential contributors' effort to complete the paper

I don't think you should put very much weight on the reaction from LW, given that much more polished papers often get low karma. E.g. both my "Responses to Catastrophic AGI Risk: A Survey" and my and Stuart's "How We're Predicting AI — or Failing to" are currently at only 11 upvotes and rather few comments. If even finished papers get that little of a reaction, I would expect that even many drafts that genuinely deserved a great reception would get little to no response.

Kaj,

Thank you. I had noticed that as well. It seems the LW group is focused on a much longer time horizon.

This part:

Jacque Swartz
Singularity Institute

is misleading. Jacque does not currently hold any affiliation with the Singularity Institute. He is not a research fellow, research associate, or visiting fellow. Jacque, please correct this error.

Update: I see this has now been fixed. Thanks!

General Artificial Intelligence (GAI)

Correct to: AGI

GSI

Using nonstandard terminology can be OK, but (1) you need to provide a very good reason and (2) this will become a major point of your paper, though not necessarily the biggest one.

India...China

Doesn't seem to be relevant.

Since the paper is basically about predicting AGI, it might be better to call it a paper about predicting AGI. The "once we have AGI, we will soon after have superintelligence" step is somewhat contentious, and it's counterproductive to introduce contentious points if you're not going to do anything with them.

Thanks for the feedback. I agree on the titling; I started with the title on the desired papers list, so I wanted some connection with that. I wasn't sure if there was some distinction I was missing, so I proceeded with this approach.

I know it is controversial to say superintelligence will appear quickly. Here again, I wanted some tie to the title. Predicting AI is a very complex problem; theorizing about anything beyond that would distract from the core of the paper.

While even more controversial, my belief is that the first AGI will be a superintelligence in its own right. An AGI will not have just one pair of eyes, but as many as it needs. It will not have just one set of ears; it will immediately be able to listen to many things at once. The most significant aspect is that an AGI will immediately be able to hold thousands of concepts in the equivalent of our short-term memory, as opposed to the typical seven or so for humans. This alone will enable it to comprehend immensely complex problems.

Clearly, we don't know how AGI will be implemented or if this type of limit can be imposed on the architecture. I believe an AGI will draw its primary power from data access and logic (i.e., the concurrent concept slots). Bounding an AGI to an approximation of human reasoning is an important step.

This is a major aspect of friendly AI because one of the likely ways to ensure a safe AI is to find a means to purposely limit the number of concurrent concept slots to 7. Refining an AGI of this power into something friendly to humans could be possible before the limit is removed, by us or it.

I just wanted to express some thoughts here. I do not intend to cover this in the paper, as it is a topic for several focused papers to explore.

There are some good ideas in this.

The paper needs focus. One possibility is the technique described in the abstract ("The concept of MSI is dissected... a model is constructed..."). Is there a specific formal technique that you are going to use?

Another possibility is a review of prediction techniques, with an attempt to apply each one to full AI, or references that do so. Sotala and Armstrong surveyed predicted dates to AI; you could survey the different techniques one could use or which have been used.

It seems that the section of the abstract that analyzes accelerated change ("Rate modifiers") could be omitted as off-topic to either of the two possibilities above. Given what appears to be the main topic, I would suggest keeping the review of the AI risk short; and not going into too much detail into specific technologies like AIXI or the Goedel machine. I am not too sure about the componentry section, given that we have no idea what components might be needed.

Joshua,

Thank you for the feedback.

I do need to increase the emphasis on the focus, which is the first possibility you mentioned. I left that out of this draft with the intent of eliciting feedback on the viability of, and interest in, the model concept.

I will use formal techniques, though I have not yet settled on which one(s). At the moment, I am leaning toward the processes around use-case development to decompose current AI models into their componentry. For the weighting and gap calculations, some statistical methods should help.

I am mulling over Bill Hibbard's 2012 AGI papers, "Avoiding Unintended AI Behaviors" and "Decision Support for Safe AI Design" http://www.ssec.wisc.edu/~billh/g/mi.html as well as some PIBEA findings, e.g., http://www.cs.umb.edu/~jxs/pub/cec11-prospect.pdf to use as a framework for the component model. The Pareto front element is particularly interesting when considered with graph theory.
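For what it's worth, here is a minimal sketch of the Pareto-front element mentioned above (two hypothetical objectives, higher is better; this is only the dominance filter, not the PIBEA algorithm itself):

    # Hypothetical two-objective scores; a point is on the Pareto front if no
    # other point is at least as good on both objectives and better on one.
    points = [(3, 5), (4, 4), (2, 6), (5, 2), (4, 5)]

    def dominated(p, q):
        """True if q dominates p."""
        return q != p and q[0] >= p[0] and q[1] >= p[1]

    front = [p for p in points if not any(dominated(p, q) for q in points)]
    print(front)  # [(2, 6), (5, 2), (4, 5)]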

I am considering how the rate modifiers can be incorporated into the predictive model. This will help to identify which events the community should look for, and how a rate modifier occurrence in one area, e.g., pattern recognition, impacts other aspects of the model. We clearly do not know all of the components, but we do know the major disciplines that will contribute. As noted, the model will be extensible to allow discoveries to be incorporated, increasing its accuracy.

The general idea is to establish a predictive model with assumed margins of error and functionality: to put a formalized "stick in the ground" from which improvements are made. If the model is maintained and enhanced with discoveries, the margin of error will continue to decline and confidence levels will increase. Such a model also provides context for research and identifies potential areas of study.

One potential use of the model is to identify areas of research that may be obviated. If a requirement is consistently satisfied through unexpected methods, it can be removed from consideration in the area where it was originally conceived. This also has the potential to provide insights into the original space.