There are two things in the past that may be called super-intelligences, if we consider the level of the tasks they solved. Studying them is useful when we are considering the creation of our own AI.

The first one is biological evolution, which managed to give birth to such a sophisticated thing as man, with its powerful mind and natural languages. The second one is all of human science when considered as a single process, a single hive mind capable of solving such complex problems as sending man to the Moon.

What can we conclude about future computer super-intelligence from studying the available ones?

Goal system. Both super-intelligences are purposeless. They don't have any final goal that directs their course of development; instead they solve many local tasks in order to survive in the moment. This is an amazing fact, of course.

They also lack a central regulating authority. Of course, the goal of evolution is survival at any given moment, but this is a rather technical goal, needed only for the realization of the evolutionary mechanism.

Both complete a great number of tasks, but no unitary final goal exists. It is just like a person over the course of their life: values and tasks change, but the brain remains.

Consciousness. Evolution lacks it, science has it, but to all appearances, it is of little significance.

That is, there is no center to either of them, neither a perception center nor a purpose center. At the same time, all tasks get completed. The subconscious part of the human brain works the same way.

Master algorithm. Both super-intelligences are based on the principle: collaboration of numerous smaller intelligences plus natural selection.

Evolution is impossible without billions of living creatures testing various gene combinations. Each of them solves its own egoistic tasks and does not care about any global purpose. For example, few people think of choosing the best marriage partner as a tool of species evolution (assuming that sexual selection is real). Interestingly, the human brain has the same organization: it consists of billions of neurons, but no individual neuron sees its global task.

Roughly, there have been several million scientists throughout history. Most of them have also been solving unrelated problems, while the least refutable theories were selected (via social mechanisms).

Safety. Dangerous, but not hostile.

Evolution may experience ecological crises; science created the atomic bomb. Both contain hostile agents which are not themselves super-intelligent (e.g. a tiger, a nation state).

Within an intelligent environment, however, a dangerous agent may appear which is stronger than the environment and will "eat it up". Such a takeover is difficult to initiate: the transition from evolution to science was exactly this kind of hard-to-initiate event from evolution's point of view (if it had one).

How to create our super-intelligence. Assume we agree that a super-intelligence is an environment containing multiple agents with differing purposes.

So we could create an "aquarium" and put a million differing agents into it. At the top, however, we set an agent that casts tasks into it and then retrieves the answers.

Hardware requirements are currently very high: we would have to simulate millions of human-level agents. A computational environment of about 10^20 flops is required to simulate a million brains. This is roughly comparable to the total power of the Internet. It could be implemented as a distributed network, where individual agents are owned by individual human programmers and solve different tasks – something like SETI@home or the Bitcoin network.
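As a rough sanity check of that figure, here is a minimal sketch (assuming a low-end estimate of about 10^14 flops per human brain; published estimates go up to 10^16 and beyond):

```python
# Rough sanity check of the hardware estimate above (assumptions, not measurements).
FLOPS_PER_BRAIN = 1e14   # assumed low-end estimate of human brain compute
NUM_AGENTS = 1_000_000   # one million human-level agents

total_flops = FLOPS_PER_BRAIN * NUM_AGENTS
print(f"Required compute: {total_flops:.0e} flops")  # -> 1e+20 flops
```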

Everyone can cast a task into the network, but provides a part of their own resources in return. 

 

Speed of development of superintelligent environment

Hyperbolic law. The super-intelligent environment develops hyperbolically. Korotayev shows that the human population grows according to the law N = 1/T, where T is the time remaining until the singularity (von Foerster's law, which has its singularity at 2026), and that this law is a solution of the following differential equation:

dN/dt = N*N

A solution and a more detailed explanation of the equation can be found in this article by Korotayev (the article is in Russian; see also his English book, p. 23). Notably, the growth rate depends on the second power of the population size. The second power is derived as follows: one N means that a bigger population has more descendants; the second N means that a bigger population contains more inventors, who generate growth in technical progress and resources.
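A minimal numeric sketch of this solution (taking the singularity date t0 = 2026 from von Foerster's fit and absorbing the proportionality constant into the units of N):

```python
# Hyperbolic law dN/dt = N^2 has the solution N(t) = 1/(t0 - t),
# which blows up in finite time as t approaches t0.
t0 = 2026.0  # singularity year from von Foerster's fit (see above)

def N(t):
    """Hyperbolic growth: N = 1/(t0 - t), valid for t < t0."""
    return 1.0 / (t0 - t)

for year in (1900, 1970, 2000, 2020, 2025):
    print(year, round(N(year), 3))

# Note: halving the remaining time (t0 - t) doubles N, so the curve looks
# roughly exponential over any short window but diverges in finite time.
```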

Evolution and technological progress are also known to develop hyperbolically (see below for how this connects with the exponential nature of Moore's law; an exact account of hyperbolic acceleration throughout history may be found in Panov's article "Scaling law of the biological evolution and the hypothesis of the self-consistent Galaxy origin of life"). The expected singularity will occur in the 21st century, and now we know why: evolution and technological progress are both controlled by the same development law of the super-intelligent environment. This law states that the intelligence of an intelligent environment depends on the number of nodes and on the intelligence of each node. This is of course a very rough estimation, as we should also include the speed of transactions.

However, Korotayev gives an equation for population size only, while it is actually also applicable to evolution – the more individuals there are, the more often important and interesting mutations occur – and to the number of scientists in the 20th century. (In the 21st century the number of scientists has already reached a plateau, so now we should probably count AI specialists as nodes.)

In short: Korotayev provides a hyperbolic law of acceleration and its derivation from plausible assumptions, but it is only applicable to demographics in human history, from its beginning until the middle of the 20th century, when demographics stopped obeying this law. Panov provides data points for all of history, from the beginning of the universe until the end of the 20th century, and shows that these data points are governed by a hyperbolic law, but he writes the law in a different form: that of constantly diminishing intervals between biological and (lately) scientific revolutions. (Each interval is about 2.67 times shorter than the previous one, which implies a hyperbolic law.)
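Here is a small sketch of why Panov's form of the law implies a finite singularity date: intervals shrinking by a constant factor form a convergent geometric series. (The starting year and first interval below are made-up illustrative numbers, not Panov's actual data; only the factor 2.67 is his.)

```python
# Geometrically shrinking intervals between "revolutions" accumulate at a
# finite date, i.e. a singularity. Only the factor 2.67 is taken from Panov;
# the starting year and first interval are illustrative placeholders.
factor = 2.67
year = 1500.0      # hypothetical date of some revolution
interval = 300.0   # hypothetical length of the next interval, in years

for i in range(10):
    year += interval
    interval /= factor
    print(f"revolution {i + 1}: {year:.1f}")

# The dates converge to 1500 + 300 * factor / (factor - 1), roughly 1980:
# the geometric series of intervals sums to a finite limit.
```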

What I did here: I suggested that Korotayev's explanation of the hyperbolic law also works for pre-human history as an explanation of the accelerating evolutionary process, and that it will work in the 21st century as a law describing the evolution of an environment of AI agents. It may need some updates if we also include the speed of transactions, but that would give even quicker results.

Moore's law is only an exponential approximation; seen as the speed of technological development in general, it is hyperbolic over the longer term. Kurzweil wrote: "But I noticed something else surprising. When I plotted the 49 machines on an exponential graph (where a straight line means exponential growth), I didn't get a straight line. What I got was another exponential curve. In other words, there's exponential growth in the rate of exponential growth. Computer speed (per unit cost) doubled every three years between 1910 and 1950, doubled every two years between 1950 and 1966, and is now doubling every year."

While we now know that Moore's law in hardware has slowed to 2.5 years for each doubling, we will probably now start to see exponential growth in the ability of programs.

Neural net development has a doubling time of around one year or less. Moore's law is like a spiral which circles around more and more intelligent technologies, and it consists of many small S-shaped curves. It all deserves a longer explanation. Here I only show that Moore's law, as we know it, does not contradict the hyperbolic law of acceleration of a super-intelligent environment – the exponential is simply how we see that law on a small scale.
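A quick sketch of that last point: if the doubling time itself keeps shrinking, as in the Kurzweil quote above, the result is faster than any fixed exponential, which is what a hyperbolic law looks like when sampled over short windows. The doubling times below are the ones Kurzweil quotes; the exact interval boundaries are approximate.

```python
# Constant doubling time -> ordinary exponential growth.
# Shrinking doubling time -> "exponential growth in the rate of exponential
# growth", i.e. how a hyperbolic law appears on a small scale.
def growth_factor(regimes):
    """regimes: list of (years_spent, doubling_time_in_years)."""
    doublings = sum(years / dt for years, dt in regimes)
    return 2 ** doublings

constant = [(90, 3.0)]                          # 90 years at a fixed 3-year doubling
shrinking = [(40, 3.0), (16, 2.0), (34, 1.0)]   # ~1910-1950, 1950-1966, 1966-2000

print(f"constant doubling time:  x{growth_factor(constant):.2e}")
print(f"shrinking doubling time: x{growth_factor(shrinking):.2e}")
```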

 

Neural network results: perplexity (lower is better)

46.8, "one billion word benchmark", v1, 11 Dec 2013
43.8, "one billion word benchmark", v2, 28 Feb 2014
41.3, "skip-gram language modeling", 3 Dec 2014
24.2, "Exploring the limits of language modeling", 7 Feb 2016 http://arxiv.org/abs/1602.02410

 

Child age equivalence in answering questions about a picture:

3 May 2015 —4.45 years old http://arxiv.org/abs/1505.00468
7 November 2015 —5.45 y.o. (in 6 months it aged by one year) http://arxiv.org/abs/1511.02274
4 March 2016 —6.2 y.o. http://arxiv.org/pdf/1603.01417

Material from Sergey Shegurin 
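A back-of-the-envelope look at the rates implied by the two lists above (first and last points only; the "mental age" figure is, of course, only a loose proxy for capability):

```python
# Rough improvement rates implied by the two result lists above.
from datetime import date

def years_between(d1, d2):
    return (d2 - d1).days / 365.25

# Perplexity on the one-billion-word benchmark: 46.8 -> 24.2 (lower is better).
ppl_span = years_between(date(2013, 12, 11), date(2016, 2, 7))
print(f"perplexity nearly halved (46.8 -> 24.2) in {ppl_span:.1f} years")

# "Child age equivalence" on picture questions: 4.45 -> 6.2 years old.
vqa_span = years_between(date(2015, 5, 3), date(2016, 3, 4))
rate = (6.2 - 4.45) / vqa_span
print(f"'mental age' grew by about {rate:.1f} years per calendar year")
```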


Other considerations

Human-level agents and the Turing test. OK, we know that the brain is very complex, but if the power of individual agents in the AI environment grows so quickly, agents capable of passing a Turing test should appear – and it will happen very soon. For a long time, however, the nodes of this net will be small companies and personal assistants, which could provide superhuman results. There is already a marketplace where various projects can exchange results or data using APIs. As a result, the Turing test will become meaningless, because the most powerful agents will be helped by humans.

In any case, some kind of “mind brick”, or universal robotic brain will also appear.

Physical size of Strong AI: because the speed of light is finite, the super-intelligence must decrease in size rather than increase in order to keep communication inside itself fast. Otherwise, information exchange will slow down and the development rate will be lost.

Therefore, the super-intelligence should have a small core, e.g. up to the size of the Earth, and even smaller in the future. The periphery can be huge, but it will perform only technical functions – defence and "nutrition".
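To illustrate the light-speed constraint, here is a small sketch of round-trip communication times across cores of various sizes (the sizes are arbitrary examples):

```python
# Round-trip light travel time across a "core" of a given diameter:
# the bigger the core, the slower its internal communication.
C = 299_792_458  # speed of light, m/s

examples = [
    ("1 m computer rack", 1.0),
    ("100 km city", 1e5),
    ("Earth diameter", 1.27e7),
    ("Earth-Moon distance", 3.84e8),
    ("1 astronomical unit", 1.5e11),
]

for name, diameter_m in examples:
    round_trip = 2 * diameter_m / C
    print(f"{name:>22}: {round_trip:.2e} s round trip")
```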

Transition to the next super-intelligent environment. It is logical to suggest that the next super-intelligence will also be an environment rather than a single small agent. It will be something like a net of neural net-based agents as well as connected humans. The transition may seem soft on a small time scale, but it will be disruptive in its final results. It is already happening: the Internet, AI agents, open AI, you name it. An important part of such a transition is the change in the speed of interaction between agents. In evolution the transaction time was thousands of years, the time needed to test new mutations. In science it was months, the time needed to publish an article. Now it is limited by the speed of the Internet, which depends not only on the speed of light but also on its physical size, bandwidth and so on, and has a transaction time on the order of seconds.
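The orders of magnitude involved in that speed-up, as a rough sketch (the specific numbers are only order-of-magnitude guesses consistent with the paragraph above):

```python
# Orders of magnitude for the "transaction times" of the three environments.
SECONDS_PER_YEAR = 3.15e7

transaction_times = [
    ("evolution (testing a mutation)", 1_000 * SECONDS_PER_YEAR),  # thousands of years
    ("science (publishing an article)", 0.25 * SECONDS_PER_YEAR),  # a few months
    ("internet-connected AI agents", 1.0),                         # about a second
]

prev = None
for name, seconds in transaction_times:
    note = "" if prev is None else f" (~{prev / seconds:.0e}x faster than the previous stage)"
    print(f"{name}: ~{seconds:.0e} s{note}")
    prev = seconds
```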

So, a new super-intelligence will arise in a rather "ordinary" fashion: the power and number of interacting AI agents will grow, they will become quicker, and they will rapidly perform whatever tasks are fed to them. (Elsewhere I discussed this and concluded that such a system may evolve into two very large super-intelligent agents locked in a cold war, and that a hard take-off of any single AI agent against the AI environment is unlikely. But this does not give us AI safety, since a war between two such agents would be very destructive – consider nanoweapons.)

Super-intelligent agents. As the power of individual agents grows, they will reach human and later superhuman levels. They may even invest in self-improvement, but if many agents do this simultaneously, it will not give any of them a decisive advantage.

Human safety in the super-intelligent agent environment. There is a well-known strategy for staying safe in an environment where agents are more powerful than you and fight each other: make alliances with some of the agents, or become such an agent yourself.

Fourth super-intelligence? Such a distributed, neural net-based super-intelligence may not be the last, if a quicker way of completing transactions between agents is found. Such a way may be an ecosystem built on the miniaturization of all agents. (And this may solve the Fermi paradox – any AI evolves to smaller and smaller sizes, and thus makes infinite calculations in finite outer time, perhaps using an artificial black hole as an artificial Tipler Omega Point or femtotech in the final stages.) John Smart's conclusions are similar.

Singularity: it could still happen around 2030, as predicted by von Foerster's law, and the main reason for this is the nature of the hyperbolic law and its underlying causes: the growing number of agents and the IQ of each agent.

Oscillation before the singularity: growth may become more and more unstable as we near the singularity because of the rising probability of global catastrophes and other consequences of disruptive technologies. If true, we will never reach the singularity, either dying off shortly before it or oscillating near its "Schwarzschild sphere", neither extinct nor able to create a stable strong AI.

The super-intelligent environment still reaches a singularity point, but a point cannot be an environment by definition. Oops. Perhaps an artificial black hole as the ultimate computer would help to resolve this paradox.

Ways of enhancing the intelligent environment: growth in the number of agents, growth in agent performance speed, growth in the inter-agent data exchange rate, growth in individual agent intelligence, and improvement in the principles by which agents are organized into working structures.

The main problem of an intelligent environment: chicken or egg? Who will win: the super-intelligent environment or the super-agent? Any environment can be capped by an agent that submits tasks to it and uses its results. On the other hand, if there are at least two super-agents of this kind, they themselves form an environment.

 

Problems with the model:

1) The model excludes the possibility of black swans and other disruptive events, and assumes continuous and predictable acceleration, even after human-level AI is created.

2) The model is itself disruptive, as it predicts infinity within a very short time frame – 15 years from now. But the expert consensus puts human-level AI in the 2060-2090 timeframe.

These two problems may somehow cancel each other out.

The model includes the idea of oscillation before the singularity, which may postpone AI and prevent infinity. The singularity point inside the model is itself calculated using points from the remote past; if we take more recent points into account, we get a later date for the singularity, thus saving the model.

If we say that, because of catastrophes and unpredictable events, the hyperbolic law will slow down and strong AI will be created before 2100, we get a more plausible picture.

This may be similar to R. Hanson's "ems universe", but here the neural net-based agents are not human emulations, which play only a minor role in this story.

Limitation of the model: it is only a model, so it will stop working at some point. Reality will surprise us, but reality doesn't consist only of black swans; models may work in between them.

TL;DR: Science and evolution are super-intelligent environments governed by the same hyperbolic acceleration law, which will soon produce a new super-intelligent environment consisting of neural net-based agents. The singularity will come after this, possibly as soon as 2030.

Comments

Capitalism/The Market.

It also has optimization power, but I would not call it superintelligent.

I would also like to mention that there are several other strong optimization processes which we could use as metaphors for superintelligence.

The first of them is the human collective unconscious, which created all historical human languages (English, Chinese). No one intended to do it, but the human hive did it. It may have done some other important cultural things as well, but languages are the clearest example.

Another is the personal human unconscious, which creates human dreams. Its computing power is enormous, but the purpose of dreaming is not clear.

And the third is the anthropic principle, which "created" the entire observable universe and its laws (via observer selection).

We live in a world full of strong optimisation processes, and it is really interesting to look at them.