The map of future models
TL;DR: Many models of the future exist. Several are relevant. The hyperbolic model is the strongest, but too strange.
Our need: correct model of the future
Different people: different models = no communication.
Assumptions:
Model of the future = main driving force of historical process + graphic of changes
Model of the future determines global risks
The map: lists all main future models.
Structure: from fast growth – to slow growth models.
Pdf: http://immortality-roadmap.com/futuremodelseng.pdf
Does immortality imply eternal existence in linear time?
The question is important, as it is often used as an argument against the idea of immortality, on the level of desirability as well as feasibility. It may result in less interest in radical life extension, since "the result will be the same": we will die. Religion, on the other hand, is not afraid to "sell" immortality, as it has God, who will solve all contradictions in the implementation of immortality. As a result, religion wins on the market of ideas.
Immortality (by definition) is about not dying. The fact of eternal linear existence seems to follow from it by a very simple and obvious theorem:
“If I do not die in the time moment N and N+1, I will exist for any N”.
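This theorem is just mathematical induction. A minimal formal sketch (Lean 4; the `alive` predicate and theorem name are illustrative, not anything physical):

```lean
-- "If I do not die at moment N and N+1, I will exist for any N":
-- if being alive at N implies being alive at N+1, and I am alive now,
-- then I am alive at every N. `alive` is an illustrative predicate.
theorem eternalExistence (alive : Nat → Prop)
    (now : alive 0)
    (noDeath : ∀ n, alive n → alive (n + 1)) :
    ∀ n, alive n := by
  intro n
  induction n with
  | zero => exact now
  | succ k ih => exact noDeath k ih
```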
If we prove that immortality is impossible, then any life would look like: Now + unknown very long time + death. So, death is inevitable, and the only difference is the unknown time until it happens.
It is an unpleasant perspective, by the way.
So we have either "bad infinity" or inevitable death. Both look unappealing. Both also look logically contradictory. "Infinite linear existence" requires infinite memory of the observer, for example. "Death of the observer" also implies the idea of an ending of the stream of experiences, which can't be proved empirically, and from a logical point of view is an unproven hypothesis.
But we can change our point of view if we abandon the idea of linear time.
Physics suggests that closed time-like curves could be possible near black holes. https://en.wikipedia.org/wiki/Closed_timelike_curve (Nietzsche's idea of "eternal recurrence" is an example of such circular immortality.)
If I am in such a curve, my experiences may recur after, say, one billion years. In this case, I am immortal but have finite time duration.
It may not be very good, but it is just a starting point in considerations that could help lead us away from the linear time model.
There may be other configurations in non-linear time. Another obvious one is the merging of different personal timelines.
Another is the circular attractor.
Another is a combination of attractors, merges and circular timelines, which may result in complex geometry.
Another is two- (or many-) dimensional time, with a second, perpendicular time arrow. This results in a time topology. Time could also include singularities, in which one has an infinite number of experiences in finite time.
We could also add the idea of splitting time in the quantum multiverse.
We could also add the idea that there is a possible path between any two observer-moments, and given that infinitely many such paths exist in a splitting multiverse, any observer has a non-zero probability of becoming any other observer, which results in a tangle of time-like curves in the space of all possible minds.
Timeless physics also gives us another view of the idea of "time", in which we don't have "infinite time", not because infinity is impossible, but because there is no such thing as time.
TL;DR: The idea of time is so complex that we can’t state that immortality results in eternal linear existence. These two ideas may be true or false independently.
I also have a question for the readers: if you think that superintelligence will be created, do you think it will be immortal, and why?
Two super-intelligences (evolution and science) already exist: what could we learn from them in terms of AI's future and safety?
There are two things in the past that may be called super-intelligences, if we consider the level of tasks they have solved. Studying them is useful when we are considering the creation of our own AI.
The first one is biological evolution, which managed to give birth to such a sophisticated thing as man, with its powerful mind and natural languages. The second one is all of human science when considered as a single process, a single hive mind capable of solving such complex problems as sending man to the Moon.
What can we conclude about future computer super-intelligence from studying the available ones?
Goal system. Both super-intelligences are purposeless. They don't have any final goal that directs the course of their development, but they solve many local goals in order to survive in the moment. This is an amazing fact, of course.
They also lack a central regulating authority. Of course, the goal of evolution is survival at any given moment, but it is a rather technical goal, needed for the realization of the evolutionary mechanism.
Both complete a great number of tasks, but no unitary final goal exists. It is just like a human life: values and tasks change, while the brain remains.
Consciousness. Evolution lacks it, science has it, but to all appearances, it is of little significance.
That is, there is no center to it, neither a perception center nor a purpose center. At the same time, all tasks are completed. The subconscious part of the human brain works the same way.
Master algorithm. Both super-intelligences are based on the principle: collaboration of numerous smaller intelligences plus natural selection.
Evolution is impossible without billions of living creatures testing various gene combinations. Each of them solves its own egoistic tasks and does not care about any global purpose. For example, few people think that selection of the best marriage partner is a species evolution tool (assuming that sexual selection is true). Interestingly, the human brain has the same organization: it consists of billions of neurons, but they don’t all see its global task.
Roughly, there have been several million scientists throughout history. Most of them have been solving unrelated problems too, while the least refutable theories passed through selection (taking social mechanisms into account).
Safety. Dangerous, but not hostile.
Evolution may experience ecological crises; science creates an atomic bomb. There are hostile agents within both, which have no super-intelligence (e.g. a tiger, a nation state).
Within an intelligent environment, however, a dangerous agent may appear which is stronger than the environment and will "eat it up". Such a transition is difficult to initiate: the transition from evolution to science would also have been difficult to initiate from evolution's point of view (if it had one).
How to create our super-intelligence. Assume we agree that a super-intelligence is an environment possessing multiple agents with differing purposes.
So we could create an “aquarium” and put a million differing agents into it. At the top, however, we set an agent to cast tasks into it and then retrieve answers.
The hardware requirements are currently very high: we would need to simulate millions of human-level agents. A computational environment of about 10 to the power of 20 flops is required to simulate a million brains. In general, this is close to the total power of the Internet. It could be implemented as a distributed network, where individual agents are owned by individual human programmers and solve different tasks – something like SETI@home or the Bitcoin network.
Everyone can cast a task into the network, but provides a part of their own resources in return.
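The hardware estimate above can be sanity-checked with trivial arithmetic; the ~10^14 flops-per-brain figure below is an assumed common estimate, not a number stated in the text:

```python
# Back-of-envelope check of the "10^20 flops for a million brains" claim.
FLOPS_PER_BRAIN = 1e14   # assumed estimate for one human-level agent
N_AGENTS = 1_000_000     # a million simulated brains

total_flops = FLOPS_PER_BRAIN * N_AGENTS
print(f"{total_flops:.0e}")  # 1e+20
```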
Speed of development of superintelligent environment
Hyperbolic law. The super-intelligent environment develops hyperbolically. Korotaev shows that the human population grows according to the law N = 1/T, where T is the time remaining until the singularity (von Foerster's law, with the singularity at 2026), which is a solution of the following differential equation:
dN/dt = N*N
A solution and more detailed explanation of the equation can be found in this article by Korotaev (article in Russian, and in his English book on p. 23). Notably, the growth rate depends on the second power of the population size. The second power was derived as follows: one N means that a bigger population has more descendants; the second N means that a bigger population provides more inventors who generate a growth in technical progress and resources.
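A quick numerical sketch of this equation (illustrative, with the constant set to 1 and the singularity placed at 2026, as in the text): a forward-Euler integration of dN/dt = N² tracks the exact hyperbola.

```python
# dN/dt = N^2 has the exact solution N(t) = 1/(t0 - t): a hyperbola
# that blows up at t = t0 (here t0 = 2026).
t0 = 2026.0

def n_exact(t):
    return 1.0 / (t0 - t)

# Forward-Euler integration from 1900 to 2000 reproduces the hyperbola.
t, n, dt = 1900.0, n_exact(1900.0), 0.001
while t < 2000.0:
    n += n * n * dt   # dN/dt = N^2
    t += dt

print(n, n_exact(2000.0))  # both close to 1/26 ≈ 0.03846
```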
Evolution and technological progress are also known to develop hyperbolically (see below for how this connects with the exponential nature of Moore's law; an exact layout of hyperbolic acceleration throughout history may be found in Panov's article "Scaling law of the biological evolution and the hypothesis of the self-consistent Galaxy origin of life"). The expected singularity will occur in the 21st century. And now we know why: evolution and technological progress are both controlled by the same development law of the super-intelligent environment. This law states that the intelligence of an intelligent environment depends on the number of nodes and on the intelligence of each node. This is, of course, a very rough estimation, as we should also include the speed of transactions.
However, Korotaev gives an equation for population size only, while it is actually also applicable to evolution – the more individuals, the more often important and interesting mutations occur – and to the number of scientists in the 20th century. (In the 21st century that number has already reached a plateau, so now we should probably count AI specialists as the nodes.)
In short: Korotaev provides a hyperbolic law of acceleration and its derivation from plausible assumptions, but it is only applicable to demographics in human history from its beginning until the middle of the 20th century, when demographics stopped obeying this law. Panov provides data points for all of history, from the beginning of the universe until the end of the 20th century, and showed that these data points are governed by a hyperbolic law, but he wrote this law down in a different form: that of constantly diminishing intervals between biological and (lately) scientific revolutions. (Each interval is 2.67 times shorter than the previous one, which implies a hyperbolic law.)
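Panov's form can be checked directly: intervals shrinking by a constant factor form a geometric series with a finite sum, i.e. an accumulation point at a finite date (the first interval length below is illustrative):

```python
# Each interval between revolutions is ~2.67x shorter than the previous
# one, so the total remaining time is a convergent geometric series:
# a + a/r + a/r^2 + ... = a * r / (r - 1).
RATIO = 2.67
first_interval = 100.0  # illustrative first interval, in years

total = first_interval * RATIO / (RATIO - 1)
print(round(total, 1))  # ≈ 159.9 years to the accumulation point
```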
What I did here: I suggested that Korotaev's explanation of the hyperbolic law also works as an explanation of the accelerated evolutionary process in pre-human history, and that it will work in the 21st century as a law describing the evolution of an environment of AI agents. It may need some updates if we also include the speed of transactions, but that would give even quicker growth.
Moore's law is only an exponential approximation; it is hyperbolic in the longer term, if seen as the speed of technological development in general. Kurzweil wrote: "But I noticed something else surprising. When I plotted the 49 machines on an exponential graph (where a straight line means exponential growth), I didn't get a straight line. What I got was another exponential curve. In other words, there's exponential growth in the rate of exponential growth. Computer speed (per unit cost) doubled every three years between 1910 and 1950, doubled every two years between 1950 and 1966, and is now doubling every year."
While we now know that Moore's law in hardware has slowed to 2.5 years for each doubling, we will probably now start to see exponential growth in the ability of programs.
Neural net development has a doubling time of around one year or less. Moore's law is like a spiral, which circles around more and more intelligent technologies, and it consists of small S-curves. All of this deserves a longer explanation. Here I show that Moore's law, as we know it, does not contradict the hyperbolic law of acceleration of a super-intelligent environment; it is simply how we see that law on a small scale.
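Kurzweil's observation above (the doubling time itself shrinking: 3 years, then 2, then 1) means growth faster than any single exponential. A sketch with the era lengths from the quote (the length of the last era is chosen for illustration):

```python
import math

def total_speedup(eras):
    """Multiply up the speed gain over eras of (years, doubling_time)."""
    factor = 1.0
    for years, doubling_time in eras:
        factor *= 2 ** (years / doubling_time)
    return factor

# From the quote: 1910-1950 doubling every 3 years, 1950-1966 every
# 2 years, then every year (10 such years shown for illustration).
eras = [(40, 3), (16, 2), (10, 1)]
print(f"2^{math.log2(total_speedup(eras)):.2f}")  # 2^31.33
```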
Neural networks results: Perplexity
46.8, "one billion word benchmark", v1, 11 Dec 2013
43.8, "one billion word benchmark", v2, 28 Feb 2014
41.3, "skip-gram language modeling", 3 Dec 2014
24.2, "Exploring the limits of language modeling", 7 Feb 2016 http://arxiv.org/abs/1602.02410
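From the first and last data points above, one can estimate the annualized rate of improvement (a rough fit over two points, nothing more):

```python
from datetime import date

# Perplexity went from 46.8 (11 Dec 2013) to 24.2 (7 Feb 2016).
p0, d0 = 46.8, date(2013, 12, 11)
p1, d1 = 24.2, date(2016, 2, 7)

years = (d1 - d0).days / 365.25
per_year = (p0 / p1) ** (1 / years)  # multiplicative improvement per year
print(f"{years:.2f} years, x{per_year:.2f} per year")  # 2.16 years, x1.36 per year
```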
Child age equivalence in answering questions about a picture:
3 May 2015 —4.45 years old http://arxiv.org/abs/1505.00468
7 November 2015 — 5.45 y.o. (grew by a year in 6 months) http://arxiv.org/abs/1511.02274
4 March 2016 —6.2 y.o. http://arxiv.org/pdf/1603.01417
Material from Sergey Shegurin
Other considerations
Human-level agents and the Turing test. OK, we know that the brain is very complex, and if the power of individual agents in the AI environment grows so quickly, agents capable of passing a Turing test should appear – and it will happen very soon. But for a long time the nodes of this net will be small companies and personal assistants, which could provide superhuman results. There is already a marketplace where various projects can exchange results or data using APIs. As a result, the Turing test will be meaningless, because the most powerful agents will be helped by humans.
In any case, some kind of “mind brick”, or universal robotic brain will also appear.
Physical size of a strong AI: since the speed of light is limited, a super-intelligence must decrease in size rather than increase, in order to keep communications inside itself quick. Otherwise, the information exchange will slow down and the development rate will be lost.
Therefore, the super-intelligence should have a small core, e.g. up to the size of the Earth, and even smaller in the future. The periphery can be huge, but it will perform technical functions: defence and nutrition.
Transition to the next super-intelligent environment. It is logical to suggest that the next super-intelligence will also be an environment rather than a small agent. It will be something like a net of neural net-based agents as well as connected humans. The transition may seem soft on a small time scale, but it will be disruptive in its final results. It is already happening: the Internet, AI agents, OpenAI, you name it. The important part of such a transition is the changing speed of interaction between agents. In evolution the transaction time was thousands of years – the time needed to test new mutations. In science it was months – the time needed to publish an article. Now it is limited by the speed of the Internet, which depends not only on the speed of light, but also on its physical size, bandwidth and so on, and has a transaction time on the order of seconds.
So, a new super-intelligence will rise in a rather “ordinary” fashion: The power and number of interacting AI agents will grow, become quicker and they will quickly perform any tasks which are fed to them. (Elsewhere I discussed this and concluded that such a system may evolve into two very large super-intelligent agents which will have a cold war, and that hard take-off of any AI-agent against an AI environment is unlikely. But this does not result in AI safety since war between such two agents will be very destructive – consider nanoweapons. ).
Super-intelligent agents. As the power of individual agents grows, they will reach human and later superhuman levels. They may even invest in self-improvement, but if many agents do this simultaneously, it will not give any of them a decisive advantage.
Human safety in the super-intelligent agent environment. There is a well-known strategy for staying safe in an environment where agents are more powerful than you and fight each other: make alliances with some of the agents, or become such an agent yourself.
Fourth super-intelligence? Such an AI neural-net-distributed super-intelligence may not be the last, if a quicker way of completing transactions between agents is found. Such a way may be an ecosystem built on the miniaturization of all agents. (And this may solve the Fermi paradox – any AI evolves to smaller and smaller sizes, and thus makes infinite calculations in finite outer time, perhaps using an artificial black hole as an artificial Tipler Omega point or femtotech in the final stages.) John Smart's conclusions are similar.
Singularity: It could still happen around 2030, as predicted by von Foerster's law, and the main reason for this is the nature of the hyperbolic law and its underlying causes: the growing number of agents and the IQ of each agent.
Oscillation before the singularity: Growth may become more and more unstable as we near the singularity, because of the rising probability of global catastrophes and other consequences of disruptive technologies. If true, we will never reach the singularity: we will either die off shortly before it, or oscillate near its "Schwarzschild sphere", neither extinct nor able to create a stable strong AI.
The super-intelligent environment still reaches a singularity point, but a point cannot be the environment by definition. Oops. Perhaps an artificial black hole as the ultimate computer would help to solve such a paradox.
Ways of enhancing the intelligent environment: growth in the number of agents, in agent performance speed, in the inter-agent data exchange rate, in individual agent intelligence, and improvement of the principles by which agents are organized.
The main problem of an intelligent environment: chicken or egg? – Who will win: the super-intelligent environment or the super-agent? Any environment can be covered by an agent submitting tasks to it and using its data. On the other hand, if there are at least two super-agents of this kind, they form an environment.
Problems with the model:
1) The model excludes the possibility of black swans and other disruptive events, and assumes continuous and predictable acceleration, even after human level AI is created.
2) The model is itself disruptive, as it predicts infinity, and in the very short time frame of 15 years from now – while the expert consensus puts AI in the 2060-2090 timeframe.
These two problems may somehow cancel each other out.
The model includes the idea of oscillation before the singularity, which may postpone AI and prevent infinity. The singularity point inside the model is itself calculated using points from the remote past, and if we take more recent points into account, we get a later date for the singularity, thus saving the model.
If we say that, because of catastrophes and unpredictable events, the hyperbolic law will slow down and strong AI will be created before 2100, we get a more plausible picture.
This may be similar to R. Hanson's "ems universe", but here neural net-based agents are not equal to human emulations, which play a minor role in the whole story.
Limitation of the model: It is only a model, so it will stop working at some point. Reality will surprise us, but reality doesn't consist only of black swans; models may work between them.
TL;DR: Science and evolution are super-intelligent environments governed by the same hyperbolic acceleration law, which soon will result in a new super-intelligent environment, consisting of neural net-based agents. Singularity will come after this, possibly as soon as 2030.
The map of nanotech global catastrophic risks
Nanotech seems to be a smaller risk than AI or biotech, but its advanced forms offer many routes to omnicide. Nanotech will probably be created after strong biotech, but shortly before strong AI (or by AI), so the period of vulnerability is rather short. Anyway, nanotech has different stages in its future development, mostly depending on its level of miniaturization and ability to replicate. To control it, some kind of protective shield will have to be built in the future, which may have its own failure modes.
The main reading about the risk is Freitas's article "Some limits to global ecophagy by biovorous nanoreplicators" and "Nanoshield".
Some integration between biotech and nanotech has already started in the form of DNA origami. So the first nanobots may be bio-nanobots, like an upgraded version of E. coli.
Pdf is here: http://immortality-roadmap.com/nanorisk.pdf

The map of global catastrophic risks connected with biological weapons and genetic engineering
TL;DR: Biorisks could result in extinction because of a multipandemic in the near future, and their risk is of the same order of magnitude as the risk of UFAI. A lot of biorisks exist; they are cheap, and they could materialize soon.
It may be surprising that the amount of published research about the risks of a global biological catastrophe is much smaller than the number of papers about the risks of self-improving AI. (One exception is the "Strategic terrorism" research paper by the former chief technology officer of Microsoft.)
This can't be explained by biorisks having a smaller probability (which will not be known until Bostrom writes the book "Supervirus"). I mean, we won't know it until a lot of research has been done.
Also, biorisks are closer in time than AI risks, and because of that they shadow AI risks, lowering the probability that extinction will happen by means of UFAI, because it could happen earlier by means of bioweapons (e.g. if the UFAI risk is 0.9, but the chance that we die from bioweapons before its creation is 0.8, then the actual AI risk is 0.9 × 0.2 = 0.18). So studying biorisks may be more urgent than studying AI risks.
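The discounting argument in parentheses, as a one-liner (the numbers are the example's, not estimates of mine):

```python
# Extinction risks that can strike earlier "shadow" later ones.
p_bio_first = 0.8  # chance bioweapons kill us before UFAI is created
p_ufai = 0.9       # chance UFAI kills us, given we survive until then

actual_ufai_risk = (1 - p_bio_first) * p_ufai
print(round(actual_ufai_risk, 2))  # 0.18
```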
There is no technical obstacle to creating a new flu virus that could kill a large part of the human population. And the idea of a multipandemic – the possibility of releasing 100 different agents simultaneously – tells us that biorisk could have arbitrarily high global lethality. Most of the bad things in this map could be created in the next 5-10 years, and no improbable insights are needed. Biorisks are also very cheap to produce, and a small civic or personal biolab could be used to create them.
Maybe research estimating the probability of human extinction from biorisks has been done secretly? I am sure that a lot of analysis of biorisks exists in secret. But this means it does not exist in public, and scientists from other domains of knowledge can't independently verify it and incorporate it into a broader picture of risks. Secrecy here may be useful if it concerns concrete facts about how to create a dangerous virus. (I was surprised by the effectiveness with which the Ebola epidemic was stopped after the decision to do so was made, so maybe I should not underestimate government knowledge on the topic.)
I had concerns about whether I should publish this map. I am not a biologist, and the chances that I have found really dangerous information are small. But what if I inspire bioterrorists to create bioweapons? Then again, we already have a lot of movies providing such inspiration.
So I self-censored one idea that may be too dangerous to publish and put a black box in its place. I also have a section on prevention methods in the lower part of the map. All ideas in the map may be found in Wikipedia or other open sources.
The goal of this map is to show the importance of risks connected with new kinds of biological weapons which could be created if all the recent advances in bioscience were used for ill. The map shows what we should be afraid of and try to control. So it is a map of the possible future development of the field of biorisks.
Not every biocatastrophe will result in extinction; extinction is in the fat tail of the distribution. But smaller catastrophes may delay other good things and widen our window of vulnerability. If protective measures are developed at the same speed as the risks, we are mostly safe. If the overall morality of bioscientists is high, we are most likely safe too: no one will perform dangerous experiments.
Timeline: Biorisks are growing at least exponentially, with the speed of Moore's law in biology. After AI is created and used for global governance and control, biorisks will probably end. This means that the last years before AI creation will be the most dangerous from the point of view of biorisks.
The first part of the map presents biological organisms that could be genetically edited for global lethality, and each box presents one scenario of a global catastrophe. While many boxes are similar to existing bioweapons, they are not the same, as few known bioweapons could result in a large-scale pandemic (except smallpox and flu). The most probable biorisks are outlined in red in the map. And the real one will probably not be from the map, as the world of biotech is very large and I can't cover it all.
The map is provided with links which are clickable in the pdf, which is here: http://immortality-roadmap.com/biorisk.pdf

AI safety in the age of neural networks and Stanislaw Lem 1959 prediction
Tl;DR: Neural networks will result in a slow takeoff and an arms race between two AIs. This has some good and bad consequences for the problem of AI safety. A hard takeoff may still happen afterwards.
Summary: Neural network-based AI can be built; it will be relatively safe, though not for long.
The neuro-AI era (since 2012) features exponential growth of total AI expertise, with a doubling period of about 1 year, mainly due to data exchange among diverse agents and different processing methods. It will probably last for about 10 to 20 years; after that, a hard takeoff of strong AI, or the creation of a Singleton based on integration of different AI systems, can take place.
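The quoted regime implies a simple bound on how much total AI expertise grows before the era ends, assuming the 1-year doubling holds throughout:

```python
# Doubling once a year for 10-20 years multiplies total AI expertise
# by 2^10 to 2^20.
for years in (10, 20):
    print(years, "years ->", 2 ** years, "x")
```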
Neural network-based AI implies a slow takeoff, which can take years and eventually lead to AI's evolutionary integration into human society. A similar scenario was described by Stanisław Lem in 1959: the arms race between countries would cause a power race between AIs. The race is only possible if the self-enhancement rate is rather slow and there is data interchange between the systems. The slow takeoff will result in a world system with two competitive AI-countries. Its major risk will be a war between the AIs and the corrosion of the value systems of the competing AIs.
The hard takeoff implies revolutionary changes within days or weeks. The slow takeoff can transform into the hard takeoff at some stage. The hard takeoff is only possible if one AI considerably surpasses its peers (OpenAI project wants to prevent it).
Part 1. Limitations of explosive potential of neural nets
Every day now we hear about successes of neural networks, and we might conclude that human-level AI is around the corner. But this type of AI is not fit for explosive self-improvement.
If AI is based on neural net, it is not easy for it to undergo quick self-improvement for several reasons:
1. A neuronet's executable code is not fully transparent, for theoretical reasons: knowledge is not explicitly present within it. So even if one can read the neuron weight values, it's not easy to understand how they should be changed to improve something.
2. Educating a new neural network is a resource-consuming task. If a neuro-AI decides to go the way of self-enhancement but is unable to understand its source code, a logical solution would be to 'deliver a child', i.e. to train a new neural network. However, training neural networks requires much more resources than executing them; it requires huge databases and has a high failure probability. All these factors will lead to rather slow AI self-enhancement.
3. Neural network education depends on big data volumes and new ideas coming from the external world. This means that a single AI will hardly break away: if it stops free information exchange with the external world, its level will not surpass the rest of the world considerably.
4. Neural network power depends relatively linearly on the power of the computer it runs on, so with a neuro-AI, hardware power limits its self-enhancement ability.
5. Neuro AI would be a rather big program of about 1 TByte, so it can hardly leak into the network unnoticed (at current internet speeds).
6. Even if a neuro-AI reaches the human level, it will not get self-enhancement ability (because no single person can understand all aspects of science). For this, a big lab with numerous experts in different branches is needed. Additionally, it should be able to run such a virtual laboratory at a rate at least 10-100 times higher than a human being to get an edge over the rest of mankind. That is, it has to be as powerful as 10,000 people or more to surpass the rest of mankind in terms of enhancement rate. This is a very high requirement. As a result, the neural net era may lead to building a human-level, or even slightly superhuman, AI which is unable to self-enhance, or does so so slowly that it lags behind technical progress.
Civilization-level intelligence is the total IQ that the civilization possesses over 100 years of its history, defined as the complexity of the scientific and engineering tasks it can solve. For example, during the 20th century, nuclear weapons were created, but the problems of cancer, aging and AI creation were not solved. This means those tasks have superior complexity.
For a strong AI to be able to change human destiny, its IQ should surpass that of the entire civilization by 100-1000 times. In this case, it could create a remedy against aging, treat cancer, etc. within a year. (To destroy mankind, it would need a much lower level of intelligence, like that of a sophisticated computer virus.)
7. The richest neural net project will get a decisive advantage (or not?). The power of neuronet-based AI is approximately proportional to the power of the computers it runs on, the time, the size of the database it was trained on, and the number of researchers in the project. A project from a basement may only succeed if it has a superior master algorithm of intelligence. That said, a small project can hardly experience exponential self-improvement, because the first AI will already have better computing power and, more importantly, better databases and researchers. And since we now live in an AI-summer period with a lot of hype about AI, big companies will invest in buying the best ideas about the master algorithm (by buying startups).
An objection here is that nowadays anyone who has spent a year on machine learning and has a GPU can reach meaningful results in creating their own RNN. So maybe the collective intelligence of private researchers could outweigh the big companies.
A detailed roadmap of AI failure modes can be found here; and the roadmap of the ways of creating a safe AI here.
See also machine learning expert opinion and recent FHI post.
Part 2. Some ideas about ensuring safety of neural nets based AI
The following concepts can be helpful in creating extra safety measures; while not absolute, they can reduce the chances of AI getting out of control.
1. Program memory access restriction. In state-of-the-art computers, a program code and data share the same memory, which is known as von Neumann architecture. However, there is also Harvard architecture, where data and programs are stored separately. https://en.wikipedia.org/wiki/Harvard_architecture
This kind of computer can be physically produced and provided with indicators of source-code reading attempts. (However, there are already neural networks which generate a new neural network for every task for higher efficiency; this approach will not work for them, or will limit AI power.)
2. A compiled executable code and its source written in a high-level programming language should be stored separately. If AI tries to reach it, it will be a signal that it is getting out of control. With neural networks, a compiled code, its source, and a database used to educate the neural network should be stored separately.
3. An AI should be limited in knowledge useful for building AI. To improve itself, an AI needs a very precise understanding of its internal construction (the same applies to a human being). This enables us to keep it under control: if not provided with this data, it will have problems with self-improvement. That is, an AI which is not an AI expert is safer, to some extent.
4. We can use a "foggy architecture", which is very difficult to optimize because the AI's values and major features are hidden among heaps of other details. Neural networks are an architecture of this kind. But this also makes it difficult for us to put our values into it.
5. We can physically restrict the ability of the AI to create "AI children"; bringing them up requires huge resources, time, and data before they become more effective than their parents. An AI can create another AI, but then it will need to test it for a long time to find out whether the child observes the parent's values, whether it has any mistakes, and whether it is really superior to the original version. This kind of testing is time-consuming, generally nontrivial, and even incomputable. The task is especially difficult for neural networks, which contain a huge number of random elements, are not exact in principle, and whose faultlessness cannot be proven. (Moreover, there are numerous examples where even a small image shift causes a recognition error.) However, this problem can be solved, because this is the way human society works: children are born, tested, and educated.
Creation of AI-children may be restricted by restricting needed infrastructure: memory, datasets, processing power.
6. A neuronet-based agent (such as a robot) will be anthropomorphous in terms of its brain structure. Even if we do not imitate the human brain intentionally, we will get approximately the same thing. In a sense this may be good: even if these AIs supplant people, they will still be almost people, differing from normal people roughly as one generation differs from another. And being anthropomorphous, they may be more compatible with human value systems. At the same time, completely non-humanlike AI architectures may also exist (for example, if evolution is regarded as the inventor).
But the neural-net world will not be Hanson's EM-dominated world. The EM world may appear at a later stage, but I think that exact uploads still will not be the dominant form of AI.
Part 3. Transition from slow to hard takeoff
In a sense, neuronet-based AI is like a chemical-fuel rocket: such rockets do fly, and can even fly across the entire solar system, but they are limited in development potential, bulky, and clumsy.
Sooner or later, using the same principle or another one, a completely different AI can be built that is less resource-consuming and faster at self-improvement.
If a certain superagent is built which can create neural networks but is not a neural network itself, it can be rather small and, partly for that reason, evolve faster. Neural networks have rather poor intelligence per unit of code. The same thing could probably be done more optimally by reducing its size by an order of magnitude, for example by creating a program that analyzes an already trained neural network and extracts all the necessary information from it.
When hardware improves over the next 10-20 years, multiple neuronets will be able to evolve within the same computer simultaneously, or be transmitted via the Internet, which will boost their development.
A smart neuro-AI can analyze all available data-analysis methods and create a new AI architecture able to speed up even faster.
The launch of quantum-computer-based networks could boost their optimization drastically.
There are many other promising AI directions which have not taken off yet: Bayesian networks, genetic algorithms.
The neuro-AI era will feature exponential growth of humanity's total intelligence, with a doubling period of about one year, driven mainly by data exchange among diverse agents and different processing methods. It will last about 10 to 20 years (2025-2035), after which a hard takeoff of strong AI can take place.
That is, the slow-takeoff period will be a period of collective evolution of both computer science and mankind, which will enable us to adapt to the changes under way and adjust them.
Just as there are Mac and PC in the computer world, or Democrats and Republicans in politics, two big competing AI systems are likely to appear (plus an ecology of smaller ones). These could be Google and Facebook, or the USA and China, depending on whether the world chooses economic competition or military opposition. That is, slow takeoff hinders the consolidation of the world under a single control and instead promotes a bipolar model. While a bipolar system can remain stable for a long time, there is always a risk of a real war between the AIs (see Lem's quote below).
Part 4. In the course of the slow takeoff, AI will go through several stages that we can identify now
While the stages may be passed rather quickly or blur into one another, we can still track them as milestones. The dates are only estimates.
1. AI autopilot. Tesla has it already.
2. AI home robot. All the prerequisites exist to build it by 2020 at the latest. Such a robot will be able to understand and fulfill an order like "Bring my slippers from the other room". On its basis, something like a "mind-brick" may be created: a universal robot brain able to navigate in natural space and recognize speech. This mind-brick can then be used to create more sophisticated systems.
3. AI intellectual assistant. Searching through personal documentation, with the possibility to ask questions in natural language and receive sensible answers. 2020-2030.
4. AI model of a human. Still very vague. Could be realized by adapting a robot brain. It will be able to simulate 99% of ordinary human behavior, probably except for solving problems of consciousness, complicated creative tasks, and generating innovations. 2030.
5. AI as powerful as an entire research institution, able to create scientific knowledge and upgrade itself. It could be assembled from numerous human models. 100 simulated people, each working 100 times faster than a human being, would probably be able to create an AI capable of self-improving faster than humans in other laboratories can. 2030-2100.
5a. Self-improvement threshold. AI becomes able to improve itself independently and more quickly than all of humanity.
5b. Consciousness and qualia threshold. AI is able not only to pass the Turing test in all cases, but has experiences and understands why and what it is.
6. Mankind-level AI. AI possessing intelligence comparable to that of the whole of mankind. 2040-2100.
7. AI with intelligence 10-100 times greater than that of the whole of mankind. It will be able to solve the problems of aging, cancer, solar system exploration, and nanorobot construction, and to radically improve the lives of all people. 2050-2100.
8. Jupiter brain: a huge AI using the entire mass of a planet for computation. It could reconstruct dead people, create complex simulations of the past, and dispatch von Neumann probes. 2100-3000.
9. Galactic, Kardashev level 3 AI. Several million years from now.
10. All-Universe AI. Several billion years from now.
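The arithmetic behind stage 5 is simple enough to write down explicitly (a sketch; the numbers come from the text above, and the function name is mine):

```python
def effective_researcher_years(n_simulated: int, speedup: float) -> float:
    """Human-researcher-years produced per real year by a team of
    simulated people, each running faster than real time."""
    return n_simulated * speedup

# 100 simulated researchers at 100x speed do the work of a
# 10,000-person institute every real year.
print(effective_researcher_years(100, 100.0))  # 10000.0
```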
Part 5. Stanisław Lem on AI, 1959, The Investigation
In his novel "The Investigation", Lem's character discusses the future of the arms race and AI:
-------
- Well, it was sometime in 1946. The nuclear race had begun. I knew that when the limit was reached (I mean the maximum destructive power), the development of vehicles to carry the bomb would begin... I mean missiles. And there the limit would be reached as well: both sides would have nuclear warhead missiles at their disposal. And there would arise desks with the notorious buttons, thoroughly hidden somewhere. Once the button is pressed, the missiles take off. Within about twenty minutes comes finis mundi ambilateralis, the mutual end of the world. <…> Those were only the prerequisites. Once started, the arms race cannot stop, you see? It must go on. When one side invents a powerful gun, the other responds by creating harder armor. Only a collision, a war, is the limit. While such a situation means finis mundi, the race must go on. The acceleration, once applied, enslaves people. But let us assume they have reached the limit. What remains? The brain. The command staff's brain. The human brain cannot be improved, so automation must be taken up in this field as well. The next stage is an automated headquarters, or strategic computers. And here an extremely interesting problem arises. Namely, two problems in parallel. Mac Cat drew my attention to it. Firstly, is there any limit to the development of this kind of brain? It is similar to chess-playing devices. A device which can foresee the opponent's actions ten moves in advance always wins against one which foresees only eight or nine moves ahead. The deeper the foresight, the more perfect the brain. This is the first thing. <…> Creating devices of ever greater capacity for strategic decisions means, whether we want it or not, the necessity of increasing the amount of data fed into the brain. That in turn means the growing domination of those devices over mass processes within society.
Such a brain can decide that the notorious button should be placed differently, or that the production of a certain sort of steel should be increased, and it will request loans for the purpose. If a brain like this has been created, one must submit to it. If a parliament starts discussing whether the loans should be issued, delay will occur, and in that same minute the other side may gain the lead. The abolition of parliamentary decisions is inevitable in the future. Human control over the decisions of the electronic brain will narrow as the brain concentrates knowledge. Is that clear? On both sides of the ocean, two continuously growing brains appear. What is the first demand of a brain like this when, in the middle of an accelerating arms race, the next step is needed? <…> The first demand is to increase it, the brain itself! All the rest is derivative.
- In a word, your forecast is that the earth will become a chessboard, and we the pawns, played by two mechanical players in an eternal game?
Sisse's face was radiant with pride.
- Yes. But this is not a forecast. I am simply drawing conclusions. The first stage of the preparatory process is coming to an end; the acceleration is growing. I know all this sounds unlikely. But it is the reality. It really exists!
— <…> And in this connection, what did you propose at that time?
- Agreement at any price. Strange as it sounds, ruin is a lesser evil than the chess game. This is awful, a complete lack of illusions, you know.
-----
Part 6. The primary question is: will strong AI be built during our lifetime?
That is, is this a question of the good of future generations (a question that concerns an effective altruist, not a common person) or a question of my near-term planning?
If AI is built during my lifetime, it may lead either to radical life extension by means of different technologies and the realization of all sorts of good things not to be enumerated here, or to my death, and probably pain, if this AI is unfriendly.
It depends on when AI is built and on my expected lifetime (taking into account the life extension obtainable from weaker AI versions and scientific progress on the one hand, and its reduction due to global risks unrelated to AI on the other).
Note that we should consider different dates for different purposes. If we want to avoid AI risks, we should take the earliest date of its possible appearance (for example, the first 10% of the probability distribution). And if we count on its benefits, then the median.
Since the start of the neuro-revolution, the approximate doubling period of AI algorithm efficiency (mainly in the image recognition area) has been about one year. It is difficult to quantify this process, as task complexity does not change linearly, and the patterns that remain unrecognized are always harder to recognize.
An important factor now is the radical change in attitude towards AI research. The winter is over; an unrestrained summer, with all its overhype, has begun. It has brought huge investments into AI research (see chart), more enthusiasts and employees in the field, and bolder research. It is now a shame not to have one's own AI project; even KAMAZ is developing a friendly AI system. The entry threshold has dropped: one can learn basic neuronet-tuning skills within a year, and heaps of tutorials are available. Supercomputer hardware has become cheaper. A guaranteed market for AIs has also emerged, in the form of autopilot cars and, in the future, home robots.
If algorithm improvement keeps up the pace of about one doubling per year, that means a factor of 1,000,000 over 20 years, which would certainly amount to a strong AI beyond the self-improvement threshold. In that case, many people (myself included) have a good chance of living to that moment and gaining immortality.
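The doubling arithmetic in the paragraph above can be checked directly (a sketch; it assumes a clean constant doubling rate, which the text itself admits is hard to quantify):

```python
def relative_efficiency(years: float, doublings_per_year: float = 1.0) -> float:
    """Relative algorithmic efficiency after the given number of years,
    assuming a constant exponential doubling rate."""
    return 2.0 ** (years * doublings_per_year)

# Twenty doublings give a factor of 2**20 = 1,048,576, the
# "1,000,000 over 20 years" mentioned in the text.
print(relative_efficiency(20))  # 1048576.0
```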
Conclusion
Even a non-self-improving neural AI system may be unsafe if it gains global domination (and has bad values) or enters into confrontation with an equally large opposing system. Such a confrontation may result in a nuclear or nanotech-based war, with the human population held hostage, especially if both systems have pro-human value systems (blackmail).
In any case, in a slow takeoff, the AI risks of human extinction are not inevitable and are manageable on an ad hoc basis. Slow takeoff does not prevent hard takeoff at a later stage of AI development.
Hard takeoff is probably the next logical stage after soft takeoff, as it continues the trend of accelerating progress. Biological evolution shows the same pattern: the slow enlargement of mammalian brains over the last tens of millions of years was replaced by the almost hard takeoff of Homo sapiens intelligence, which now threatens the ecological balance.
Hard takeoff is a global catastrophe almost by definition, and extraordinary measures are needed to steer it onto a safe path. Perhaps the period of almost-human-level neural-net-based AI will help us create instruments of AI control. Perhaps we could use simpler neural AIs to control a self-improving system.
Another option is that the neural AI age will be very short and is already almost over. In 2016, Google DeepMind's system beat a top human player at Go, using a complex approach that combined several AI architectures. If this trend continues, we could get strong AI before 2020, and we are completely unprepared for it.
The map of quantum (big world) immortality
The main idea of quantum immortality (the name "big world immortality" may be better) is that if I die, I will continue to exist in another branch of the world, in which I do not die in the same situation.
This map is not intended to cover every known topic related to QI, so I need to clarify my position.
I think that QI may work, but I treat it as Plan D for achieving immortality, after life extension (A), cryonics (B), and digital immortality (C). All the plans are here.
I also think that it may be proved experimentally: namely, if I turn 120, or am the only survivor of a plane crash, I will assign a higher probability to it. (But you should not try to prove it deliberately, as you will get this information for free over the next 100 years.)
There is also nothing specifically quantum about quantum immortality: it may work in a very large non-quantum world, if that world is large enough to contain my copies. This was also discussed here: Shock Level 5: Big worlds and modal realism.
There is nothing intrinsically good about it either, because most of my surviving branches will be very old and ill. But we can make QI work for us if we combine it with cryonics. Just sign up for it, or even form the intention to sign up, and most likely you will find yourself in a surviving branch in which you are resurrected after cryostasis. (The same is true for digital immortality: record more about yourself, and a future FAI will resurrect you; QI raises the chances of this.)
I do not buy the "measure" objection. It says that one should care only about one's "measure of existence", that is, the number of branches in which one exists, and that if this number diminishes, one is almost dead. But take the example of a book: it still exists as long as at least one copy of it exists. We also cannot measure the measure, because it is unclear how to count branches in an infinite universe.
Nor do I buy the ethical objection that QI may lead an unstable person to suicide, and that we should therefore claim QI is false. I think a rational understanding of QI is that it either does not work or will result in severe injuries. The idea of an existing soul offers a much stronger temptation to suicide, as it at least promises another, better world, but I have never heard of it being suppressed because it might lead to suicide. Religions try to stop suicide (which is logical within their premises) by adding an extra rule against it. So QI itself does not promote suicide; personal instability may be the main cause of suicidal ideation.
I also think there is nothing extraordinary in the QI idea, and that it adds up to normality (in one's immediate surroundings). We have already witnessed examples of similar ideas: the anthropic principle and the fact that we find ourselves on a habitable planet while most planets are dead, or the fact that I was born, but not my billions of potential siblings. Survivorship bias can explain finding oneself in very improbable conditions, and QI is the same idea projected into the future.
The possibility of big world immortality depends on the size of the world and on the nature of "I", that is, on the solution to the personal identity problem. The table shows how big world immortality depends on these two variables: YES means that big world immortality will work, NO means that it will not.
Both variables are currently unknown to us. Simply speaking, QI will not work if the (actually existing) world is small, or if personal identity is very fragile.
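The dependence just described can be written out as a small truth table (a sketch; the variable names are mine, and the logic simply encodes the sentence above: QI works only if the world is big and identity is informational rather than fragile):

```python
def big_world_immortality_works(world_is_big: bool,
                                identity_is_informational: bool) -> bool:
    """Big world immortality needs both a large (or branching) world that
    contains copies of me, and a copy-friendly theory of personal identity."""
    return world_is_big and identity_is_informational

# Enumerate the four cells of the two-variable table.
for world_is_big in (True, False):
    for informational in (True, False):
        verdict = "YES" if big_world_immortality_works(world_is_big, informational) else "NO"
        print(f"big world: {world_is_big}, informational identity: {informational} -> {verdict}")
```

Only the big-world, information-based-identity cell gets a YES; the other three combinations block it.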
My a priori position is that the quantum multiverse and a very big universe are both real, and that information is all you need for personal identity. This position is the most scientific one, as it agrees with current common knowledge about the Universe and the mind. If I had to bet on theories, I would put 50 percent on this combination and 50 percent on all other combinations of theories.
Even in this case QI may not work. It may work technically but become unmeasurable, if my mind suffers so much damage that it cannot understand that QI is working. In that case it would be completely useless, in the same way that the survival of the atoms of which my body is composed is meaningless. But this may be countered by saying that only those of my copies which remember being me should be counted (and such copies will surely exist).
From a practical point of view, QI may help if everything else fails, but we cannot count on it, as it is completely unpredictable. QI should be considered only in the context of other world-changing ideas: the simulation argument, the doomsday argument, and future strong AI.

Levels of global catastrophes: from mild to extinction
It is important to build a bridge between existential risks and other possible risks. If we say that existential risks are infinitely more important than all other risks, we put them beyond the scope of policymakers (who cannot work with infinities). We can reach policymakers if we present x-risks as extreme cases of smaller risks. This can be done for most risks (AI and accelerator catastrophes being notable exceptions).
Smaller catastrophes play a complex role in estimating the probability of x-risks. A chain of smaller catastrophes may result in extinction, but one small catastrophe could postpone bigger risks (though this is not a good solution). The following table presents different levels of global catastrophe depending on their size. The numbers are mostly arbitrary and serve as placeholders for future updates.
http://immortality-roadmap.com/degradlev.pdf

Global catastrophic risks connected with nuclear weapons and nuclear energy
I have created a new map: the map of global catastrophic risks connected with nuclear weapons and nuclear energy.
The map is interactive: if you click the icons on the first page, you will get a detailed explanation of the topic. But this works only in the pdf.
I hope this makes the map more readable while also preserving all the detailed information.
You can download the pdf with working links here: http://immortality-roadmap.com/nukerisk3bookmarks.pdf
Or you may read a presentation here: http://www.slideshare.net/avturchin/global-catastrophic-risks-connected-with-nuclear-weapons-and-energy
The old-school, text-heavy map is here: http://immortality-roadmap.com/nukerisk2.pdf
I would like to get feedback on this new map type: does it help readability and understanding? Does it look more rational and convincing?
I include jpg screenshots of the pdf here, but the working links are only in the pdf.






The map of double scenarios of a global catastrophe
Double scenarios of a global catastrophe.
Download pdf here:
http://immortality-roadmap.com/doublecat.pdf
