A map: Causal structure of a global catastrophe

4 turchin 21 November 2015 04:07PM

Using the Copernican mediocrity principle to estimate the timing of AI arrival

2 turchin 04 November 2015 11:42AM

Gott famously estimated the future duration of the Berlin Wall's existence:

“Gott first thought of his "Copernicus method" of lifetime estimation in 1969 when stopping at the Berlin Wall and wondering how long it would stand. Gott postulated that the Copernican principle is applicable in cases where nothing is known; unless there was something special about his visit (which he didn't think there was) this gave a 75% chance that he was seeing the wall after the first quarter of its life. Based on its age in 1969 (8 years), Gott left the wall with 75% confidence that it wouldn't be there in 1993 (1961 + (8/0.25)). In fact, the wall was brought down in 1989, and 1993 was the year in which Gott applied his "Copernicus method" to the lifetime of the human race.” https://en.wikipedia.org/wiki/J._Richard_Gott

The most interesting unknown about the future is the time of creation of Strong AI. Our priors are insufficient to predict it, because it is such a unique task, so it is reasonable to apply Gott's method.

AI research began in 1950, and so is now 65 years old. If we are currently at a random moment during the period of AI research, then there is a 50% probability that AI will be created during the next 65 years, i.e. by 2080. Not very optimistic. Further, we can say that the probability of its creation within roughly the next 1300 years is 95 per cent. So we get a rather vague prediction that AI will almost certainly be created within the next thousand years or so, and few people would disagree with that.
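
Here is a minimal sketch in Python of this calculation (my own illustration; only the 1950 start year and the confidence levels are taken from the text):

```python
def gott_interval(years_elapsed, confidence):
    # Gott's rule: with probability `confidence` we are past the first
    # (1 - confidence) fraction of the total lifetime, so the total
    # lifetime is at most years_elapsed / (1 - confidence).
    total_lifetime = years_elapsed / (1.0 - confidence)
    return total_lifetime - years_elapsed  # maximum remaining years

start, now = 1950, 2015
elapsed = now - start  # 65 years of AI research so far

for conf in (0.50, 0.95):
    print(conf, now + gott_interval(elapsed, conf))
# 0.5  -> 2080.0 (50% chance of AI by 2080)
# 0.95 -> 3250.0 (about 1200 more years, rounded to 1300 above)
```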

But if we include the exponential growth of AI research in this reasoning (the same way as in the Doomsday argument, where we use birth rank instead of time and thus account for population growth), we get a much earlier predicted date.

We can get data on the growth of AI research from Luke's post:

“According to MAS, the number of publications in AI grew by 100+% every 5 years between 1965 and 1995, but between 1995 and 2010 it has been growing by about 50% every 5 years. One sees a similar trend in machine learning and pattern recognition.”

From this we could conclude that the doubling time in AI research is five to ten years (updated by the recent boom in neural networks, which again suggests the five-year figure).

This means that during the next five years more AI research will be conducted than in all the previous years combined. 

If we apply the Copernican principle to this distribution of research, then there is a 50% probability that AI will be created within the next five years (i.e. by 2020) and a 95% probability that AI will be created within the next 15-20 years; thus it will almost certainly be created before 2035.
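
The following sketch is my own reconstruction of this reasoning, assuming the five-year doubling time quoted above: if research volume doubles every d years and my position in the total volume of research is Copernican-random, then with probability c the remaining volume is at most c/(1-c) times the past volume.

```python
import math

def years_until(confidence, doubling_time):
    # Copernican bound on future research volume: past * c / (1 - c).
    volume_ratio = confidence / (1.0 - confidence)
    # Under exponential growth, volume produced in the next t years is
    # past * (2**(t/doubling_time) - 1); solve that for t.
    return doubling_time * math.log2(volume_ratio + 1.0)

d = 5  # assumed doubling time of AI research, in years
print(years_until(0.50, d))  # 5.0   -> AI by ~2020 at 50% confidence
print(years_until(0.95, d))  # ~21.6 -> by ~2036; the text rounds this
                             # to 15-20 years and the year 2035
```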

This conclusion itself depends on several assumptions:

•   AI is possible

•   The exponential growth of AI research will continue 

•   The Copernican principle has been applied correctly.

 

Interestingly, this coincides with predictions from other methods of estimating AI timing:

•   Conclusions of the most prominent futurologists (Vinge – 2030, Kurzweil – 2029)

•   Surveys of experts in the field

•   Predictions of the Singularity based on extrapolation of historical acceleration (von Foerster – 2026, Panov-Snooks – 2015-2020)

•   Brain emulation roadmap

•   Predictions of when computer power will reach brain equivalence

•   Plans of major companies

 

It is clear that this implementation of the Copernican principle may have many flaws:

1. One possible counterargument here is something akin to Murphy's law, specifically the version which claims that any complex project requires much more time and money than expected before it can be completed. It is not clear how this applies to a field with many competing projects, but AI is known to be more difficult than it seems to researchers.

2. Also, the moment at which I am observing AI research is not really random, unlike in the Doomsday argument, which Gott published in 1993; I could not have applied the method to any time before it became known.

3. The number of researchers is not the same as the number of observers in the original DA. If I were a researcher myself, it would be simpler, but I do not do any actual work on AI.

 

Perhaps this method of future prediction should be tested on simpler tasks. Gott successfully tested his method by predicting the running times of Broadway shows. But now we need something more meaningful, yet testable within a one-year timeframe. Any ideas?

 

 

What we could learn from the frequency of near-misses in the field of global risks (Happy Bassett-Bordne day!)

8 turchin 28 October 2015 06:28PM

I wrote an article on how we could use such data to estimate the cumulative probability of nuclear war up to now.

TL;DR: from other domains we know that the frequency of close calls to actual events is around 100:1. If we apply this ratio to nuclear war, and assume that there were many more near-misses than we know of, we can conclude that the probability of nuclear war was very high, and that we live in an improbable world where it didn't happen.

 

Yesterday, 27 October, was Arkhipov day, in memory of the man who prevented nuclear war. Today, 28 October, is Bordne and Bassett day, in memory of the Americans who prevented another near-war event. Bassett was the man who did most of the work of preventing a launch based on a false attack code, and Bordne made the story public.

The history of the Cold War shows us that there were many occasions when the world stood on the brink of disaster, the most famous of them being the cases of Petrov, Arkhipov and the recently revealed Bordne case in Okinawa.

I know of more than ten, but fewer than a hundred, similar cases of varying degrees of reliability. Other global catastrophic risk near-misses are not nuclear but biological, such as the Ebola epidemic, swine flu, bird flu, AIDS, oncoviruses and the SV-40 vaccine.

The pertinent question is whether we have survived as a result of observational selection, or whether these cases are not statistically significant.

In the Cold War era these types of situations were quite numerous (the Cuban missile crisis, for example). However, in each case it is difficult to say whether the near-miss was actually dangerous. In some cases the probability of disaster was subjective, that is, according to participants it was large, whereas objectively it was small. Other near-misses could pose a real danger without being noticed by operators.

We can define a near-miss of the first type as a case that meets both of the following criteria:

a) safety rules have been violated

b) emergency measures were applied in order to avoid disaster (e.g. emergency braking of a vehicle, or a refusal to launch nuclear missiles)

A near-miss can also be defined as an event which, according to some of its participants, was very dangerous; or as an event during which a number of the factors (but not all) of a possible catastrophe coincided.

Another type of near-miss is the miraculous salvation. This is a situation whereby a disaster was averted by a miracle, that is, it should have happened, but did not because of a happy coincidence of newly emerged circumstances (for example, a bullet stuck in the gun barrel). Obviously, in cases of miraculous salvation the chance of catastrophe was much higher than in near-misses of the first type, on which we will now focus.

We may take the statistics of near-miss cases from other areas where a known correlation between near-misses and actual events exists; for example, we can compare the statistics of near-misses and actual accidents with victims in transport.

Industrial research suggests that one crash corresponds to 50-100 near-miss cases in various areas, and to 10,000 human errors or violations of regulations (“Gains from Getting Near Misses Reported”).

Other surveys estimate the ratio at 1 to 600, at 1 to 300, and even at 1 to 3000 (the latter in the case of unplanned maintenance).

The spread of estimates from 100 to 3000 is due to the fact that we are considering different industries, and different criteria for evaluating a near-miss.

However, since the average ratio of near-misses to actual events is in the hundreds, we cannot conclude that the observed non-occurrence of nuclear war results from observational selection.

On the other hand, we can use the near-miss frequency to estimate the risk of a global catastrophe. We will use the lower estimate of 1 in 100 for the ratio of near-misses to real cases, because the types of phenomena for which the rate of near-misses is very high will dominate the probability landscape. (For example, if an epidemic is catastrophic in 1 of 1000 cases, and for nuclear disasters the ratio is 1 in 100, then near-misses in the nuclear field will dominate.)

During the Cold War there were several dozen near-misses, as well as several near-miss epidemics in the same period. This suggests that at the current level of technology we have about one such case a year, or perhaps more: if we analyze the press, several times a year a situation arises which could lead to a global catastrophe: a threat of war between North and South Korea, an epidemic, the passage of an asteroid, a global crisis. And many near-misses remain classified.
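
As a rough sketch with the assumed numbers above (about one global-risk near-miss per year, and the lower 1:100 ratio of near-misses to real events):

```python
near_miss_ratio = 100       # near-misses per actual event (lower estimate)
near_misses_per_year = 1.0  # assumed rate of global-risk near-misses

p_year = near_misses_per_year / near_miss_ratio  # ~1% chance per year

# Probability of at least one global catastrophe within t years,
# treating the years as independent:
for t in (50, 100):
    print(t, 1.0 - (1.0 - p_year) ** t)
# 50  -> 0.39
# 100 -> 0.63
# The median waiting time, log(0.5)/log(1 - p_year), is about 69 years,
# consistent with the 50-100 year estimate below.
```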

If the average level of safety in regard to global risks does not improve, the frequency of such cases suggests that a global catastrophe could happen in the next 50-100 years, which coincides with estimates obtained by other means.

It is important to increase detailed reporting of such cases in the field of global risks, and to learn how to draw useful conclusions from them. In addition, we need to reduce the rate of near-misses in areas of global risk by rationally and responsibly increasing the overall level of security measures.

A Map of Currently Available Life Extension Methods

11 turchin 17 October 2015 12:10AM

Extremely large payoff from life extension

We live in a special period of time when radical life extension is not far off. We just need to survive until the moment when all the necessary technologies have been created.

The positive scenario suggests this could happen by 2050 (plus or minus 20 years), when humanity will have created an advanced and powerful AI, highly developed nanotechnology and a cure for aging.

Many young people could reach the year 2050 without even doing anything special.  

But for many other people, the opportunity to extend their life by just 10-20 years is the key to achieving radical life extension (of at least a thousand years, perhaps even more), because it will allow them to survive until the creation of strong life extension technologies.

That is why even a slight life extension today means a potentially eternal prize. This map of currently available life extension methods could help with it. The map describes the initial stage of plan A from the “Personal Immortality Roadmap” (where plan B is cryonics, plan C is digital immortality and plan D is quantum immortality).

The brain is most important for life extension

The main idea of this map is that all efforts towards life extension must start from our brain, and in fact, they must finish there too.

First of all, you must have the will to conquer aging and death, and do it using scientific methods.

This is probably the most difficult part of the life extension journey. The vast majority of people simply don't think about life extension, while those who do care about it (usually when it's too late) use weak and non-scientific methods; they simply don't understand that the prize in this game is not ten extra healthy years, but almost eternal life.

Secondly, you need to develop or mobilize in yourself the qualities which are necessary for the simple daily procedures that can almost guarantee life extension by an average of 10-20 years: avoiding smoking and alcohol, daily mobility, daily intake of medicines and dietary supplements.

Most people find it incredibly difficult to perform simple actions on a permanent basis; even taking one pill every day for a year would be too much for most people, not to mention quitting smoking or having regular health check-ups.

A human who has the motivation to extend his life, a proper understanding of how to achieve it, and the necessary skills to realize his plans should be considered almost a superman.

On the other hand, while all of our body systems are affected by aging, damage to the brain during aging plays the biggest role in the total reduction of productivity. Even though our crystallized intelligence increases with age, our fluid intelligence, our memory, and the ability to make radical changes and acquire new skills all decrease significantly with aging.

And these abilities decrease at the very time when they are needed most – to fight the aging process! Young people usually don't care much about aging, because it is beyond their planning horizon. But these qualities are vital in order to build the motivation and skills required to maintain health.

Thus, this leads to the idea of the map, which says that all main efforts to combat aging must be focused on brain aging. If you can keep your brain youthful, it will create and implement new skills to extend your life, helping you to find new information in a sea of new publications and technologies.

If Alzheimer's is the first sign of aging to reach your body, you will be crawling for a tablet of validol without even knowing that it is harmful. And even worse, you will crystallize some harmful beliefs. A person can think that he is a genius in some field, receive approval from others, and continue his journey in the wrong direction – the direction of death. (Of course, early detection of cancer and a healthy heart are really important for extending your life, but it will be too difficult to deal with such problems if your brain is not working properly.)

The second reason to invest in brain health and regeneration is that its state is directly connected, through nervous and hormonal pathways, to the state of many other systems in your body.

In order to preserve your brain health we have to use antidepressants, nootropics and substances which promote its regeneration.

The example of Rita Levi-Montalcini is incredibly interesting (https://en.wikipedia.org/wiki/Rita_Levi-Montalcini). She administered nerve growth factor (NGF) as eye drops and lived for 101 years, while her twin sister died at 91. (Bearing in mind that the average difference in lifespan between twins is six years, we can conclude that she gained about four years.)

Thus, provided that we understand the priority of tasks, life extension now can be pursued through three blocks: lifestyle, medication and the prevention of aging itself.

Collective efforts in life extension

This map doesn't include one really important social aspect of aging prevention. If we could redirect (through crowdfunding) all the money which people now spend on supplements (around $300 billion per year) and use it to perform experiments in the field of life extension instead, we could invent new anti-aging medicines and other life extension tools. These methods and medicines could then be used by those who initially donated money for the experiments, and they could also benefit from sales of the resulting products. Thus, such crowdfunding would include an IPO element too.

You won't find other social aspects in the map such as promotion of the idea of the fight against aging, political activism and art. All of these aspects are mentioned in the main Immortality Roadmap.

The map also doesn't include a temporal aspect. Our knowledge about the best methods of life extension changes almost daily. This map contains ideas which are valid in 2015, but it will require a significant update in just five years. If you aim to extend your life you must constantly analyze the scientific research in this area; many new methods are appearing, e.g. ways of lengthening telomeres and gene therapy. Additionally, the older you are, the riskier the new methods you should be willing to try.

The map of ideas

In fact, the map contains a systemized analysis of ideas which can lead to life extension, not a collection of well-proven tips. Ideally, such a map should contain links to research on all the listed items, as well as an evaluation of their real effects, so any help in improving the map will be welcomed.

This map (like all my other maps) is intended to help you navigate through the world of ideas. In this case it includes life extension ideas.

Moreover, one single idea may become the salvation of a particular person, e.g. eradicating a certain chronic disease. Of course, no single person can implement all of the ideas and suggestions in this map, or indeed in any other list. I'm pretty sure that most people will not be able to implement more than one piece of advice per month – and I'm no exception.

My approach: I drink alcohol on really rare occasions, I don't smoke (though sometimes I use nicotine for nootropic purposes), I sleep a lot, I try to walk at least 4 km every day, I avoid risky activities and I always fasten my seatbelt.

I also invest a lot of effort in preventing my brain from aging and in combating depression. (I will provide a map about depression and nootropics later.)

The pdf of the map is here, and jpg is below.

 

Previous posts with maps:

Simulation map

Digital Immortality Map

Doomsday Argument Map

AGI Safety Solutions Map

A map: AI failures modes and levels

A Roadmap: How to Survive the End of the Universe

A map: Typology of human extinction risks

Roadmap: Plan of Action to Prevent Human Extinction Risks

Immortality Roadmap

 

Future planned maps:

Brute force AIXI-style attack on the Identity problem

Ways of mind-improvement

Fermi paradox map

Ways of depression prevention map

Quantum immortality map

Interpretations of quantum mechanics map

Map of cognitive biases in global risks research

Map of double catastrophes scenarios in global risks

Probability of global catastrophe

Map of unknown unknowns as global risks

Map of reality theories, qualia and God

Map of death levels

Map of resurrections technologies

Map of aging theories

Flowchart «How to build a map»

Map of ideas about artificial explosions in space

Future as Markov chain

 

EDIT: due to a temporary hosting error, check the map here: https://www.scribd.com/doc/286606304/Life-Extension-Map



Simulations Map: what is the most probable type of the simulation in which we live?

5 turchin 11 October 2015 05:10AM

There is a chance that we may be living in a computer simulation created by an AI or a future super-civilization. The goal of the simulations map is to give an overview of all possible kinds of simulations. This will help us to estimate the distribution of possible simulations, along with their measure and probability, and hence to estimate the probability that we are in a simulation and – if we are – the kind of simulation it is and how it could end.

Simulation argument

The simulation map is based on Bostrom’s simulation argument. Bostrom showed that “at least one of the following propositions is true:

(1) the human species is very likely to go extinct before reaching a “posthuman” stage;

(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);

(3) we are almost certainly living in a computer simulation”. http://www.simulation-argument.com/simulation.html

The third proposition is the strongest, because (1) requires that not only human civilization but almost all other technological civilizations go extinct before they can begin simulations (non-human civilizations could model human ones and vice versa). This makes (1) an extremely strong universal conjecture and therefore very unlikely to be true: it requires that all possible civilizations kill themselves before they create AI, and we can hardly even imagine such a universal course of events. If destruction comes from dangerous physical experiments, some civilizations may live in universes with different physics; if it comes from bioweapons, some civilizations would have enough control to prevent them.

In the same way, (2) requires that all super-civilizations with AI will refrain from creating simulations, which is unlikely.

Feasibly there could be some kind of universal physical law against the creation of simulations, but such a law is impossible, because some kinds of simulations already exist, for example human dreams. During dreaming, very precise simulations of the real world are created (which can’t be distinguished from the real world from within – that is why lucid dreams are so rare). So we could conclude that after small genetic manipulations it would be possible to create a brain ten times more capable of creating dreams than an ordinary human brain. Such a brain could be used for the creation of simulations, and strong AI would surely find more effective ways of doing it. So simulations are technically possible (and qualia are no problem for them, as we have qualia in dreams).

Any future strong AI (whether FAI or UFAI) should run at least several million simulations in order to investigate the Fermi paradox and to calculate the probability of the appearance of other AIs on other planets, along with their possible and most typical goal systems. The AI needs this in order to calculate the probability of meeting other AIs in the Universe and the possible consequences of such meetings.

As a result, the a priori estimate of me being in a simulation is very high, possibly 1,000,000 to 1. The best chance of lowering this estimate is to find some flaw in the argument; possible flaws are discussed below.
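
This estimate can be put in simple counting form. A minimal sketch (my own illustration, not taken from Bostrom's paper): if for every real history there are N simulations containing observers like me, and I cannot tell from the inside which kind I am, the probability of being simulated is N/(N+1).

```python
def p_simulated(sims_per_real_history):
    # Fraction of observers like me who live in simulations, assuming
    # each real history is accompanied by N indistinguishable simulations.
    n = sims_per_real_history
    return n / (n + 1.0)

print(p_simulated(1_000_000))  # 0.999999 -> odds of about 1,000,000 : 1
```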

Most abundant classes of simulations

If we live in a simulation, we would like to know what kind of simulation it is. Probably we belong to the most abundant class of simulations, and to find it we need a map of all possible simulations; an attempt to create one is presented here.

There are two main reasons for one class of simulations to dominate: goals and price. Some goals require the creation of a very large number of simulations, so such simulations will dominate; and cheaper, simpler simulations will also be more abundant.

Eitan_Zohar suggested (http://lesswrong.com/r/discussion/lw/mh6/you_are_mostly_a_simulation/) that an FAI would deliberately create an almost infinite number of simulations in order to dominate the total landscape and to ensure that most people find themselves inside FAI-controlled simulations, which would be better for them, as in such simulations unbearable suffering can be excluded. (If an almost infinite number of FAIs exist in an infinite world, each of them alone could not change the landscape of the simulation distribution, because its share of all simulations would be infinitely small. So we would need acausal trade between an infinite number of FAIs to really change the proportion of simulations. I can't say that it is impossible, but it may be difficult.)

Another possibly dominant subset is simulations created for leisure, or for the education of some kind of high-level beings.

The cheapest simulations are simple, low-resolution me-simulations (one real actor, with the rest of the world around him as a backdrop), similar to human dreams. I assume here that simulations follow the same power law as planets, cars and many other things: smaller and cheaper ones are more abundant.

Simulations could also be layered on one another in so-called Matryoshka simulations, where one simulated civilization simulates other civilizations. The lowest level of any Matryoshka system will be the most populated. If it is a Matryoshka of historical simulations, the simulation levels will run in descending time order: for example, a 24th-century civilization models the 23rd century, which in turn models the 22nd century, which itself models a 21st-century simulation. A Matryoshka ends at the level where creation of the next level is impossible. Simulations of the beginning of the 21st century (a period similar to ours) will be the most abundant class in Matryoshka simulations.

Arguments against the simulation argument

There are several possible objections to the simulation argument, but I do not find them strong enough to refute it.

1.    Measure

The idea of measure was introduced to quantify the extent of the existence of something, mainly in quantum universe theories. While we don’t know how to actually measure “the measure”, the idea is based on the intuition that different observers have different powers of existence, and as a result I should find myself as each of them with a different probability. For example, if there are three functional copies of me, one a real person, another a hi-res simulation and the third a low-res simulation, are my chances of being each of them equal (1/3)?

The «measure» concept is the most fragile element of all simulation arguments. It is based mostly on the idea that all copies have equal measure. But perhaps measure also depends on the energy of the calculations. If we have a computer which uses 10 watts of energy to calculate an observer, it may be regarded as two parallel computers using five watts each. These observers may be divided again until we reach the minimum amount of energy required for the calculation, which could be called a «Planck observer». In this case our initial 10-watt computer would be equal to – for example – one billion Planck observers.

And here we see a great difference in the case of simulations, because simulation creators have to spend less energy on the calculations (otherwise it would be easier to run real-world experiments), so such simulations will have a lower measure. If the total number of simulations is large, the total measure of all simulations may still be higher than the measure of real worlds; but if most real worlds end in global catastrophe, the resulting higher proportion of real worlds could outweigh the simulations after all.
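
Here is a toy version of this counting (purely illustrative: the minimum power needed to compute an observer is unknown, and the 10^-8 watt figure below is chosen only to reproduce the “one billion” example above):

```python
def planck_observers(power_watts, min_watts_per_observer):
    # If measure is proportional to the energy spent on a calculation,
    # a computation running on P watts counts as P / p_min minimal
    # ("Planck") observers.
    return power_watts / min_watts_per_observer

print(planck_observers(10.0, 1e-8))  # 1e9 -> one billion Planck observers
```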

2. Universal AI catastrophe

One possible universal global catastrophe: a civilization develops an AI overlord, but any AI meets some kind of unresolvable mathematical or philosophical problem which terminates it at an early stage, before it can create many simulations. See an overview of this type of problem in my map “AI failure levels”.

3. Universal ethics

Another idea is that all AIs converge to some kind of ethics and decision theory which prevents them from creating simulations, or leads them to create only p-zombie simulations. I am skeptical about this.

4. Infinity problems

If everything possible exists, or if the universe is infinite (which are equivalent statements), the proportion between two infinite sets is meaningless. We could overcome this objection using the idea of a mathematical limit: as we take ever bigger regions of the universe and longer periods of time, simulations become more and more abundant within them.

But in any case, in an infinite universe any world exists an infinite number of times, which means that my copies exist in real worlds an infinite number of times, regardless of whether I am in a simulation or not.

5. Non-uniform measure over Universe (actuality)

Contemporary physics is based on the idea that everything that exists, exists in an equal sense: the Sun and very remote stars have the same measure of existence, even in causally separated regions of the universe. But if our region of space-time is somehow more real, this may change the simulation distribution in favor of real worlds.

6. Flux universe

The same copies of me exist in many different real and simulated worlds. In its simple form this means that the notion “I am in one specific world” is meaningless, and the distribution of different interpretations of the world is reflected in the probabilities of different events.

E.g. the higher the chance that I am in a simulation, the bigger the probability that I will experience some kind of miracle during my lifetime. (Many miracles would almost prove that you are in a simulation, like flying in dreams.) But here correlation is not causation.

The stronger version of the same principle implies that I exist in many different worlds, and that I could manipulate the probability of finding myself in a given set of possible worlds, basically by forgetting who I am and thus becoming equal to a larger set of observers. This may work without any new physics; it only requires changing the number of similar observers, and if such observers are Turing-computable programs, they could manipulate their own numbers quite easily.

Higher levels of flux theory do require new physics, or at least quantum mechanics in the many-worlds interpretation, in which different interpretations of the world outside the observer could interact with each other or experience some kind of interference.

See further discussion about a flux universe here: http://lesswrong.com/lw/mgd/the_consequences_of_dust_theory/

7. Boltzmann brains outweigh simulations

It may turn out that BBs outweigh both real worlds and simulations. This may not be a problem from a planning point of view because most BBs correspond to some real copies of me.

But if we take this approach to solving the BB problem, we will have to apply it to the simulation problem as well, meaning: “I am not in a simulation, because for any simulation there exists a real world with the same ‘me’.” This is counterintuitive.

Simulation and global risks

Simulations may be switched off, or may simulate worlds which are near a global catastrophe. Such worlds may be of special interest for a future AI, because they help to model the Fermi paradox, and they make good games.

Miracles in simulations

The map also has blocks about the types of simulation hosts and about multi-level simulations, plus ethics and miracles in simulations.

The main point about being in a simulation is that it disturbs the random distribution of observers. In the real world I would find myself in mediocre situations, but simulations focus on special events and miracles (think of movies, dreams and novels). The more interesting my life is, the smaller the chance that it is real.

If we are in a simulation we should expect more global risks, strange events and miracles, so being in a simulation changes our probability expectations for different occurrences.

This map is parallel to the Doomsday argument map.

The estimates given in the map of the numbers of different types of simulations, or of the required flops, are more like placeholders, and may be several orders of magnitude higher or lower.

I think that this map is rather preliminary and its main conclusions may be updated many times.

The pdf of the map is here, and jpg is below.

Previous posts with maps:

Digital Immortality Map

Doomsday Argument Map

AGI Safety Solutions Map

A map: AI failures modes and levels

A Roadmap: How to Survive the End of the Universe

A map: Typology of human extinction risks

Roadmap: Plan of Action to Prevent Human Extinction Risks

Immortality Roadmap



Digital Immortality Map: How to collect enough information about yourself for future resurrection by AI

6 turchin 02 October 2015 10:21PM

If someone has died, it doesn’t mean that you should stop trying to return him to life. There is one clear thing you can do (after cryonics): collect as much information about the person as possible, store a sample of his DNA, and hope that future AI will return him to life based on this information.

 

Two meanings of “Digital immortality”

The term “Digital immortality” is often confused with the notion of mind uploading, as the end result is almost the same: a simulated brain in a computer. https://en.wikipedia.org/wiki/Digital_immortality

But here, by the term “Digital immortality”, I mean reconstruction of a person by future AI, based on his digital footprint and other traces, after this person's death.

Mind uploading, in contrast, will happen in the future while the original is still alive (or while the brain exists in a frozen state): the brain will be connected to a computer by some kind of sophisticated interface, or it will be scanned. It cannot be done currently.

On the other hand, reconstruction based on traces will be done by future AI, so we just need to leave enough traces – and that we can do now.

But we don’t know how many traces are enough, so basically we should try to produce and preserve as many traces as possible. However, not all traces are equal in their predictive value. Some are almost random, while others are so common that they do not provide any new information about the person.

 

Cheapest way to immortality

Creating traces is an affordable way of reaching immortality. It could even be done for another person after his death, if we start to collect all possible information about him. 

Basically, I am surprised that people don’t do it all the time. It could be done in a simple form almost for free and in the background – just start a video recording app on your notebook and record everything into a shared folder connected to a free cloud service. (The Evocam program for Mac is excellent, and mail.ru provides up to 100 GB free.)

But really good digital immortality requires a 2-3 month commitment to self-description, with regular yearly updates. It may also require an investment of at most several thousand dollars in durable disks, DNA testing and video recorders, plus the free time to do it.

I understand how to set up this process and could help anyone interested.

 

Identity

The idea of personal identity is outside the scope of this map; I have another map on this topic (now in draft). I assume that the problem of personal identity will be solved in the future. Perhaps we will prove that information alone is enough to solve it, or we will find that continuity of consciousness is also required, but that we can construct mechanisms to transfer this identity independently of the information.

Digital immortality requires only a very weak notion of identity, i.e. that a model of behavior and thought processes is enough for identity. This model may differ somewhat from the original; I call this the “one night difference”, that is, the typical difference between me-yesterday and me-today after one night's sleep. The meaningful part of this information has a size of several megabytes to gigabits, but we may need to collect much more information, as we cannot currently extract the meaningful part from the random.

DI may also be based on an even weaker notion of identity: that anyone who thinks that he is me, is me. Weaker notions of identity require less information to be preserved; in the last case it may be around 10K bytes (including name, indexical information and a description of basic traits).

But the question of the number of traces needed to create an almost exact model of a personality is still open. It also depends on the predictive power of the future AI: the stronger the AI, the fewer traces are needed.

Digital immortality is plan C in my Immortality Roadmap, where Plan A is life extension and Plan B is cryonics; it is not plan A, because it requires solving the identity problem plus the existence of powerful future AI.

 

Self-description

I created my first version of it in 1990, when I was 16, immediately after I had finished school. It included association tables, drawings and lists of all the people known to me, as well as some art, memoirs, audio recordings and an encyclopedia of the everyday objects around me.

There are several approaches to achieving digital immortality. The most popular one is passive: simply video recording everything you do.

My idea was that a person can actively describe himself from the inside. He may find and declare the most important facts about himself. He may run specific tests that reveal hidden levels of his mind and subconscious. He can write a diary and memoirs. That is why I called my digital immortality project “self-description”.

 

Structure of the map

This map consists of two parts: theoretical and practical. The theoretical part lists basic assumptions and several possible approaches to reconstructing an individual, in which he is considered as a black box. If real neuron activity becomes observable, the "box" will become transparent and real uploading will be possible.

There are several steps in the practical part:

- The first step includes all the methods of recording information while the person of interest is alive.

- The second step is about preservation of the information.

- The third step is about what should be done to improve and promote the process.

- The final, fourth step concerns the reconstruction of the individual, which will be performed by AI after his death. In fact it may happen soon, maybe within the next 20-50 years.

There are several unknowns in DI, including the identity problem, the size and type of information required to create an exact model of a person, and the power required of a future AI to operate the process. These and other problems are listed in the box in the right corner of the map.

The pdf of the map is here, and jpg is below.

 

Previous posts with maps:

Doomsday Argument Map

AGI Safety Solutions Map

A map: AI failures modes and levels

A Roadmap: How to Survive the End of the Universe

A map: Typology of human extinction risks

Roadmap: Plan of Action to Prevent Human Extinction Risks

Immortality Roadmap

Doomsday Argument Map

6 turchin 14 September 2015 03:04PM

The Doomsday argument (DA) is the controversial idea that humanity has a higher probability of extinction than we would otherwise expect, based purely on probabilistic reasoning. The DA rests on the proposition that I am most likely to find myself somewhere in the middle of humanity's time in existence, and not in its earliest part, as would be expected if humanity were to exist on Earth for a very long time.

There have been many different definitions of the DA and methods of calculating it, as well as rebuttals. As a result we have a complex group of ideas, and the goal of the map is to bring some order to them. The map consists of various authors' ideas. I don't think I have caught all the existing ideas, and the map could be improved significantly, but some feedback is needed at this stage.

The map has the following structure: the horizontal axis consists of various sampling methods (notably SIA and SSA), and the vertical axis has various approaches to the DA, mostly Gott's (unconditional) and Carter's (updating the probability of an existing risk). But many important ideas don't fit this scheme precisely, and these have been added on the right-hand side.

The lower rows show the link between the DA and similar arguments, namely the Fermi paradox, the Simulation argument and the Anthropic shadow, which is a change in the probability assessment of natural catastrophes based on observer selection effects.

On the right side of the map, different ways of rebutting the DA are listed, along with a vertical row of possible positive solutions.

I think that the DA is mostly true but may not mean inevitable extinction.

Several interesting ideas may need additional clarification; they will also shed light on the basis of my position on the DA.

Meta-DA

The first of these ideas is that the most reasonable version of the DA at our current stage of knowledge is what may be called the meta-DA, which expresses our uncertainty about the correctness of any DA-style theory, together with our worry that the DA may indeed be true.

The meta-DA is a Bayesian superstructure built upon the field of DA theories. It tells us that we should attribute non-zero Bayesian probability to one or several DA-theories (at least until they are disproved in a generally accepted way), and since the DA itself is a probabilistic argument, these probabilities should be combined.

As a result, the meta-DA implies an increase in total existential risk until we disprove (or prove) all versions of the DA, which may not be easy. We should treat this increase in risk as a demand for more precaution, but not in a fatalistic “doom is imminent” way.

Reference class

The second idea concerns the so-called problem of the reference class, that is, the question of which class of observers I belong to in light of the DA. Am I randomly chosen from all animals, humans, scientists or observer-moments?

The proposed solution is that the DA is true for any reference class from which I am randomly chosen, but the definition of the reference class also defines what its end will look like, and this need not be a global catastrophe. In short, each reference class has its own kind of end. For example, if I am randomly chosen from the class of all humans, the end of the class may mean not extinction, but the beginning of the class of superhumans.

But any suitable candidate for a DA reference class must make my position within it truly random. So I can't be a random example of the class of mammals, because I am able to think about the DA and a zebra can't.

As a result, the most natural reference class (i.e. one providing a truly random distribution of observers) is the class of observers who know about and can think about the DA. The ability to understand the DA is the real difference between conscious and unconscious observers.

But this class is small and young. It started in 1983 with the works of Carter and now includes perhaps several thousand observers. If I am in the middle of it, there will be just several thousand more DA-aware observers, and only several decades before the class ends (which, unpleasantly, coincides with the expected “Singularity” and other x-risks). (This idea was clear to Carter and is also used in the so-called Self-referencing doomsday argument rebuttal: https://en.wikipedia.org/wiki/Self-referencing_doomsday_argument_rebuttal)

This does not necessarily mean the end in a global catastrophe; it may instead mean that a DA rebuttal will soon be found. (And we could probably choose how the DA prophecy is fulfilled by manipulating the number of observers in the reference class.)

DA and median life expectancy

The DA is not as unnatural a way of looking into the future as it seems. A more natural way to understand the DA is as an instrument for estimating the median life expectancy within a certain group.

For example, I can estimate median human life expectancy based on your age. If you are X years old, median human life expectancy is around 2X. “Around” here is a very vague term, more like an order of magnitude: if you are 25 years old, I could conclude that median human life expectancy is several decades (not 10 milliseconds, and not 1 million years), and independently I know this is true. And since the median life expectancy also applies to the person in question, it means that he will also most probably live about that long (if we do not do something serious about life extension). So there is no magic or inevitable fate in the DA.

But if we apply the same logic to the existence of civilization, counting only civilization capable of self-destruction, i.e. roughly from 1945, so 70 years old, it gives a median life expectancy for technological civilizations of around 140 years, which is extremely short compared to our expectation that we may exist for millions of years and colonize the Galaxy.
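
Both of these estimates are instances of the same one-line rule, sketched here with the ages used above:

```python
def median_total_lifetime(current_age):
    # Gott's rule at 50% confidence: we are equally likely to be in the
    # first or the second half, so the median total lifetime is twice
    # the current age.
    return 2 * current_age

print(median_total_lifetime(25))  # 50 - an order-of-magnitude estimate only
print(median_total_lifetime(70))  # 140 years for a post-1945 civilization
```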

Anthropic shadow and fragility of our environment

At its core is the idea that, as a result of observer selection, we are more likely to find ourselves in a world which is in a meta-stable condition on the border of existential catastrophe, because in most worlds some catastrophe is long overdue. (Also because universal human minds may require constantly changing natural conditions in order to make useful adaptations, which implies an unstable climate – and we do live in a period of ice ages.)

In such a world, even small human actions could result in global catastrophe, like piercing an over-pressurized balloon with a needle.

The most plausible candidates for such metastable conditions are processes that must have happened long ago in most worlds, while we can only find ourselves in a world where they have not. For the Earth this may be a sudden change of the atmosphere to a Venusian subtype (runaway global warming). This means that small human actions could have a much stronger effect on atmospheric stability (probably because the largest accumulation of methane hydrates in the Earth's history resides on the Arctic Ocean floor and is capable of a sudden release: see https://en.wikipedia.org/wiki/Clathrate_gun_hypothesis). Another option for meta-stability is provoking a strong supervolcano eruption via some kind of crust penetration (see “Geoengineering gone awry”: http://arxiv.org/abs/physics/0308058).

Thermodynamic version of the DA 

A version of the DA probably unknown to Western readers is the thermodynamic one suggested in the Strugatskys' novel “Definitely Maybe” (originally titled “A Billion Years Before the End of the World”). It suggests that we live in a thermodynamic fluctuation, and since smaller and simpler fluctuations are more probable, there should be a force acting against complexity, AI development, and our existence in general. The plot of the novel revolves around a seemingly magical force which distracts the best scientists from their work using girls, money or crime. After a long investigation they find that it is an impersonal force acting against complexity.

This map is a sub-map of the planned map “Probability of global catastrophe”, and its parallel maps are the “Simulation argument map” and the “Fermi paradox map” (both in early drafts).

PDF of the map: http://immortality-roadmap.com/DAmap.pdf 

 

Previous posts with maps:

AGI Safety Solutions Map

A map: AI failures modes and levels

A Roadmap: How to Survive the End of the Universe

A map: Typology of human extinction risks

Roadmap: Plan of Action to Prevent Human Extinction Risks

Immortality Roadmap

Immortality Roadmap

9 turchin 28 July 2015 09:27PM

Added: direct link to the pdf: http://immortality-roadmap.com/IMMORTEN.pdf

 

A lot of people value indefinite life extension, but most have their own preferred method of achieving it. The goal of this map is to present all known ways of radical life extension in an orderly and useful way.

A rational person could choose to implement all of these plans or to concentrate only on one of them, depending on his available resources, age and situation. Such actions may be personal or social; both are necessary.

The roadmap consists of several plans; each of them acts as insurance in the case of failure of the previous plan. (The roadmap has a similar structure to the "Plan of action to prevent human extinction risks".) The first two plans contain two rows, one of which represents personal actions or medical procedures, and the other represents any collective activity required.

Plan A. The most obvious way to reach immortality is to survive until the creation of Friendly AI; in that case if you are young enough and optimistic enough, you can simply do nothing – or just fund MIRI. However, if you are older, you have to jump from one method of life extension to the next as they become available. So plan A is a relay race of life extension methods, until the problem of death is solved.

This plan includes actions to defeat aging, to grow and replace diseased organs with new bioengineered ones, to get a nanotech body and in the end to be scanned into a computer. It is an optimized sequence of events, and depends on two things – your personal actions (such as regular medical checkups), and collective actions such as civil activism and scientific research funding.

Plan B. However, if Plan A fails, i.e. if you die before the creation of superintelligence, there is Plan B, which is cryonics. Some simple steps can be taken now, such as calling your nearest cryocompany about a contract.

Plan C. Unfortunately, cryonics could also fail, and in that case Plan C is invoked. Of course it is much worse – less reliable and less proven. Plan C is so-called digital immortality, where one could be returned to life based on existing recorded information about that person. It is not a particularly good plan, because we are not sure how to solve the identity problem which will arise, and we don’t know if the collected amount of information would be enough. But it is still better than nothing.

Plan D. Lastly, if Plan C fails, we have Plan D. It is not a plan in fact, it is just hope or a bet that immortality already exists somehow: perhaps there is quantum immortality, or perhaps future AI will bring us back to life.

The first three plans demand particular actions now: we need to prepare for all of them simultaneously. All of the plans lead to the same result: our minds will be uploaded into a computer with the help of highly developed AI.

The plans can also help each other. Digital immortality data may help to fill gaps in the memory of a cryopreserved person. Cryonics also raises the chances that quantum immortality will result in something useful: you have more chance of being cryopreserved and successfully revived than of living naturally until you are 120 years old.

After you have become immortal with the help of Friendly AI you might exist until the end of the Universe or even beyond – see my map “How to prevent the end of the Universe”.

A map of currently available methods of life extension is a sub-map of this one and will be published later.

The map was made in collaboration with Maria Konovalenko and Michael Batin, and an earlier version was presented in August 2014 at Aubrey de Grey's Rejuvenation Biotechnology conference.

Pdf of the map is here

Previous posts:

AGI Safety Solutions Map

A map: AI failures modes and levels

A Roadmap: How to Survive the End of the Universe

A map: Typology of human extinction risks

Roadmap: Plan of Action to Prevent Human Extinction Risks

AGI Safety Solutions Map

10 turchin 21 July 2015 02:41PM

When I started to work on the map of AI safety solutions, I wanted to illustrate the excellent article “Responses to Catastrophic AGI Risk: A Survey” by Kaj Sotala and Roman V. Yampolskiy, 2013, which I strongly recommend.

However, during the process I had a number of ideas to expand the classification of the proposed ways to create safe AI. In their article there are three main categories: social constraints, external constraints and internal constraints.

I added three more categories: "AI is used to create a safe AI", "Multi-level solutions" and "meta-level", which describes the general requirements for any AI safety theory.

In addition, I divided the solutions into simple and complex. Simple ones are those whose recipe we know today, for example: “do not create any AI”. Most of these solutions are weak, but they are easy to implement.

Complex solutions require extensive research and the creation of complex mathematical models for their implementation, and could potentially be much stronger. But the odds are lower that there will be time to realize and implement them successfully.

After the aforementioned article, several new ideas about AI safety appeared.

These new ideas in the map are based primarily on the works of Ben Goertzel, Stuart Armstrong and Paul Christiano. But probably many more exist and have been published, without coming to my attention.

Moreover, I have some ideas of my own about how to create a safe AI and I have added them into the map too. Among them I would like to point out the following ideas:

1.     Restriction of self-improvement of the AI. Just as a nuclear reactor is controlled by regulating the intensity of the chain reaction, one may try to control an AI by limiting its ability to self-improve in various ways.

2.     Capture the beginning of dangerous self-improvement. When a potentially dangerous AI starts up, it has a moment of critical vulnerability, just as a ballistic missile is most vulnerable at launch. Imagine that an AI gains an unauthorized malignant goal system and starts to strengthen itself. At the beginning of this process it is still weak, and if it is below the level of human intelligence at this point, it may still be more stupid than the average human even after several cycles of self-improvement. Let's say it has an IQ of 50, which rises to 90 after self-improvement. At this level it is already committing violations that can be observed from the outside (especially unauthorized self-improvement), but it does not yet have the ability to hide them. At this point in time, you can turn it off. Alas, this idea will not work in all cases, as some objectives may become hazardous gradually as their scale grows (1000 paperclips are safe, one billion are dangerous, 10^20 are an x-risk). This idea was put forward by Ben Goertzel.

3.     AI constitution. First, in order to describe Friendly AI and human values, we can use the existing body of criminal and other laws. (And if we create an AI that does not comply with criminal law, we are committing a crime ourselves.) Second, to describe the rules governing the conduct of the AI, we can create a complex set of rules (laws much more complex than Asimov's three laws) which will include everything we want from the AI. This set of rules can be checked in advance by a specialized AI, which calculates only the ways in which the application of these rules could go wrong (something like mathematical proofs based on these rules).

4.     "Philosophical landmines." In the map of AI failure levels I listed a number of ways in which a high-level AI may halt when faced with intractable mathematical tasks or complex philosophical problems. One may try to fight a high-level AI using "landmines", that is, putting it in a situation where it has to solve some problem, but within this problem are encoded more complex problems, the solving of which will cause it to halt or crash. Such problems may include Gödelian mathematical problems, the nihilistic rejection of any goal system, or the AI's inability to prove that it actually exists.

5. Multi-layer protection. The idea here is not that if we apply several methods at the same time the likelihood of their success adds up; that would not work if all the methods are weak. The idea is that the methods of protection work together to protect the object from all sides. In a sense, human society works the same way: a child is educated by example as well as by rules of conduct; then he begins to understand the importance of compliance with these rules; but at the same time the law, the police and his neighbours are watching him, so he knows that criminal acts will put him in jail. As a result, lawful behaviour is a goal he finds rational to pursue. This idea can be reflected in a specific architecture of AI: at its core is a set of immutable rules; around it is built a human emulation which makes high-level decisions; and complex tasks are delegated to narrow Tool AIs. In addition, an independent emulation (a conscience) checks the ethics of its decisions. Decisions are first tested in a multi-level virtual reality, and the ability of the whole system to self-improve is significantly limited. That is, it will have an IQ of 300, but not of a million. This will make it effective in solving aging and global risks, but it will also be predictable and understandable to us. The scope of its jurisdiction should be limited to a few important tasks: prevention of global risks, prevention of death, and prevention of war and violence. But we should not trust it with such an ethically delicate topic as the prevention of suffering, which should be addressed with the help of conventional methods.

This map could be useful for the following applications:

1. As illustrative material in discussions. People often propose ad hoc solutions once they learn about the problem of friendly AI, or focus on one favourite solution.

2. As a quick way to check whether a new solution really has been found.

3. As a tool to discover new solutions. Any systematisation creates "free cells" which one can try to fill with new solutions. One can also combine existing solutions or be inspired by them.

4. There are several new ideas in the map.

A companion to this map is the map of AI failure levels. In addition, this map is subordinate to the map of global risk prevention methods, corresponding to the block "Creating Friendly AI" (Plan A2) within it.

The pdf of the map is here: http://immortality-roadmap.com/aisafety.pdf

 

Previous posts:

A map: AI failures modes and levels

A Roadmap: How to Survive the End of the Universe

A map: Typology of human extinction risks

Roadmap: Plan of Action to Prevent Human Extinction Risks

I have just donated $10,000 to the Immortality Bus, which was the most rational decision of my life

0 turchin 18 July 2015 01:13PM

I have a non-zero probability of dying next year. At my age of 42 it is not less than 1 per cent, and probably more. I could make many investments which would slightly lower my chance of dying – from a healthy lifestyle to a cryonics contract. And I have made many of them.

From an economic point of view, death means at least losing all your capital.

If my net worth is something like one million dollars (mostly real estate and art), and I have a 1 per cent chance of dying, that is equal to losing 10k a year in expectation. In fact it is more, because death itself is so unpleasant that it has a large negative monetary value of its own, and I should also include the cost of lost opportunities.
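
As a one-line sketch of this expected-loss arithmetic (with the rough numbers above):

```python
net_worth = 1_000_000  # rough net worth in dollars
p_death = 0.01         # assumed annual probability of dying at age 42
print(p_death * net_worth)  # 10000.0 -> about 10k a year of expected loss
```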

Once I had a discussion with Vladimir Nesov about which is better: to fight for immortality, or to create a Friendly AI which will explain what is really good. My position was that immortality is better because it is measurable and knowable, has instrumental value for most other goals, and also includes the prevention of the worst thing on Earth, which is death. Nesov said (as I remember) that personal immortality does not matter as much as the total value of humanity's existence, and moreover that his personal existence has not much value at all; all we need to do is to create Friendly AI. I find his words contradictory, because if his existence does not matter, then any human existence also doesn't matter, since there is nothing special about him.

But later I concluded that the best option is to make bets that raise the probability of my personal immortality, the prevention of existential risks, and the creation of Friendly AI simultaneously. It is easy to imagine a situation where research into personal immortality, such as creating a technology for the delivery of longevity genes, contradicts the goal of existential risk reduction, because the same technology could be used for creating dangerous viruses.

The best way here is to invest in creating a regulating authority which will be able to balance these needs; and it can't be Friendly AI, because such regulation is needed before it is created.

That is why I think that the US needs a Transhumanist president: a real person whose value system I can understand and support. And that is why I support Zoltan Istvan's 2016 campaign.

The Exponential Technologies Institute and I donated 10,000 USD to the Immortality Bus project. This bus will be the start of the presidential campaign of the author of “The Transhumanist Wager”. Seven film crews have agreed to cover the event. It will create high publicity and cover the topics of immortality, aging research, Friendly AI and x-risk prevention, and it will help to raise more funds for this type of research.

 
