There are two views among transhumanists and rationalists on the best strategy: the first holds that one must invest in life extension technologies, and the second, that it is necessary to create an aligned AI that will solve all problems, including giving us immortality or even something better. In our article, we show that these two points of view do not contradict each other, because the development of AI will be the main driver of increased life expectancy in the coming years, and as a result even currently living people can benefit from (and contribute to) the future superintelligence in several ways.

Firstly, the use of machine learning and narrow AI will allow the study of aging biomarkers and combinations of geroprotectors, and this will produce an increase in life expectancy of several years, which means that tens of millions of people will live long enough to survive until the creation of superintelligence (whenever it happens) and will be saved from death. In other words, the current application of narrow AI to life extension gives us a chance to reach “longevity escape velocity”, and the rapid growth of AI will be the main factor that, like the wind, helps to increase this velocity.
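As a rough illustration of this first point (not taken from the paper, and using purely synthetic data), the kind of narrow-AI pipeline meant here could look like the sketch below: a standard regression model is fitted to a biomarker panel to estimate biological age, and a candidate geroprotector combination is then scored by how much it shifts the prediction. The biomarker shift and all numbers are hypothetical placeholders.

```python
# Minimal sketch, assuming synthetic biomarker data; not a pipeline from the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_biomarkers = 1000, 20
biomarkers = rng.normal(size=(n_people, n_biomarkers))   # e.g. a blood-panel matrix (synthetic)
bio_age = 50 + biomarkers @ rng.normal(size=n_biomarkers) + rng.normal(scale=2.0, size=n_people)

X_train, X_test, y_train, y_test = train_test_split(biomarkers, bio_age, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))

# A candidate geroprotector combination could then be scored by the predicted
# drop in biological age after the biomarkers it affects are shifted (hypothetical effect).
shifted = X_test.copy()
shifted[:, :3] -= 0.5
print("Mean predicted change:", (model.predict(shifted) - model.predict(X_test)).mean())
```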

Secondly, we can—here in the present—utilize some possibilities of the future superintelligence by collecting data for “digital immortality”. Based on these data, the future AI could reconstruct an exact model of our personality and also solve the identity problem. At the same time, the collection of medical data about the body helps both now—as it can train machine learning systems to predict diseases—and in the future, when it becomes part of digital immortality. By signing up for cryonics, we can also tap into the power of the future superintelligence, since without it a successful reading of information from a frozen brain is impossible.
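In the same spirit, here is a minimal sketch of how medical data collected today could already train a disease-prediction model; the records, the disease label, and the logistic model below are all hypothetical placeholders, not a system described in the paper.

```python
# Minimal sketch, assuming synthetic personal health records; not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_records, n_features = 500, 12
records = rng.normal(size=(n_records, n_features))          # e.g. labs, vitals, lifestyle data (synthetic)
risk = 1 / (1 + np.exp(-(records[:, 0] + 0.5 * records[:, 1])))
disease = rng.binomial(1, risk)                             # hypothetical disease-onset label

clf = LogisticRegression(max_iter=1000)
print("Cross-validated AUC:", cross_val_score(clf, records, disease, scoring="roc_auc").mean())
```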

Thirdly, there are some grounds for assuming that medical AI will be safer. It is clear that fooming can occur with any AI, but the development of medical AI will accelerate the development of brain-computer interfaces (BCIs) such as Neuralink, and this will increase the chance that AI appears not separately from humans but as a product of integration with a person. As a result, a human mind will remain part of the AI and will direct its goal function from within. This is also Elon Musk’s vision, and he wants to commercialize Neuralink through the treatment of diseases. In addition, if we assume that the principle of orthogonality may have exceptions, then a medical AI aimed at curing humans will be more likely to have benevolence as its terminal goal.

As a result, by developing AI for life extension, we make AI safer and increase the number of people who will survive until the creation of superintelligence. Thus, there is no contradiction between the two main approaches to improving human life via new technologies.

Moreover, radical life extension with the help of AI requires concrete steps right now: collecting data for digital immortality, joining patient organizations that fight aging, and participating in clinical trials of geroprotector combinations and computer analysis of biomarkers. We see our article as a motivational pitch that will encourage the reader to fight for personal and global radical life extension.

To substantiate these conclusions, we conducted an extensive analysis of existing start-ups and research directions in the field of AI applications for life extension, and we identified the beginnings of many of these trends, already reflected in the business plans of specific companies.

Michael Batin, Alexey Turchin, Markov Sergey, Alice Zhila, David Denkenberger

“Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence”

Informatica 41 (2017) 401–417: http://www.informatica.si/index.php/informatica/article/view/1797

[anonymous]:

This is obviously the one realistic way for most of us to see the far-ish future. I have put extensive thought into this myself, looking at the more nearly practical methods that exist now, and I do think even the relatively crude 'machine learning' methods in use today could lead to massively better outcomes.

Think about the problem a little: if you could toss every scientific paper into a machine of arbitrary intelligence (not so much intelligence that it can simulate the whole universe particle by particle; it's just super-smart, not a deity) and asked it to cure someone's cancer, it probably couldn't do it. There are too many holes in the data, too many errors, contradictory findings, and too much vague language in human-published papers. The machine would have no adequate model of the relationship between actions taken by its waldos and outcomes.

I actually think that to get to a reliable "cure" for what ails us humans, you would unfortunately have to start over from first principles. You would probably need on the order of millions of waldos: basic bioscience test cells, plus a vast library of samples and automated synthetic-compound labs. What the machine would be doing is building a model by recreating all of the basic science in a more rigorous manner than humans ever did.

Want to know what happens when you mix peptide <A23> with ribosome enzyme <P40>? Predict the outcome with your existing model. Test it. Analyze the bindings.
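As a minimal sketch of that predict-test-analyze loop (all of it assumed, none of it a real lab system): `run_assay` stands in for the physical experiment performed by a waldo, compound pairs are encoded as plain numeric vectors, a Gaussian process plays the role of the existing model, and each next experiment is chosen where the model is least certain.

```python
# Hypothetical active-learning loop: predict, test where uncertainty is highest, refit.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(2)

def run_assay(pair_features):
    """Placeholder for the physical experiment performed by a waldo."""
    return float(pair_features.sum() + rng.normal(scale=0.1))

candidate_pairs = rng.normal(size=(200, 5))   # made-up numeric encodings of peptide/enzyme pairs
tested_X, tested_y = [], []
model = GaussianProcessRegressor()

for step in range(10):
    if tested_X:
        model.fit(np.array(tested_X), np.array(tested_y))
        _, std = model.predict(candidate_pairs, return_std=True)
        idx = int(np.argmax(std))             # test where the model is least certain
    else:
        idx = 0                               # no data yet: start anywhere
    result = run_assay(candidate_pairs[idx])  # run the experiment, log the outcome
    tested_X.append(candidate_pairs[idx])
    tested_y.append(result)

print(f"Ran {len(tested_y)} assays; last measured outcome: {tested_y[-1]:.2f}")
```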

One advantage of using robotic waldos instead of human lab techs is reliable replication. You would have an exact log of every step taken, and the sensor data from the robot as the step is completed. Replication is a matter of replaying the same sequence with the same robot type in a different lab.
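A tiny, purely hypothetical log-and-replay structure makes this concrete; nothing here is a real robot API, it just shows that replication reduces to re-executing a recorded action sequence and comparing sensor readouts.

```python
# Hypothetical experiment log for robotic replication; the "robot" is a stand-in lambda.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    action: str             # e.g. "dispense 10 uL of compound A into well B3"
    sensor_readout: float   # what the original robot measured after the action

def replay(log: List[Step], execute: Callable[[str], float]) -> List[float]:
    """Re-run every logged action on another robot and return its readouts."""
    return [execute(step.action) for step in log]

# Toy usage: replication reduces to comparing original vs. replayed readouts.
original = [Step("mix A with B", 0.42), Step("incubate 3 h", 0.40)]
replayed = replay(original, execute=lambda action: 0.41)  # dummy stand-in robot
print(list(zip([s.sensor_readout for s in original], replayed)))
```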

The basic idea is that you build up a model of higher-level outcomes primarily from your data on lower-level experiments. This is ultimately how you reach a level where you can build a custom molecule or peptide to address a specific problem. With your model, you can predict the probable outcomes of that peptide interacting with the patient, which reduces how many candidates you have to try.

You would also need a vast array of testing apparatus. Not lab animals, but synthetic organs 3D-printed from human cells: entire mockup human bodies (everything but the brain) made of separate organs in separate life-support containers. When you have an unexpected interaction or problem, you isolate it with binary searches. If a drug causes liver problems, you don't shrug your shoulders; you send a sample of the liver cells and the drug back to the basic-science wing. You then repeatedly subdivide the liver proteins in half until you discover the exact molecule the drug interferes with, and the exact binding site, and this informs your search for future drugs.
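That isolation step is literally a binary search; a rough sketch (all names hypothetical, with `interacts()` standing in for an automated assay and a single responsible protein assumed) could look like this:

```python
# Hypothetical binary-search isolation of the protein a drug interferes with.
def isolate_target(proteins, interacts):
    """Narrow a list of candidate proteins down to the one the drug binds.

    `interacts(subset)` is a stand-in for an automated assay that reports
    whether the adverse interaction occurs when only `subset` is present.
    Assumes a single responsible protein, so one half always tests positive.
    """
    while len(proteins) > 1:
        half = len(proteins) // 2
        left = proteins[:half]
        proteins = left if interacts(left) else proteins[half:]
    return proteins[0]

# Toy usage: pretend protein "P17" is the true binding target.
candidates = [f"P{i}" for i in range(1, 33)]
print(isolate_target(candidates, interacts=lambda subset: "P17" in subset))
```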

In summary, the advantages, even with mere existing techniques, include:

a. Ability to do mass-scale experimentation.

b. You do not need data to meet some arbitrary threshold of significance to use it. If there is a tiny relationship between 2 variables, a floating-point neural weight can hold it, even if the relationship is very small.

c. Replicability of experiments.

d. A model of biology more complex than any human mind can hold.

e. Ability to use advanced planners, similar to what is demonstrated in AlphaGo, to make intelligent guesses for drug discovery.

f. Ability to design a drug just in time to help a specific patient.

I want to elaborate on (f) a little. We should be weighing the risks and rewards properly: if a patient is actively dying, the closer they are to their predicted death, the more risk we should take in an attempt to save them. The closer death is, the more dramatic the intervention. Not only will this periodically save people's lives, but it allows for rapid progress. Suppose a person is dying from an infection, and the AI agent recommends a new antiviral molecule that works against a sample of the person's infection in the lab. The molecule gets rapidly synthesized automatically and delivered by an injection robot within hours of development. And then the patient suddenly develops severe liver failure and dies.

The NEXT time this happens, the agent knows the liver is a problem. It made copies of the person's liver from their corpse (and of course froze their brain), investigated the failure, and found a variant of the molecule that works. So the next patient is treated successfully.
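To make the risk-reward weighting in (f) concrete, here is one very rough way it could be encoded (a toy decision rule with made-up numbers, nothing rigorous): accept a risky intervention when the expected years gained exceed the expected years lost, which automatically becomes more permissive as the patient's predicted remaining lifespan shrinks.

```python
# Hypothetical decision rule: risk tolerance scales with how little time is left to lose.
def accept_intervention(p_success: float,
                        years_gained: float,
                        p_fatal_side_effect: float,
                        predicted_years_left: float) -> bool:
    """Accept when expected years gained exceed expected years lost."""
    expected_gain = p_success * years_gained
    expected_loss = p_fatal_side_effect * predicted_years_left
    return expected_gain > expected_loss

# A dying patient (0.1 years left) accepts a risky drug; a healthy one does not.
print(accept_intervention(0.3, 10, 0.2, predicted_years_left=0.1))   # True
print(accept_intervention(0.3, 10, 0.2, predicted_years_left=40.0))  # False
```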

Thanks for sharing your vision. I will be working on a chapter of a book on the same topic, and I hope to incorporate your ideas in it, if you don't mind, with proper attribution.

You seem to imply that medical AI would be a safer domain for AI development investment than other domains, yet you offer few details about this. I know you say in the paper that it's outside the scope of this work, but can you give a summary or an outline of your thoughts here? Right now it reads to me, rather unconvincingly, as though it will somehow be safer because humans will be trying to use it for human-valued purposes, but this is contrary to the orthogonality thesis, so it implies you have some reason to think the orthogonality thesis is, if not wrong, at least weaker than, say, MIRI presents it to be.

Yes, the editors asked me to cut my thoughts about domain-specific effects in AI safety, as they would be off-topic in an already large paper. They also suggested that, from the point of view of "fooming", military AI will be safer than medical AI, as militaries invest more in the control of their systems than startups or biohackers do, and this seems to be true if the fooming and orthogonality theses hold.

However, domain specificity may affect the orthogonality thesis in the following way. If an agent has an instrument X, it may try to solve all of its tasks by using this instrument. For example, to take a human analogy, a businessman may pay for everything he wants, while a hitman will try to use violence to get everything he wants. So there will be a correlation between available goals and available instruments.

In the case of a (stupid) AI, if you ask a military AI to solve the cancer problem, it will make a nuclear strike on all continents, but if you ask a medical AI, it will create a cure for cancer. Whether this will hold for superintelligence is not currently known.

Another reason why medical AI will be safer is that it will be more integrated with the human mind from the start, as it will be based around brain-computer interfaces, human uploads, or various augmentation techniques. Because of this, it will be closer to human thinking processes (so there will be no common-sense misunderstandings) and perhaps even to human values, if humans are at its core, as could happen if medical AI produces a Hansonian world of ems, and superintelligence appears as an augmentation of uploads or as a result of the collective behaviour of many augmented uploads. I am going to explore these hypothetical scenarios in another paper, for which I have an early draft.

Great! I look forward to hearing about such ideas in more detail.