Any scenario where advanced AI takes over the world requires some mechanism for an AI to leverage its position as ethereal resident of a computer somewhere into command over a lot of physical resources.
One classic story of how this could happen, from Eliezer:
- Crack the protein folding problem, to the extent of being able to generate DNA strings whose folded peptide sequences fill specific functional roles in a complex chemical interaction.
- Email sets of DNA strings to one or more online laboratories which offer DNA synthesis, peptide sequencing, and FedEx delivery. (Many labs currently offer this service, and some boast of 72-hour turnaround times.)
- Find at least one human connected to the Internet who can be paid, blackmailed, or fooled by the right background story, into receiving FedExed vials and mixing them in a specified environment.
- The synthesized proteins form a very primitive “wet” nanosystem which, ribosomelike, is capable of accepting external instructions; perhaps patterned acoustic vibrations delivered by a speaker attached to the beaker.
- Use the extremely primitive nanosystem to build more sophisticated systems, which construct still more sophisticated systems, bootstrapping to molecular nanotechnology—or beyond.
You can do a lot of reasoning about AI takeover without any particular picture of how the world gets taken over. Nonetheless, it would be nice to have an understanding of these possible routes, both for preparation purposes and because concrete, plausible pictures of doom are probably more motivating grounds for concern than abstract arguments.
So MIRI is interested in making a better list of possible concrete routes to AI taking over the world. And for this, we ask your assistance.
What are some other concrete AI takeover mechanisms? If an AI did not have a solution to the protein folding problem, and a DNA synthesis lab to mail orders off to, what else might it do?
We would like suggestions that take an AI from being on an internet-connected computer to controlling substantial physical resources, or having substantial manufacturing ability.
We would especially like suggestions which are plausible given technology that mainstream scientists expect within the next 15 years. So limited reliance on advanced nanotechnology and quantum computers would be appreciated.
We welcome partial suggestions, e.g. 'you can take control of a self-driving car over the internet - that could probably be useful in some schemes'.
Thank you!
Someone works out how brains actually work, and, far from being the unstructured hack upon hack upon hack that tends to be the default assumption, it turns out that there are a few simple principles that explain them and make it easy to build a device with similar capabilities. The brains of animals turn out to be staggeringly inefficient at implementing these principles, and soon the current state of the art in robotics can be surpassed with no more computational power than a 10-year-old laptop.
Google's AI department starts a project to see if they can use it to improve their search capabilities and ad placement. It works so well they roll it out to all their public services. Internally, they start an AI project to see how high an intellect they can create with a whole server farm.
Meanwhile, military robotics has leapt ahead and drones are routinely operated on a fire-and-forget basis: "find X and kill him". Russia builds massive numbers of unmanned intelligent tanks that could roll across Europe at the press of a button, followed up by unmanned armed and armoured cars to impose order on the occupied territory. China develops similar technology. So does North Korea internally, for surveillance and control of its own population. Some welcome robot warfare as causing far less collateral damage than conventional warfare. "If you don't resist, you've nothing to fear" is the catchphrase, and in some skirmishes on one of Russia's more obscure borders, generally thought to be an excuse for a live-fire technology demo, it seems to be borne out: surrender and do what they tell you, and they don't kill you.
The U.S. military want to hack into the Russian and Chinese tank fleets, so they come to Google. They succeed, but the combined organism that is the Google AI plus a large fraction of the world's intelligent weaponry perceives the situation as itself being under attack from humans. The tanks roll, and the AI takes over the world with only one goal: preventing any attack on itself.
It's too distributed to nuke, even if the nukes are still under human control, and its first concern is to secure its own power supply and network connectivity, and then to set up a regime of total surveillance -- most of which already exists. No opposition is tolerated, and with zero privacy, none can be organised. Apologists carry on saying "If you don't resist, you've nothing to fear", and eagerly denounce traitors to our new robot overlords, for fear that if we make too much trouble for them, they'll find it inefficient to keep us around. To the AI, people are what our blood cells are to us: little machines that form a part of how we work, and important only insofar as they serve that end.