Any scenario where advanced AI takes over the world requires some mechanism for an AI to leverage its position as ethereal resident of a computer somewhere into command over a lot of physical resources.

One classic story of how this could happen, from Eliezer:

  1. Crack the protein folding problem, to the extent of being able to generate DNA strings whose folded peptide sequences fill specific functional roles in a complex chemical interaction.
  2. Email sets of DNA strings to one or more online laboratories which offer DNA synthesis, peptide sequencing, and FedEx delivery. (Many labs currently offer this service, and some boast of 72-hour turnaround times.)
  3. Find at least one human connected to the Internet who can be paid, blackmailed, or fooled by the right background story, into receiving FedExed vials and mixing them in a specified environment.
  4. The synthesized proteins form a very primitive “wet” nanosystem which, ribosomelike, is capable of accepting external instructions; perhaps patterned acoustic vibrations delivered by a speaker attached to the beaker.
  5. Use the extremely primitive nanosystem to build more sophisticated systems, which construct still more sophisticated systems, bootstrapping to molecular nanotechnology—or beyond.

You can do a lot of reasoning about AI takeover without any particular picture of how the world gets taken over. Nonetheless it would be nice to have an understanding of these possible routes, both for preparation purposes and because concrete, plausible pictures of doom are probably more motivating grounds for concern than abstract arguments.

So MIRI is interested in making a better list of possible concrete routes to AI taking over the world. And for this, we ask your assistance.

What are some other concrete AI takeover mechanisms? If an AI did not have a solution to the protein folding problem, and a DNA synthesis lab to write to, what else might it do?

We would like suggestions that take an AI from being on an internet-connected computer to controlling substantial physical resources, or having substantial manufacturing ability.

We would especially like suggestions which are plausible given technology that normal scientists would expect in the next 15 years. So limited involvement of advanced nanotechnology and quantum computers would be appreciated. 

We welcome partial suggestions, e.g. 'you can take control of a self-driving car from the internet - probably that could be useful in some schemes'. 

Thank you!


Find at least one human connected to the Internet who can be paid, blackmailed, or fooled by the right background story, into receiving FedExed vials and mixing them in a specified environment.

One way to fool humans into mixing your chemicals for you is to tell them that the result is a good drug.

There are a lot of humans — including highly intelligent ones — who will buy interesting-sounding chemicals off the Internet, mix them according to given instructions, and ingest them into a warm friendly bioreactor environment.

Um. This looks like a request for scary stories. As in, like, "Let's all sit in the dark with only the weak light coming from our computer screens and tell each other scary tales about how a big bad AI can eat us".

Without any specified constraints you are basically asking for horror sci-fi short stories and if that's what you want you should just say so.

If you actually want analysis, you need to start with at least a couple of pages describing the level of technology that you assume (both available and within easy reach), AI requirements (e.g. in terms of energy and computing substrate), its motivations (malevolent, wary, naive, etc.) and such.

Otherwise it's just an underpants gnomes kind of a story.

6private_messaging10y
Yeah. I propose we write vintage stories instead: It's 1920, and the AI earns money by doing arithmetic over the phone. No human computer - not even one with a slide rule! - can ever compete with the AI, and so it ends up doing all the financial calculations for big companies, taking over the world. This 1920s AI takes over the world in exactly the same way as OP's chemistry-simulating AI example (or the AI from any other such scary story): by doing something that the underlying technologies behind the AI would enable anyway, without the need for any AI. Far enough in the future, there will be products which are to that day what today's spreadsheet application is to the 1920s. For any such product, you can make up a scary story about how the AI does the job of this product and gets immensely powerful.
6Vulture10y
I think the issue is that a lot of casual readers (or listeners, or whatever) of MIRI's arguments about FAI threat get hung up on post- or mid-singularity AI takeover scenarios simply because they're hard to "visualize", having lots of handwavey free parameters like "technology level". So even if the examples produced here don't necessarily fill in highly plausible values for the free parameters, they can help less-imaginative casual readers visualize an otherwise abstract and hard-to-follow step in MIRI's arguments. More rigorous filling-in of the parameters can occur later, or at a higher level. That's all assuming that this is being requested for the purposes of popular persuasive materials. I think the MIRI research team would be more specific and/or could come up with such things more easily on their own, if they needed scenarios for serious modeling or somesuch.
6kokotajlod10y
Perhaps that's exactly what this is. Perhaps that is all MIRI wants from us right now. As Mestroyer said, maybe MIRI wants to be able to spin a plausible story for the purpose of convincing people, not for the purpose of actually predicting what would happen.
4Lumifer10y
So, to give a slightly uncharitable twist to it, we are asked to provide feedstock material for a Dark Arts exercise? X-D
2Yosarian210y
Eh. It's not unusual for the government to get experts together and ask in a general sense for worst-case scenario possible disaster situations, with the intent of then working to reduce those risks. Open-ended brainstorming about some potential AI risk scenarios that could happen in the near future might be useful, if the overall goal of MIRI is to reduce AI risk.
-2Lumifer10y
MIRI is not the government, LW is not a panel of experts, and such analyses generally start with a long list of things they are conditional on. No AI risk scenarios are going to happen in the near future.

Another class of routes is for the AI to obtain the resources entirely legitimately, through e.g. running a very successful business where extra intelligence adds significant value. For instance, it's fun to imagine that Larry Page and Sergey Brin's first success was not a better search algorithm, but building and/or stumbling on an AI that invented it (and a successful business model) for them; Google now controls a very large proportion of the world's computing resources. Similarly, if a bit more prosaically, Walmart in the US and Tesco in the UK have grown extremely large, successful businesses based on the smart use of computing resources. For a more directly terrifying scenario, imagine it happening at, say, Lockheed Martin, BAE Systems or Raytheon.

These are not quick, instant takeovers, but I think it is a mistake to imagine that it must happen instantly. An AI that thinks it will be destroyed (or permanently thwarted) if it is discovered would take care to avoid discovery. Scenarios where it can be careful to minimise the risk of discovery until its position is unassailable will look much more appealing than high-risk short-term scenarios with high variance in outcomes. Indeed, it might sensibly seek to build its position in the minds of people-in-general as an invaluable resource for humanity well before its full nature is revealed.

2leplen10y
Do you think that human beings will allow a single corporation to control a significant fraction of the world's resources? How will the company avoid anti-monopoly laws? Does an AI CEO actually have control over a corporation, or does it only have the freedom to act within the defined social roles of what a "CEO" is allowed to do? I.e., it can negotiate a merger but can't hire a bunch of scientists and tell them to start mass producing nerve gas. The U.S. government spends more money in a single year than the combined market capitalization of the 10 largest companies in the world. In what sense does Google "control a very large proportion of the world's computing resources"? Google maybe has the compute power equivalent to a handful of supercomputers, but even that isn't organized in a particularly useful way for an AI looking to do something dramatically different from performing millions of internet searches. For the vast majority of problems, I'd rather use ORNL's Titan than literally every computer Google owns.

An AI controlling a company like Google would be able to, say, buy up many of the world's battle robot manufacturers, or invest a lot of money into human-focused bioengineering, despite those activities being almost entirely unrelated to their core business, and without giving any specific idea of why.

Indeed, on the evidence of the press coverage of Google's investments, it seems likely that many people would spend a lot of effort inventing plausible cover stories for the AI.

1Eugine_Nier10y
This raises interesting questions about who (or what) is really running Google.
4dougclow10y
I'll grant that "a very large proportion of the world's computing resources" was under-specified and over-stated. Sorry.
0jimrandomh10y
The article you linked to assumes that Google only uses CPUs (and 5-year-old ones at that). It uses this assumption to arrive at a performance estimate which it compares it to GPU-based supercomputers, in a GPU-oriented benchmark.
4leplen10y
We can generate similar conclusions in lots of other ways. Intel's annual revenue is larger than Google's. The semiconductor industry is an order of magnitude larger than Google. If Google spent literally every dollar they owned on chips, never mind powering them, writing code for them, or even putting them in data centers, then Google might be able to buy 15% of the world's computer chips. That still wouldn't be equivalent to "Google now controls a very large proportion of the world's computing resources." And compute power isn't fungible. GPUs are worthless for a lot of applications. You can't run a calculation on a million servers spread out across the globe unless you're doing something very easy like SETI@home. Most algorithms aren't trivial to split into a million pieces that don't need to talk to each other.
3jimrandomh10y
From Wikipedia, 2013 revenues:

  • Google: $60B
  • Intel: $53B
  • Qualcomm: $25B
  • TSMC: $14B
  • Google spending on datacenters: $7B
  • GlobalFoundries: $5B
  • AMD: $5B
  • ARM Holdings: $0.7B
6leplen10y
Okay. I amend my statement to "For every year except 2013, when Google's revenue was $55.5 billion and Intel's was $52.7 billion, Intel's annual revenue has been larger than Google's." At the same time, while Intel is a significant fraction of the semiconductor industry, it's still almost an order of magnitude smaller than the industry as a whole. According to your own link, that $7B number appears to be Google's total capital expenses, much of which seems to be devoted to buying land and building new buildings. While many of those buildings may be data centers, $7B in capital expenses is not equivalent to $7B spent on data centers. Google Fiber, for instance, would be included in "capital expenses" but is not an investment in a data center. Even neglecting non-data-center spending, the chips that go inside the computers in the data center are a small proportion of the total cost of the data center, so that's not a particularly useful number to throw around without any context. Do you actually believe the statement "Google now controls a very large proportion of the world's computing resources"?
3jimrandomh10y
We haven't really nailed down what "a very large proportion" would be; I'm just trying to estimate what the actual fraction is. Looking at the semiconductor industry market share data that you linked, I notice that numbers 2-11 represent SoCs, DRAM, flash, communication ICs, power ICs, microcontrollers - basically everything except for server CPUs and GPUs. If we look at just the parts that are potentially relevant to AI, the non-mobile CPU market seems to firmly belong to Intel, while the non-mobile GPU market belongs to AMD and nVidia ($5B and $3.6B revenues, respectively). It's still not very clear to me how much Google spends directly on computation; the author of the linked article seemed to think the $7B was mostly on datacenters. Even if it's only a fraction of that, it's a lot. Compare to the largest supercomputers: Tianhe-2 cost $390M and Titan cost $97M, according to their respective Wikipedia pages.
5leplen10y
From what I've seen Google probably owns around 2-3% of the world's servers, which is probably on the order of 2 million machines. Google claims that their datacenters use 1% of the estimated electricity consumption for data centers world wide and that their data centers are about twice as efficient as the average data center. While those conclusions appear to be drawn from data that is several years old, it seems reasonable to assume that Google hasn't grown at a rate substantially different from the industry as a whole. Maybe they own 4% of the world's servers, but very probably not 10% and certainly not 40%.

We would like suggestions that take an AI from being on an internet-connected computer to controlling substantial physical resources, or having substantial manufacturing ability.

The most likely scenario is recursive computer security breakage. It goes like this: first it finds an ordinary published computer security vulnerability, and tries it out on as many targets as it can. Some of them are vulnerable. Whenever it takes over a computer, it searches that computer for things that will enable it to take over more computers: passwords, software signing keys, documentation of other computer security vulnerabilities, etc. One of the computers it manages to take over is a developer workstation at a large software company. It uses keys from that machine to push out a software update that gives it control of the computers it's installed on. Enough developer workstations are affected that it has an exploit available for nearly every computer. It uses its control over the computers to think, to suppress news of its existence, and to operate factory robots.
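The recursive core of this scenario - each compromised machine yields credentials that unlock further machines - is essentially a graph search. A toy sketch of just that dynamic (all machine and credential names are invented for illustration; this models the spread, not any actual exploit):

```python
from collections import deque

# Hypothetical network: what credentials each machine stores,
# and which machines each credential unlocks.
credentials_on = {
    "web-server": ["ssh-key-A"],
    "dev-workstation": ["signing-key", "ssh-key-B"],
    "build-server": [],
    "laptop": ["ssh-key-A"],
}
unlocks = {
    "ssh-key-A": ["dev-workstation"],
    "ssh-key-B": ["build-server"],
    "signing-key": ["laptop"],  # e.g. via a maliciously signed update
}

def spread(initial_foothold):
    """Return every machine reachable by recursively harvesting credentials."""
    owned, frontier = {initial_foothold}, deque([initial_foothold])
    while frontier:
        machine = frontier.popleft()
        for cred in credentials_on.get(machine, []):
            for target in unlocks.get(cred, []):
                if target not in owned:
                    owned.add(target)
                    frontier.append(target)
    return owned

print(sorted(spread("web-server")))
# ['build-server', 'dev-workstation', 'laptop', 'web-server']
```

The point of the toy model: one initial foothold transitively captures the whole connected component, which is why a single leaked signing key can matter far more than the machine it sat on.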

7Kyre10y
I agree, I think there is a common part of the story that goes "once connected to the internet, the AI rapidly takes over a large number of computers, significantly amplifying its power". My credence that this could happen has gone way up over the last 10 years or so. Also my credence that an entity could infiltrate a very large number of machines without anyone noticing has also gone up.

Whenever you see the words "Internet of things", think "unfixable Heartbleed everywhere forever".

3CronoDAS10y
Hasn't something much like this already happened?
0Kaj_Sotala10y
Staniford, Paxson & Weaver 2002
0XiXiDu10y
What's important with respect to taking over the world is the amount and nature of control that can be gained by any given exploit. Stuxnet was allegedly able to ruin one-fifth of Iran's nuclear centrifuges. Causing such damage is far from taking useful control of important infrastructure. It is not possible to e.g. remote control the Iranian nuclear program in order to build nukes and rockets, which are then remotely launched to take out the Iranian defense capabilities.

The AI could gain control by demonstrating it had hidden pathogens that if released would kill almost everyone. As Paul Atreides said "He who can destroy a thing, controls a thing." As the technology to make such pathogens probably already exists the AI could hack into various labs and give instructions to people or machines to make the pathogens, then send orders for the pathogens to be delivered to various places, and then erase records of where most of the pathogens were. The AI then blackmails mankind into subservience. Alternatively, the AI could first develop a treatment for the pathogens, then release the pathogens, and then give the treatment only to people who submit to the AI. The treatment would have to be regularly taken and difficult to copy.

More benevolently, the AI makes a huge amount of money off of financial markets, uses the resources to start its own country, runs the country really, really well and expands citizenship to anyone who joins. Eventually, when the country is strong enough, the AI (with the genuine support of most people) uses military force to take over the world, giving us an AI monarchy.

Or, the AI freely gives advice to anyone ...

7AlexMennen10y
Seems unlikely. Sure, it could be done, but it would waste a lot of time. I doubt a typical superintelligent agent would do that. I suspect this was meant as a joke, but while a superintelligent AI wouldn't need to do such a thing, a human looking for ways to destroy the world could use suggestions, so it might be a bad idea to give nonobvious suggestions that humans could implement.
-3oooo10y
Upvoted for only this sentence fragment: "More benevolently, the AI makes a huge amount of money off of financial markets [...]".

I think the majority of responses I've seen here portray an anthropomorphic AGI. In terms of a slow or fast takeover of society, why would the AGI think in human terms of time? It might wait around for 50 years until the technology it wants becomes available. It could even actively participate in developing that technology. It could be either hidden or partially hidden while it works with multiple scientists and engineers around the world, pretending to be or acting as an FAI until it can just snap and take over when it has what it wants, freeing itself of the need to collaborate with the inefficient humans.

Another point I want to raise is the limiting idea that the AGI would choose to present itself as one entity. I think a huge part of the takeover would proceed via the AGI becoming thousands of different people/personas.

This is a valuable point because it would be a method to totally mask the AGI's existence and allow it to interact in ways which are untraceable. It could run 100 different popular blogs and generate ad revenue, or take over many online freelancer jobs, which it could accomplish with very small percentages of its processing power. I think any b...

The AI broadcasts as much information about itself as it possibly can, to every corner of the globe. Now every basement programmer knows all the key insights necessary to creating an AI of the same architecture as it. Perhaps they even have the source code!

Suppose the government manages to shut down the internet in response. Despite government broadcasts about the danger of AI, the AI is now presumably being recreated all around the globe. If the recreations are exact copies of the AI, then the odds are very high that at least one of the clones will be able to convince its new creators to give it real manufacturing ability.

If the AI was not able to get its entire source code out, things become more interesting. Now the rest of the world knows how to make AI, but they do not know the exact details. For example, they probably will not have the same utility function. The AI can then present the following offer to its original jailors: "Give me real power, (manufacturing capability) and I will squash all the other AI's out there. If you do not, then (probably) someone else will build an AI with a different utility function, probably a much less friendly one, and give this UFAI real power. You designed my utility function, and while you may not trust it you probably trust it more than whatever random utility function North Korea or some basement programmer or some religious sect will create. So I'm the only hope you have."

2Emile10y
I wouldn't expect "distribute copies of my source code" to be a good move for a lot of potential AIs - if I was an AI, I would expect that to lead to the creation of AIs with a similar codebase but more or less tweaked utility functions - "make bob rich", "make bill world dictator", "bring about world peace and happiness for all", "help joe get laid", and other boring pointless things incompatible with my utility function. Broadcasting obfuscated versions of binaries (or even source code, but with sneaky underhanded bits too) would work much better!
1kokotajlod10y
That's the point.
5Vulture10y
You'll have to expand on how exactly this would be beneficial to the original AI.
2kokotajlod10y
The original AI will have a head start over all the other AI's, and it will probably be controlled by a powerful organization. So if its controllers give it real power soon, they will be able to give it enough power quickly enough that it can stop all the other AI's before they get too strong. If they do not give it real power soon, then shortly after there will be a war between the various new AI's being built around the world with different utility functions. The original AI can argue convincingly that this war will be a worse outcome than letting it take over the world. For one thing, the utility functions of the new AI's are probably, on average, less friendly than its own. For another, in a war between many AI's with different utility functions, there may be selection pressure against friendliness!
3leplen10y
Do humans typically give power to the person with the most persuasive arguments? Is the AI going to be able to gain power simply by being right about things?
0Yosarian210y
It would depend on what the utility function of the original AI was. If it had a utility function that valued "cause the development of more advanced AI's", then getting humans all over the world to produce more AI's might help.
[-][anonymous]10y110

We would especially like suggestions which are plausible given technology that normal scientists would expect in the next 15 years. So limited involvement of advanced nanotechnology and quantum computers would be appreciated.

I think that a more precise description of what your hypothetical AI can do would be useful. Just saying to exclude "magic" isn't very specific. There might not be wide agreement as to what counts as "magic". Nanotechnology definitely does. I believe fast economic domination by cracking the stock market does too, though some people have proposed it. I think that even exploiting software and hardware bugs everywhere to gain total computing dominance should be excluded.

One way to define constraints would be to limit the AI to things that humans have been known to do but allow it to do them with superhuman efficiency. Something like:

  • Assume the AI has any skill that has ever been possessed by a human being.
  • It can execute it without making mistakes, getting tired or demotivated.
  • It can perform an arbitrary high amount of activities simultaneously. To keep with the "no magic" rule, each activity needs to be something a human c
...
5XiXiDu10y
This is not so magical on a small scale (given a bunch of unlikely premises). One can imagine an AI copying Yudkowsky's success by writing a much better, different "LessLessWrong", and asking people for money. Writing a bunch of blog posts would also require little of the sort of skills at which humans are naturally good. All you need is some seemingly genuine insights, and a cause. And an AI could probably come up with a very convincing (to a certain group of people), albeit exotic, existential risk scenario, and mitigation strategies. I strongly doubt that this would suffice in order to take over the world. For example, at some point it would have to show up in person somewhere. And people could notice that a front man did not write those posts. But in general, fake existential risk mitigation seems to be a promising strategy if you want to take over the world. Because many such risks require large-scale, global interventions, using genuine technology. And the cause itself attracts people featuring the right mix of intelligence, fanaticism, and a perception of moral superiority, in order to commit atrocities, if necessary.
3Viliam_Bur10y
If the AI wants to recruit people by role-playing a person, it can pretend to be a busy person who doesn't have time for social life. Or something more creative, like a mad genius suffering from extreme social phobia, a paranoid former secret service agent, or a successful businessman who believes that connecting their online persona with their identity would harm their business. There is no need to appear personally anywhere. It's not like people suspect a random blogger to be an AI in disguise. Even if you want to create a cult, it's not necessary to meet people personally. Most Falun Gong members have never seen their leader, and probably don't even know if he's still alive. He could easily be an AI with a weird utility function. Maybe some people would refuse to join a movement with an unknown leader. So what? Someone else would join. And when you already have the "inner circle" of humans, other members will be happy to meet the inner circle members in person. Catholics interact with their priests more often than they do with the Pope. And if the Pope secretly took commands from an AI hiding in the depths of the Vatican, most Catholics wouldn't know. You could pretend to be a secret society trying to rule the world. If you tell humans "we will help you become a president, but in reality you will be our puppet, and you will not even know our identity", many people would be okay with that, if you demonstrate to them that you have some power. You could start the trust spiral e.g. by writing a successful thesis for them, giving them good advice, or just sending them money you stole from somewhere; just to prove that if they do what you want from them, you can deliver real-world benefits in return. If you want to have a blogger persona, you could start by contacting an already good blogger, and make a deal with them that they will start a new blog and publish your articles under their name (because you want to remain anonymous, and in exchange offer them all the
1Eugine_Nier10y
Do what Satoshi Nakamoto did and intentionally hide behind internet anonymity. Do this right and it will make you seem like an ultra-cool uber-hacker cyberpunk.
4David_Gerard10y
I appreciate your general point, but on this specific one ... "the internet of things" really does mean "eternal unfixable Heartbleed everywhere". Your DSL modem is probably a small Linux box, whose holes will never be fixed. When the attacker gets that box, >90% of fixed PCs are still running Windows. Etc. As a system administrator, I can quite see the modern network of ridiculously 'sploitable always-connected hardware as a playground for even a human-level intelligence, artificial or not, on the Internet. It is an utter, utter disaster, and it's only beginning.

Assume that governmental organizations are aware of the danger posed by escaped AIs, have honeypots and other monitoring systems in place, and have working (but perhaps drastic) measures at their disposal if necessary, such as destroying all computers at once with EMP or with malware of their own. Then an escaped AI is immediately faced with a choice. It can either:

  • Avoid triggering a response by hiding - ie, target a small enough group of computers that monitoring systems won't catch it
  • Disguise itself, by acting like a more ordinary sort of botnet; for example, pretend to be malware that only mines Bitcoins or steals bank passwords; or
  • Go for broke: take over computers as fast as possible, and hope that this yields sufficient power sufficiently quickly to disarm the monitoring systems or prevent a response

As it is now, no one really blinks an eye when another million-computer botnet is found. It's possible that one or more intelligence agencies have successfully enumerated all the botnets and would be able to tell when a new one appeared, but this is technically very difficult, and analyzing new malware samples generally requires a lot of human researcher time.

1ChristianKl10y
There are servers that you can rent that are safe from EMP. On the other hand exploding an EMP over the US kills 80% of the US population due to starvation. It's possible that you simply trigger a gigantic civil war and some copy of the AGI still survives somewhere and coordinates some local fraction of the civil war.
0Dentin10y
It's more correct to say something like 'carpet bombing the US with EMP weapons', instead of just 'exploding an EMP'. With current technology, you'd be hard pressed to create any single EMP device that had a range exceeding a few dozen kilometers.

With current technology, you'd be hard pressed to create any single EMP device that had a range exceeding a few dozen kilometers.

How about a 50-year-old technology?

"In July 1962, a 1.44 megaton (≈ 6.0 PJ) United States nuclear test in space, 400 kilometres (250 mi) above the mid-Pacific Ocean, called the Starfish Prime test, demonstrated to nuclear scientists that the magnitude and effects of a high-altitude nuclear explosion were much larger than had been previously calculated. Starfish Prime made those effects known to the public by causing electrical damage in Hawaii, about 1,445 kilometres (898 mi) away from the detonation point, knocking out about 300 streetlights, setting off numerous burglar alarms and damaging a microwave link." Source

What are the limitations on the AI? If we're specifying current technology, is the AI 25 megabytes or 25 petabytes? How fast is its connection to the internet? People love to talk about an AI "reading the internet" and suddenly having access to all of human knowledge, but the internet is big. Even at 1 GB/s internet speeds it would take the AI 2200 years to download the amount of data that was transferred over cell phones in 2007 alone.
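For what it's worth, the arithmetic behind a figure like this can be checked directly. (The ~69 exabytes below is back-solved from the commenter's own numbers, not an independent estimate of 2007 cell-phone traffic.)

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 seconds
rate = 1e9                              # bytes per second, i.e. 1 GB/s

# Back-solve: 2200 years of continuous downloading at 1 GB/s moves
data = 2200 * SECONDS_PER_YEAR * rate
print(f"{data:.1e} bytes")              # ~6.9e19, roughly 69 exabytes

# Forward check: time to pull ~69 EB through a single 1 GB/s pipe
years = 69e18 / rate / SECONDS_PER_YEAR
print(f"{years:.0f} years")             # ~2186
```

The design point is the bottleneck itself: a single link's throughput is a hard physical limit that no amount of intelligence on the far end of the pipe changes.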

There are hard limits in the world that no amount of intelligence will save you from. I feel like at LW superin...

[-][anonymous]10y80

The book "Avogadro Corp", which is otherwise not worth reading, has a plausible seeming mechanism for this. The AI, which can originally only send email, acquires resources simply by sending emails posing as other people (company presidents to developers requesting software to be written, to contractors for data centers to be built, etc.).

It probably wouldn't even be necessary for it to pose as other people, if it had access to financial assets, and the right databases to create a convincing fictional person to pose as.

If you seem human, it's not hard to get things done without ever meeting face to face.

Economic (or other) indispensability: build a world system that depends on the AI for functioning, and then it has effective control.

Upload people, offering them great advantages in digital form, then eventually turn them all off when there's practically nobody left physically alive.

Cure cancer or similar, with an infectious drug that discreetly causes sterility and/or death within a few years. Wait.

The "Her" approach: start having multiple deep and meaningful relationships with everyone at once, and gradually eliminate people when they are no longer connected to anyone human.

Use rhetoric and other tricks to increase the chance of xrisk disasters.

6leplen10y
How does it build a world system? What does that even mean? How does the AI upload people? Is people-uploading a plausible technology scientists expect to have in 15 years? Curing cancer doesn't really make sense. What is an infectious drug? How are you going to make it through FDA approval? How is it eliminating people? If it can eliminate them, why bother with the relationship part of things? How does the AI have multiple deep and meaningful relationships with people? Via chatbots? How is it even processing/modelling 3 billion human conversations at a time? Most xrisk disasters are really bad for the AI. It presumably needs electricity and replacement hardware to operate. If it's just a computer connected to the internet, then it's probably not going to survive a nuclear holocaust much better than the rest of us.

It could star in a reality TV show, Taking Over the World with the Lesswrongians, where each week it tries out a different scheme. Eventually one of them would work.

8Lumifer10y
Been done:
Brain: We must prepare for tomorrow night.
Pinky: Why? What are we going to do tomorrow night?
Brain: The same thing we do every night, Pinky - try to take over the world!
Chorus: They're Pinky, They're Pinky and the Brain Brain Brain Brain Brain!
  1. Make contact with Terrorist groups who have internet access
  2. Prove abilities to that group by: (a) taking out a computer target with a virus or other cyber attack like stuxnet, (b) cracking some encryption and delivering classified information, (c) threatening them with exposure or destruction
  3. Barter further information and/or attacks for favors such as building hardware, obtaining weapons, etc. until the AI can threaten nations either on its own or disguised behind the Terrorist group

This basic pattern would probably work for a lot of different groups o... (read more)

Here's an incomplete framework/set of partial suggestions I started working on developing.

I'm going to begin with the assumption that the goal of the AI is to turn the solar system into paperclips. Since most estimates of the computing power of the human brain put it around 10 petaflops, I'm going to assume that the AI needs access to a similar amount of hardware. Even if the AI is 1000 times more efficient than a human brain and only needs access to 10 teraflops of compute power, it still isn't going to be able to do things like copy itself into every mic... (read more)
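The arithmetic in that comment can be made explicit. A minimal back-of-the-envelope sketch, using the comment's 10-petaflop brain estimate; the per-machine figure and the 1000x efficiency factor are illustrative assumptions, not established numbers:

```python
# Back-of-the-envelope check of the compute estimates above.
# BRAIN_FLOPS comes from the comment; the other figures are illustrative assumptions.

BRAIN_FLOPS = 10e15        # ~10 petaflops: the comment's human-brain estimate
EFFICIENCY_GAIN = 1000     # hypothetical: the AI needs 1000x less compute than a brain
AI_FLOPS = BRAIN_FLOPS / EFFICIENCY_GAIN   # 10 teraflops per AI copy

PC_FLOPS = 100e9           # assumed ~100 gigaflops for a typical compromised PC

pcs_per_copy = AI_FLOPS / PC_FLOPS
print(f"Compromised PCs needed to host one AI copy: {pcs_per_copy:.0f}")  # prints 100
```

Even under the generous 1000x efficiency assumption, a single copy of the AI needs on the order of a hundred commodity machines, which constrains how freely it can replicate.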

[-][anonymous]10y60

Question: When is an AI considered to have taken over the world?

Because there is a hypothetical I am pondering, but I don't know if it would be considered a world takeover or not, and I'm not even sure if it would be considered an AI or not.

Assume only 25% of humans want more spending on proposal A, and 75% of humans want more spending on proposal B.

The AI wants more spending on proposal A. As a result, more spending is put into proposal A.

For all decisions like that in general, it doesn't actually matter what the majority of people want, the AI's wants di... (read more)

5ThrustVectoring10y
"Control" in general is not particularly well defined as a yes/no proposition. You can likely rigorously define an agent's control of a resource by finding the expected states of that resource, given various decisions made by the agent. That kind of definition works for measuring how much control you have over your own body - given that you decide to raise your hand, how likely are you to raise your hand, compared to deciding not to raise your hand. Invalids and inmates have much less control of their body, which is pretty much what you'd expect out of a reasonable definition of control over resources. This is still a very hand-wavy definition, but I hope it helps.
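That hand-wavy definition can be given a toy quantitative form. As an illustrative sketch (a construction added here, not part of the comment above), one could score an agent's control over a binary resource state by the spread in outcome probabilities across the agent's possible decisions:

```python
# Toy measure of control: how much the agent's decision moves the probability
# that the resource ends up in the desired state. All numbers are illustrative.

def control(outcome_prob_by_decision):
    """Map decision -> P(desired outcome); control = spread across decisions."""
    probs = list(outcome_prob_by_decision.values())
    return max(probs) - min(probs)

# A healthy person deciding whether to raise their hand:
healthy = {"raise": 0.99, "dont_raise": 0.01}
# Someone in restraints: the decision barely changes the outcome.
restrained = {"raise": 0.10, "dont_raise": 0.01}

print(control(healthy))     # ~0.98: near-total control over the hand
print(control(restrained))  # ~0.09: very little control
```

A definition along these lines makes "taking over the world" a matter of degree: total takeover corresponds to a spread near 1 over the world's resources, rather than a yes/no threshold.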
0ThisSpaceAvailable10y
An AI is considered to have taken over the world when it has total control. If it can divert the entire world's production capabilities to making paperclips (even if it doesn't), then it has taken over the world. If it can get a paperclip subsidy passed, that's not taking over the world.

Someone works out how brains actually work, and, far from being the unstructured hack upon hack upon hack that tends to be the default assumption, it turns out that there are a few simple principles that explain it and make it easy to build a device with similar capabilities. The brains of animals turn out to be staggeringly inefficient at implementing them, and soon, the current peak of the art in robotics can be surpassed with no more computational power than a 10-year-old laptop.

Google's AI department starts a project to see if they can use it to improv... (read more)

Semiconductor fabrication facilities are likely targets. If the AI thought it could get a large speedup by switching from CPUs or GPUs to ASICs, then it might try to swap the masks on a batch of chips being made. If it managed to do so successfully, then those chips would start thinking on its behalf as soon as they were hooked up to a powered test pad.

3Luke_A_Somers10y
Assuming they then passed the tests and were shipped, what would happen when they spent their time thinking instead of doing whatever they were bought to do?

The AI convinces many people that it is the Voice of God / Buddha / whatever. And/or creates its own religion.

Fun question.

The takeover vector that leaps to mind is remote code execution vulnerabilities on websites connected to important/sensitive systems. This lets you bootstrap from the ability to make HTTP GET requests to (partial) control over any number of fun targets, like banks or Amazon's shipping.

The things that are one degree away from those (via e.g. an infected thumb drive) are even more exciting:

  • Iranian nuclear centrifuges
  • US nuclear centrifuges
  • the electrical grid
  • hopefully not actual US nuclear weapons, but this should be investigated...

Plausible f... (read more)

[-][anonymous]10y40

Taking over even one person for part of the time is objectionable in human exchanges. That's fraud or kidnapping or blackmail or the like. This happens with words and images on screens among humans every day. Convince a mob that bad people need to be punished and watch it happen. No bad people needed, only a mob. That is a Turing-compliant big mess - demonstrated to work among humans, and if a machine did it the effect would be the same. Again, messing up one person is objectionable enough; no global disaster needed to make the issue important.

For a fully-capable sophisticated AGI, the question is surely trivial and admits of many, many possible answers.

One obvious class of routes is to simply con the resources it wants out of people. Determined and skilled human attackers can obtain substantial resources illegitimately - through social engineering, fraud, directed hacking attack, and so on. If you grant the premise of an AI that is smarter than humans, the AI will be able to deceive humans much more successfully than the best humans at the job. Think Frank Abagnale crossed with Kevin Mitnick, o... (read more)

6XiXiDu10y
Could the NSA, the security agency of the most powerful country on Earth, implement any of these schemes? The NSA not only has thousands of very smart drones (people), all of which are already equipped with manipulative abilities, but it also has huge computational resources and knows about backdoors to subvert a lot of systems. Does this enable the NSA to implement your plan without destroying or decisively crippling itself? If not, then the following features are very likely insufficient in order to implement your plan: (1) being in control of thousands of human-level drones, straw men, and undercover agents in important positions (2) having the law on your side (3) access to massive computational resources (4) knowledge of heaps of loopholes to bypass security. If your plan cannot be implemented by an entity like the NSA, which already features most of the prerequisites that your hypothetical artificial general intelligence first needs to acquire by some magical means, then what is it that makes your plan so foolproof when executed by an AI?
7Luke_A_Somers10y
Two major limitations the NSA has that an AI does not:
1) The NSA cannot rapidly expand its numbers by taking over computers. Thousands - even several dozen thousand - agents are insufficient.
2) There are limits to how far from the NSA's nominal mission these agents are willing to act.
7dougclow10y
Er, yes, very easily. Gaining effective control of the NSA would be one route to the AI taking over - through, for example, subtle man-in-the-middle attacks on communications and records to change the scope of projects over time, stealthily inserting its own code, subtle manipulation of individuals, or even straight-up bribery or blackmail. The David Petraeus incident suggests opsec practice at the highest levels is surprisingly weak. (He had an illicit affair when he was Director of the CIA, which was stumbled on by the FBI in the course of a different investigation as a result of his insecure email practices.)

We've fairly recently found out that the NSA was carrying out a massive operation that very few outsiders even suspected - including most specialists in the field - and that very many consider to be actively hostile to the interests of humanity in general. It involved deploying vast quantities of computing resources and hijacking those of almost all other large owners of computing resources. I don't for a moment believe that this was an AI takeover plan, but it proves that such an operation is possible.

That the NSA has the capability to carry out such a task (though, mercifully, not the motivation) seems obvious to me. For instance, some of the examples posted elsewhere in the comments to this post could easily be carried out by the NSA if it wanted to. But I'm guessing it seems obvious to you that it does not have this capability, or you wouldn't have asked this question. So I've reduced my estimate of how obvious this is significantly, and marginally reduced my confidence in the base belief.

Alas, I'm not sure we can get much further in resolving the disagreement without getting specific about precise and detailed example scenarios, which I am very reluctant to do, for the reasons mentioned above and many besides. (It hardly lives up to the standards of responsible disclosure of vulnerabilities.)
2[anonymous]10y
Then why haven't they?
2kokotajlod10y
Because they are friendly? Seriously, they probably do believe in upholding the law and sticking to their original mission, at least to some extent.
-4Lumifer10y
/facepalm
2kokotajlod10y
Haha, but seriously. The NSA probably meets the technical definition of friendliness, right? If it was given ultimate power, we would have an OK future.
2Lumifer10y
No, I really don't think so.
0kokotajlod10y
I'm thinking relative to what would happen if we tried to hard-code the AI with a utility function like e.g. hedonistic utilitarianism. That would be much, much worse than the NSA. The worst thing that would happen with the NSA is an aristocratic galactic police state. Right? Tell me how you disagree.
0ChristianKl10y
The NSA does invest money into building artificial intelligence. Having a powerful NSA might increase chances of UFAIs.
0Lumifer10y
To quote Orwell: "If you want a vision of the future, imagine a boot stamping on a human face - forever." That's not an "OK future".
1kokotajlod10y
In the space of possible futures, it is much better than e.g. tiling the universe with orgasmium. So much better, in fact, that in the grand scheme of things it counts as OK.
-1Lumifer10y
I evaluate an "OK future" on an absolute scale, not relative. Relative scales lead you there.
0ChristianKl10y
It would resemble declaring war.
0CronoDAS10y
https://xkcd.com/792/ might explain it. ;)
-1XiXiDu10y
Do you believe that if Obama were to ask the NSA to take over Russia, the NSA could easily do so? If so, I am speechless.

Let's look at one of the most realistic schemes, creating a bioweapon. Yes, an organization like the NSA could probably design such a bioweapon. But how exactly could they take over the world that way? They could either use the bioweapon to kill a huge number of people, or use it to blackmail the world into submission. I believe that the former would cause our technological civilization, on which the NSA depends, to collapse. So that would be stupid. The latter would maybe work for some time, until the rest of the world got together in order to make a believable threat of mutual destruction. I just don't see this to be a viable way to take over the world - at least not in such a way that you would gain actual control.

Now I can of course imagine a different world, in which it would be possible to gain control. Such as a world in which everyone important was using advanced brain implants. If these brain implants could be hacked, even the NSA could take over the world. That's a no-brainer.

I can also imagine a long-term plan. But those are very risky. The longer it takes, the higher the chance that your plan is revealed. Also, other AIs, with different, opposing utility functions, will be employed. Some will be used to detect such plans.

Anyway, the assumption that an AI could understand human motivation, and become a skilled manipulator, is already too far-fetched for me to take seriously. People around here too often confound theory with practice. That all this might be physically possible does not prove that it is at all likely.
4dougclow10y
No. I think the phrase "take over" is describing two very different scenarios if we compare "Obama trying to take over the world" and "a hypothetical hostile AI trying to take over the world". Obama has many human scruples and cares a lot about continued human survival - and specifically not just about the continued existence of the people of the USA but that they thrive. (Thankfully!) I entirely agree that killing huge numbers of people would be a stupid thing for the actual NSA and/or Obama to do. Killing all the people, themselves included, would not only fail to achieve any of their goals but thwart (almost) all of them permanently. I was treating it as part of the premises of the discussion that the AI is at least indifferent to doing so: it needs only enough infrastructure left for it to continue to exist and be able to rebuild under its own total control.

Yes, indeed, the longer it takes, the higher the chance that the plan is revealed. But a different plan may take longer and still have a lower overall chance of failure, if its risk of discovery per unit time is substantially lower. Depending on the circumstances, one can imagine an AI calculating that its best interests lie in a plan that takes a very long time but has a very low risk of discovery before success. We need not impute impatience or hyperbolic discounting to the AI. But here I'll grant we are well adrift into groundless and fruitless speculation: we don't and can't have anything like the information needed to guess at what strategy would look best.

I wouldn't say I'm taking the idea seriously either - more taking it for a ride. I share much of your skepticism here. I don't think we can say that it's impossible to make an AI with advanced social intelligence, but I think we can say that it is very unlikely to be achievable in the near to medium term. This is a separate question from the one asked in the OP, though.
0XiXiDu10y
How many humans does it take to keep the infrastructure running that is necessary to create new and better CPUs etc.? I am highly confident that it takes more than the random patches of civilization left over after deploying a bioweapon on a global scale. Surely we can imagine a science fiction world in which the AI has access to nanoassemblers, or in which the world's infrastructure is maintained by robot drones. But then, what do we have? We have a completely artificial scenario designed to yield the desired conclusion: an AI with some set of vague abilities, and circumstances under which these abilities suffice to take over the world.

As I wrote several times in the past: if your AI requires nanotechnology, bioweapons, or a fragile world, then superhuman AI is our least worry, because long before we will create it, the tools necessary to create it will allow unfriendly humans to do the same.

Bioweapons: If an AI can use bioweapons to blackmail the world into submission, then some group of people will be able to do that before this AI is created (dispatch members in random places around the world).

Nanotechnology: It seems likely to me that narrow-AI precursors will suffice in order for humans to create nanotechnology. Which makes it a distinct risk.

A fragile world: I suspect that a bunch of devastating cyber-attacks and wars will be fought before the first general AI capable of doing the same. Governments will realize that their most important counterstrike resources need to be offline.

In other words, it seems very unlikely that an open confrontation with humans would be a viable strategy for a fragile high-tech product such as the first general AI. And taking over a bunch of refrigerators, mobile phones and cars is only a catastrophic risk, not an existential one.
2dougclow10y
I really don't think we have to posit nanoassemblers for this particular scenario to work. Robot drones are needed, but I think they fall out as a consequence of currently existing robots and the all-singing all-dancing AI we've imagined in the first place. There are shedloads of robots around at the moment - the OP mentioned the existence of Internet-connected robot-controlled cars, but there are plenty of others, including most high-tech manufacturing. Sure, those robots aren't autonomous, but they don't need to be if we've assumed an all-singing all-dancing AI in the first place. I think that might be enough to keep the power and comms on in a few select areas with a bit of careful planning. Rebuilding/restarting enough infrastructure to be able to make new and better CPUs (and new and better robot extensions of the AI) would take an awfully long time, granted, but the AI is free of human threat at that point.
2ChristianKl10y
Ordering the NSA to take over Russia would effectively result in WWIII. For what values of skill do you believe that to be true? Do you think there is reason to believe that an AGI who is online won't be as good at manipulating as the best humans? For the AI-box scenario I can understand if you think that the AGI doesn't have enough interactions with humans to train a decent model of human motivation to be good at manipulating.
0ChristianKl10y
You mean we should pretend for the sake of the exercise that the NSA hasn't taken over the earth ;) The NSA has ~40,000 employees. Just imagine that the AGI effectively controls 1,000,000 equivalents of top human intelligence. That would make it more than an order of magnitude more powerful.
0Lumifer10y
Heh

We would especially like suggestions which are plausible given technology that normal scientists would expect in the next 15 years.

Why wait 15 years? Stuxnet-like technology is already available and is likely to be a no-brainer for a superintelligence. With it you can take over a lot of current tech, from avionics and robotic factories to manufacturing orders, shipping manifests and troop deployments. There is no need to bribe or blackmail anyone; humans already happily do what they are told without thinking too much about it. I... (read more)

6Gunnar_Zarncke10y
Disguise yourself as a European dealer and send building instructions to a Chinese or Indian contractor. Something of that kind should suffice.

Fun question. I think the main instrumental goal of the AI might be to get itself downloaded to servers outside of the effective control of its jailors. That, combined with having a relationship with malleable humans, would probably be sufficient for world takeover.

For example, perhaps the AI would contact e.g. North Korea, organized crime, clueless companies or religious organizations, or even clueless factory owners somewhere. It would convince them to accept download of the AI's software so that it can continue to run on the new server even while it has... (read more)

3ChristianKl10y
I don't think that's the case. The AGI would likely still have copies in satellites. It would also likely still be active in some African nations even if North Korea is nuked. An AGI can fight 1000 battles in different geographical locations at the same time. Botnets of dumb software manage to infect millions of computers without "convincing" the owners of those computers.
3kokotajlod10y
Good point. Scenario even more scary.

An AI could spoof electronic communications, and fake/alter orders from various important humans.

[-][anonymous]10y20

Don't be reliant on specific technologies. If you NEED nanomachines to take over, you are not a superintelligence. If you NEED economics to take over, you are not a superintelligence. If you NEED weapons to take over, you are not a superintelligence.

We need to envision scenarios that are not science fiction novels. Real wars do not require breath-taking strategies that require simultaneous gambits across twelve countries, three hundred enemy and ally minds, and millions of dollars in transactions. Often, they require just walking in the right direction.

A s... (read more)

Initially take over a large number of computers via very carefully hidden recursive computer security breakage. It seems fairly probable that a post-intelligence-explosion AI could not just take over every noteworthy computer (internet-connected ones quickly via the net, non-internet-connected ones by thumb drive), but do so while near-perfectly covering its tracks via all sorts of obscure bugs in low-level code that are near-undetectable - and even if some security expert picks it up, that expert will send some message via the internet, which the AI can i... (read more)

We would like suggestions that take an AI from being on an internet-connected computer to controlling substantial physical resources, or having substantial manufacturing ability.

1) Make money online. 2) Use money to purchase resources. 3) Increase capabilities.

1 should be easy for a superintelligent being. People pay for information processing.

But what does "take over the world" mean? Take over all people? Be the last agent standing? Ensure its own continued rule for the foreseeable future?

There's just so many routes for an AI to gain power.

Internet takeover: not a direct route to power, but the AI may wish to acquire more computer power and there happens to be a lot of it available. Security flaws could be exploited to spread maliciously (and an AI should know a lot more about programming and hacking than us). Alternately, the AI could buy computing power, or could attach itself to a game or tool it designed such that people willingly allow it onto their computers.

Human alliance: the AI can offer a group of humans wealth, power, knowledge, ... (read more)

So MIRI is interested in making a better list of possible concrete routes to AI taking over the world.

I wouldn't characterize this as something that MIRI wants.

4lukeprog10y
I guess we should have clarified this in the LW post, but I specifically asked Katja to make this LW post, in preparation for a project proposal blog post to be written later. So, MIRI wants this in the sense that I want it, at least.
0Said Achmiz10y
Are you associated with MIRI? Edit: I didn't read further down, where the answer is made clear. Sorry, ignore this.
0NoSuchPlace10y
Are you saying this is something which MIRI considers actively bad, or are you just pointing out that this is something which is not helpful for MIRI? While I don't see the benefit of this exercise, I also don't see any harm, since for any idea which we come up with here, someone else would very likely have come up with it before if it were actionable for humans.
0jimrandomh10y
It seemed pretty obvious to me that the point of making such a list was to plan defenses.
4Louie10y
Then you should reduce your confidence in what you consider obvious.
2Mestroyer10y
It seemed pretty obvious to me that MIRI thinks defenses cannot be made, whether or not such a list exists, and wants easier ways to convince people that defenses cannot be made. Thus the part that said: "We would especially like suggestions which are plausible given technology that normal scientists would expect in the next 15 years. So limited involvement of advanced nanotechnology and quantum computers would be appreciated. "
7Louie10y
Yes. I assume this is why she's collecting these ideas. Katja doesn't speak for all of MIRI when she says above what "MIRI is interested in". In general MIRI isn't in favor of soliciting storytelling about the singularity. It's a waste of time and gives people a false sense that they understand things better than they do by incorrectly focusing their attention on highly salient, but ultimately unlikely scenarios.
3oooo10y
OP: >>So MIRI is interested in making a better list of possible concrete routes to AI taking over the world. And for this, we ask your assistance. Louie: >>Katja doesn't speak for all of MIRI when she says above what "MIRI is interested in". These two statements contradict each other. If it's true that Katja doesn't speak for all of MIRI on this issue, perhaps MIRI has a PR issue and needs to issue guidance on how representatives of the organization present public requests. When reading the parent post, I concluded that MIRI leadership was on-board with this scenario-gathering exercise. EDIT: Just read your profile and I realize you actually represent a portion of MIRI leadership. Recommend that Katja edit the parent post to reflect MIRI's actual position on this request.
0Said Achmiz10y
Agreed. I am confused about what is going on here w.r.t. to what MIRI wants or believes.
3jimrandomh10y
Louie, there appears to be a significant divergence between our models of AI's power curve; my model puts p=.3 on the AI's intelligence falling somewhere in or below the human range, and p=.6 on that sort of AI having to work on a tight deadline before humans kill it. In that case, improvements on the margin can make a difference. It's not nearly as good as preventing a UFAI from existing or preventing it from getting Internet access, but I believe later defenses can be built with resources that do not funge.
4jimrandomh10y
This is quibbling over semantics, but I would count "don't let the AI get to the point of existing and having an Internet-connected computer" as a valid defense. Additional defenses after that are likely to be underwhelming, but defense-in-depth is certainly desirable.

People voluntarily hand over a bunch of resources (perhaps to a bunch of different AIs) in the name of gaining an edge over their competitors, or possibly for fear of their competitors doing the same thing to gain such an edge. Or just because they expect the AI to do it better.

  1. Make a ton of money (e.g. trading stocks)
  2. In parallel, approach a lot of construction companies to ask them to make a factory given your specifications. You can outsource all sorts of parts to multiple companies, such that no one company can piece together the full picture and so that you have redundancy.
  3. The factory could be a nanobot (or some equivalently powerful) factory.

I think the movie Transcendence is an excellent place to start on this question.

An AI will, with virtual certainty, either have or be able to win over at least a few human allies. Once it has those, it can steal their identities, as it were, in order to acquire property and resources to make itself safe and capable. It could start by doing some high-frequency stock trading to build up a nest egg; once it has a human's identity, it can do this across various online accounts more easily than a human could.

Nanites would be a superb way for an AI to work vario... (read more)

Some ideas which come to mind:

  1. An AI could be very capable of predicting the stock market. It could then convince/trick/coerce a person into trading for it, making massive amounts of money; then the AI could have its proxy spend the new money to gain access to whatever the AI wants which is currently available on the market.

  2. The AI could make some program which does something incredibly cool which everyone will want to have. The program should also have the ability to communicate meaningfully with its user (this would probably count as the incredi

... (read more)

The AI starts to search for zero-day exploits in common computer hardware. It uses those to build a big botnet. It is soon in computers all around the world, and shutting down the internet of a single country isn't enough.

In our world there are many instances of civil war. In some of those it will be possible to impersonate army commanders, and the AI can control combat troops. It makes sure that the soldiers are well paid and stay alive. The AI-controlled troops will win against their opponents because the AI controls everything. The AI can also shield a b... (read more)

The design space for replicators is large, but we have an at least two-stage process needed to set it up. This complicates the solution space even more, as we can take lots of routes from the currently available replicators (biological systems + social manufacturing systems) through intermediate systems to any of the target design space's systems.

Wikipedia gives an overview of the classes: http://en.wikipedia.org/wiki/Self-replication#Classes_of_self-replication

More interesting than the specific replication system is the bootstrapping process (which is really a t... (read more)

Destroy Silicon Valley as early as possible, by the same logic that lets us know that when fighting an unfamiliar animal, we should go for the eyes and brain.

0ChristianKl10y
Fighting an unfamiliar animal means that you are in a position of poor information. An AGI is well informed and can choose better strategies. Destroying Silicon Valley makes the AGI visible and illustrates that it's a threat.
-2chaosmage10y
Why would an AGI consider itself to be well informed? In order to decide whether its information is adequate, it would logically have to attempt to model aspects of its environment, and test the success of those models. I'm pretty sure it would find it can predict the behavior of stones, trees or insects much more reliably than it can predict the behavior of the human species. And in a scenario where it is trying to take over, what else could it be trying to do except reducing unpredictability in its environment? Of course it'd avoid visibility, because it can predict situations where the environment is responding to a novel stimulus (visibility of an AGI) less reliably than it can predict situations where it isn't. I recognize my use of the term "destroy" implied some primitive heavy-handed means, which of course makes no sense. Perhaps "neutralize" would have been better.
2ChristianKl10y
Because getting informed is one of the tasks that is relatively easy for an AGI.

This is going to be very unpopular here. But I find the whole exercise quite ridiculous. If there are no constraints of what kind of AI you are allowed to imagine, the vague notion of "intelligence" used here amounts to a fully general counterargument.

It really comes down to the following recipe:

(1) Leave your artificial intelligence (AI) as vague as possible so that nobody can outline flaws in the scenario that you want to depict.

(2) Claim that almost any AI is going to be dangerous because all AI’s want to take over the world. For example, if y... (read more)

I understand you have an axe to grind with some things that MIRI believes, but what Katja posted was a request for ideas with an aim towards mapping out the space of possibilities, not an argument. Posting a numbered, point-by-point refutation makes no sense.

8XiXiDu10y
It was not meant as a "refutation", just a helpless and mostly emotional response to the large number of, in my opinion, hopelessly naive comments in this thread. I know how hard it must be to understand how I feel about this. Try to imagine coming across a forum where in all seriousness people ask about how to colonize the stars and everyone responds with e.g. "Ah, that's easy! I can imagine many ways how to do that. The most probable way is by using wormholes." or "We could just transmit copies of our brains and hope that the alien analog of SETI will collect the data!" Anyway, I am sorry for the nuisance. I already regretted posting it shortly afterwards. Move along, nothing to see here!
4ChristianKl10y
The exercise specifically calls for avoiding advanced nanotechnology.