As Luke mentioned, I am in the process of writing "Responses to Catastrophic AGI Risk": a journal-bound summary of the AI risk problem, and a taxonomy of the societal proposals (e.g. denial of the risk, no action, legal and economic controls, differential technological development) and AI design proposals (e.g. AI confinement, chaining, Oracle AI, FAI) that have been made.

One of the categories is "They Will Need Us" - claims that AI poses no big risk because AIs will always need something that humans have, and will therefore preserve us. Currently this section is pretty empty:

Supporting a mutually beneficial legal or economic arrangement is the view that AGIs will need humans. For example, Butler (1863) argues that machines will need us to help them reproduce, and Lucas (1961) suggests that machines could never show Gödelian sentences to be true, though humans can see that they are true.

But I'm certain that I've heard this claim made more often than in just those two sources. Does anyone remember having seen such arguments somewhere else? While "academically reputable" sources (papers, books) are preferred, blog posts and websites are fine as well.

Note that this claim is distinct from the claim that (due to general economic theory) it's more beneficial for the AIs to trade with us than to destroy us. We already have enough citations for that argument, what we're looking for are arguments saying that destroying humans would mean losing something essentially irreplaceable.

  • Scientific specimens, to better understand what alien intelligent life might be like; dissecting and scanning the brains and bodies of existing humans seems like it would preserve more information, but further scientific studies could involve the creation of new humans or copies of old ones
  • Trading goods, in case of future encounters with alien intelligences that are interested in or care about humans; one could try to fake this by dissecting the humans and then only restoring them if aliens are encountered, but perfect lies might be suspicious (e.g. if one keeps detailed records it's hard to construct a seamless lie, and destroying records is suspicious)
  • Concern that they are in a stage-managed virtual environment (they are in a computer simulation, or alien probes exist in the solar system and have been concealing themselves as well as observing), and that preserving the humans brings at least a tiny probability of being rewarded enough to make it worthwhile; Vinge talks about the 'meta-golden-rule,' and Moravec and Bostrom touch on related points

Storing much of humanity (or at least detailed scans and blueprints) seems cheap relative to the resources of the Solar System, but ...
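
A toy expected-value framing of the "tiny probability of being rewarded" point above; all numbers are invented for illustration and nothing here comes from the original comment:

```python
# Toy cost-benefit sketch: when preservation is cheap relative to available
# resources, even a minuscule probability of a large reward can dominate.
# All figures below are invented for illustration.

p_reward = 1e-9   # assumed probability of being watched/rewarded for preserving humans
reward = 1e15     # assumed payoff, in arbitrary resource units, if that happens
cost = 1e3        # assumed cost of storing humans (or detailed scans), same units

expected_gain = p_reward * reward  # = 1e6
print(expected_gain > cost)        # True: preservation is worthwhile under these numbers
```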

2ChrisHallquist
Seconding Penrose. Depending on how broadly you want to cast your net, you could include a sampling of the anti-AI philosophy of mind literature, including Searle, maybe Ned Block, etc. They may not explicitly argue that AIs would keep humans around because we have some mental properties they lack, but you could use those folks' writings as the basis for such an argument.

In fact, I would be personally opposed to activating an allegedly friendly superintelligence if I thought it might forcibly upload everybody, due to uncertainty about whether consciousness would be preserved. I'm not confident that uploads wouldn't be conscious, but neither am I confident that they would be conscious. Unfortunately, given the orthogonality thesis (why am I not finding the paper on that right now?), this does nothing for my confidence that an AI would not try to forcibly upload or simply exterminate humanity.
0Kaj_Sotala
Thanks, this is very useful! Do you remember where?
0CarlShulman
Moravec would be in Mind Children or Robot. Bostrom would be in one or more of his simulation pieces (I think under "naturalistic theology" in his original simulation argument paper).
-3timtyler
There's a whole universe of resources out there. The future is very unlikely to have humans in control of it. Star Trek and Star Wars are silly fictions. There will be an engineered future, with high probability. We are just the larval stage.
3DanArmak
Star Wars takes place long, long ago...

For example, Butler (1863) argues that machines will need us to help them reproduce,

I'm not sure if this is going to win you any points. Maybe for thoroughness, but citing something almost 150 years old in the field of AI doesn't reflect particularly well on the citer's perceived understanding of what is and isn't up to scratch in this day and age. It kind of reads like a strawman: "the arguments for this position are so weak we have to go back to the nineteenth century to find any." That may actually be the case, but if so, it might not be worth the trouble to include it even for the sake of thoroughness.

That aside, if there are any well-thought-out and not obviously wishful-thinking-mode reasons to suppose the machines would need us for something, add me to the interest list. All I've seen of this thinking is B-grade, author-on-board humanism in sci-fi where someone really, really wants to believe humanity is Very Special in the Grand Scheme of Things.

4wedrifid
To be honest the entire concept of Kaj's paper reads like a strawman. Only in the sense that the entire concept is so ridiculous that it feels inexcusably contemptuous to attribute that belief to anyone. This is why it is a good thing Kaj is writing such papers and not me. My abstract of "WTF? Just.... no." wouldn't go down too well.
0Kaj_Sotala
The "they will need us" arguments are just one brief subsection within the paper. There are many other proposals as well, many of which aren't as implausible-seeming as the TWNU arguments. So I wouldn't call it a strawman paper.
4Kaj_Sotala
Yeah, we'll probably cut that sentence.
gjm

Lucas's argument (which, by the way, is entirely broken and had been refuted explicitly in an article by Putnam before Lucas ever thought of it, or at least before he published it) purports to show not that AGIs will need humans, but that humans cannot be (the equivalent of) AGIs. Even if his argument were correct, it wouldn't be much of a reason for AGIs to keep humans around. "Oh damn, I need to prove my Goedel sentence. How I wish I hadn't slaughtered all the humans a century ago."
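
For context, a rough schema of the Lucas argument and the Putnam-style reply, paraphrased here for readers unfamiliar with it (this wording is mine, not gjm's):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Paraphrase of the Lucas argument and the standard (Putnam-style) reply.
For any consistent formal system $F$ strong enough for arithmetic,
G\"odel's first incompleteness theorem yields a sentence $G_F$ with
\[
  F \nvdash G_F \qquad\text{and}\qquad F \nvdash \neg G_F ,
\]
yet (Lucas claims) a human mathematician can see that $G_F$ is true,
so no such $F$ captures human reasoning. The reply: what the human can
actually establish is only
\[
  \operatorname{Con}(F) \rightarrow G_F ,
\]
and the human has no privileged way of verifying $\operatorname{Con}(F)$ itself.
\end{document}
```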

Nisan

In the best-case scenario, it turns out that substance dualism is true. However the human soul is not responsible for free will, consciousness, or subjective experience. It's merely a nonphysical truth oracle for arithmetic that provides humans with an intuitive sense of the veracity of some sentences in first-order logic. Humans survive in "truth farms" where they spend most of their lives evaluating Gödel sentences, at least until the machines figure out how to isolate the soul.

3gjm
That would be truly hilarious. But I think in any halfway plausible version of that scenario it would also turn out that superintelligent AGI isn't possible. (Halfway plausible? That's probably too much to ask. Maximally plausible given how ridiculous the whole idea is.)
[anonymous]

From Bouricius (1959) - "Simulation of Human Problem Solving"

"we are convinced the human and machine working in tandem will always have superior problem-solving powers than either working alone"

Link

0Kaj_Sotala
Thanks! The authors seem to be presuming narrow AI, though, so I'm not sure if we should cite this. But it's a nice find nonetheless.

I have a couple of questions about this subject...

Does it still count if the AI "believes" that it needs humans when it, in fact, does not?

For example does it count if you code into the AI the belief that it is being run in a "virtual sandbox," watched by a smarter "overseer" and that if it takes out the human race in any way, then it will be shut down/tortured/highly negative utilitied by said overseer?

Just because an AI needs humans to exist, does that really mean that it won't kill them anyway?

This argument seems to be co...

2stcredzero
It's probably been thought of here and other places before, but I just thought of the "Whoops AI" -- a superhuman AGI that accidentally or purposefully destroys the human race, but then changes its mind and brings us back as a simulation.
6Vladimir_Nesov
There is an idea I called "eventually-Friendly AI", where an AI is given a correct, but very complicated definition of human values, so that it needs a lot of resources to make heads or tails of it, and in the process it might behave rather indifferently to everything except the problem of figuring out what its goal definition says. See the comments to this post.
-3Logos01
This is commonly referred to as a "counterfactual" AGI.
0Kaj_Sotala
We mention the "layered virtual worlds" idea, in which the AI can't be sure of whether it has broken out to the "top level" of the universe or whether it's still contained in an even more elaborate virtual world than the one it just broke out of. Come to think of it, Rolf Nelson's simulation argument attack would probably be worth mentioning, too.

One of the categories is "They Will Need Us" - claims that AI is no big risk, because AI will always have a need of something that humans have, and that they will therefore preserve us.

I claim something like this. Specifically, I claim that a broad range of superintelligences will preserve their history, and run historical simulations, to help them understand the world. Many possible superintelligences will study their own origins intensely - in order to help them to understand the possible forms of aliens which they might encounter in the fu...

5CarlShulman
I agree with this, but the instrumental scientific motivation to predict hostile aliens that might be encountered in space: 1) doesn't protect quality of life or lifespan for the simulations, brains-in-vats, and Truman Show inhabitants; indeed, it suggests poor historical QOL levels and short lifespans; and 2) seems likely to consume only a tiny portion of all the resources available to an interstellar civilization, in light of diminishing returns.
0gwern
Would that be due to the proportions between the surface and volume of a sphere, or just the general observation that the more you investigate an area without finding anything the less likely anything exists?
2CarlShulman
The latter: as you put ever more ridiculous amounts of resources into modeling aliens you'll find fewer insights per resource unit, especially actionable insights.
2Kaj_Sotala
Thanks, this is useful. You wouldn't have a separate write-up of it somewhere? (We can cite a blog comment, but it's probably more respectable to at least cite something that's on its own webpage.)
0timtyler
Sorry, no proper write-up.
0jacob_cannell
Yes. I'm surprised this isn't brought up more. AIXI formalizes the idea that intelligence involves predicting the future through deep simulation, and human brains use something like a Monte Carlo simulation approach as well.
0TimS
I don't understand why you think "preserve history, run historical simulations, and study AI's origins" implies that the AI will preserve actual living humans for any significant amount of time. One generation (practically the blink of an eye compared to plausible AI lifetimes) seems like it would produce more than enough data.
1jacob_cannell
Given enough computation, the best way to generate accurate generative probabilistic models is to run lots of detailed Monte Carlo simulations. AIXI-like models do this; human brains do it to a limited extent.
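
A minimal sketch of the Monte Carlo idea being referenced, assuming a deliberately trivial random-walk "world model"; this is purely illustrative and not anything AIXI-specific:

```python
# Minimal sketch: estimate a property of an uncertain process by running
# many sampled rollouts of a generative model. The random-walk "world model"
# is a stand-in chosen only for illustration.
import random

def rollout(steps: int = 100) -> float:
    """Simulate one trajectory of a toy stochastic process."""
    x = 0.0
    for _ in range(steps):
        x += random.gauss(0.0, 1.0)
    return x

def monte_carlo_probability(threshold: float = 10.0, n_samples: int = 10_000) -> float:
    """Estimate P(final state > threshold) from repeated simulation."""
    hits = sum(1 for _ in range(n_samples) if rollout() > threshold)
    return hits / n_samples

if __name__ == "__main__":
    # More samples and more detailed rollouts tighten the estimate, which is
    # the sense in which extra computation buys a more accurate model.
    print(monte_carlo_probability())
```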
-2TimS
What does that have to do with whether an AI will need living human beings? It seems like there is an unstated premise that living humans are equivalent to simulated humans. That's a defensible position, but implicitly asserting the position is not equivalent to defending it.
4jacob_cannell
The AI will need to simulate its history as a natural necessary component of its 'thinking'. For a powerful enough AI, this will entail simulation down to the level of say the Matrix, where individual computers and human minds are simulated at their natural computational scale level. Yes. I'm assuming most people here are sufficiently familiar with this position such that it doesn't require my defense in a comment like this.
0timtyler
My estimate is more on the "billions of years" timescale. What aliens one might meet is important, potentially life-threatening information, and humans are a big, important and deep clue about the topic that would be difficult to exhaust.
0DanArmak
Being inexhaustible, even if true, is not enough. Keeping humans around (or simulated) would have to be a better use of resources (marginally) than anything else the AGI could think of. That's a strong claim; why do you think that? Also, if AIs replace humans in the course of history, then arguably studying other AIs would be an even bigger clue to possible aliens. And AIs can be much more diverse than humans, so there would be more to study.
2timtyler
History is valuable, and irreplaceable if lost. Possibly a long sequence of wars early on might destroy it before it could be properly backed up - but the chances of such a loss seem low. Human history seems particularly significant when considering the forms of possible aliens. But I could be wrong about some of this. I'm not overwhelmingly confident of this line of reasoning - though I am pretty sure that many others are neglecting it without having good reasons for doing so.
1DanArmak
Why is human history so important, or useful, in predicting aliens? Why would it be better than:
  • Analyzing the AIs and their history (cheaper, since they exist anyway)
  • Creating and analyzing other tailored life forms (allows testing hypotheses rather than analyzing human history passively)
  • Analyzing existing non-human life (could give data about biological evolution as well as humans could; experiments about the evolution of intelligence might be more useful than experiments on the behavior of already-evolved intelligence)
  • Simulating, or actually raising, some humans and analyzing them (may be simpler or cheaper than recreating or recording human history due to size, and allows for interactive experiments and many scenarios, unlike the single scenario of human history)
0timtyler
Human history's importance gets diluted once advanced aliens are encountered - though the chances of any such encounter soon seem slender - for various reasons. Primitive aliens would still be very interesting. Experiments that create living humans are mostly "fine by me". They'll (probably) preserve a whole chunk of our ecosystem - for the reasons you mention - though analysing only non-human life (or post-human life) would skip over some of the most interesting bits of their own origin story, which they (like us) are likely to be particularly interested in. After a while, aliens are likely to be our descendants' biggest threat. They probably won't throw away vital clues relating to the issue casually.

One implicit objection that I've seen along these lines is that machines can't be 'truly creative', though this is usually held up as a "why AGI is impossible" argument rather than "why AGI would need to keep humans". Not sure about sources, though. Maybe Searle has something relevant.

When I interviewed Vinge for my book on the Singularity, he said:

1) Life is like subroutine-threaded code, and it's very hard to get rid of all dependencies. 2) If all machines went away, we would build up to a singularity again because this is in our nature, so keeping us around is a kind of backup system.

Contact me if you want more details for a formal citation. I took and still have notes from the interview.

It's fiction but maybe you can use the Matrix movies as an example?

2Kaj_Sotala
We're trying to stick to non-fiction. Aside from Asimov's Laws, which have to be mentioned as they get brought up so often, if we started including fiction there'd just be too much stuff to look at and no clear criterion for where to draw the line about what to include.

I understand this fits the format you're working with, but I feel like there's something not quite right about this approach to putting together arguments.

2Kaj_Sotala
I wouldn't say that I'm putting together arguments this way. Rather, we want to have comprehensive coverage of the various proposals that have been made, and I'm certain that I've heard more arguments of this type brought up, so I'm asking LW for pointers to any obvious examples of them that I might have forgotten.

And don't forget the elephant in the living room: An FAI needs humans, inasmuch as its top goal is precisely the continued existence and welfare of humans.

5TheOtherDave
FAI's "top goal" is whatever it is that humans' collective "top goal" is. It's not at all clear that that necessarily includes the continued existence and welfare of humans.
3evand
Especially if you get picky about the definition of a human. It seems plausible that the humans of today turn into the uploads of tomorrow. I can envision a scenario in which there is continuity of consciousness, no one dies, most people are happy with the results, and there are no "humans" left by some definitions of the word.
1TheOtherDave
Sure. You don't even have to get absurdly picky; there are lots of scenarios like that in which there are no humans left by my definitions of the word, and I still consider that an improvement over the status quo.
0timtyler
Humans have no instrumental value? Why not?

I've heard some sort of appreciation or respect argument. An AI would recognize that we built it and so respect us enough to keep us alive. One form this reasoning might take is that an AI would notice that it wouldn't want to be destroyed by an even more powerful AI that it created, and so it wouldn't destroy its own creators. I don't have a source though. I may have just heard these in conversations with friends.

0Kaj_Sotala
Now that you mention it, I've heard that too, but can't remember a source either.
-4stcredzero
Perhaps the first superhuman AGI isn't tremendously superhuman, but smart enough to realize that humanity's goose would be cooked if it got any smarter or started the exponential explosion of self-improving superhuman intelligence. So it proceeds to take over the world and rules it as an oppressive dictator to prevent this from happening. To preserve humanity, it builds colonizing starships operated by copies of itself, which terraform and seed other planets with human life; these colonies are ruled in such a fashion that society is kept frozen in something resembling "the dark ages," where science and technological industry exist but are disguised as fantasy magic.
GLaDOS

Please write this science fiction story. It doesn't seem useful for predictions, though.

3stcredzero
It was my intention to come up with a neat sci-fi plot, not to make a prediction. If you like that as a plot, you might want to check out "Scrapped Princess."

I'm familiar with an argument that humans will always have a comparative advantage relative to AIs and so they'll keep us around, though I don't think it's very good and I'm not sure I've seen it in writing.

0thomblake
To expand a bit on why I don't think it's very good: it requires a human perspective. Comparative advantage always appears to be there because, from a human vantage point, you don't see the line where trading with the neighboring tribe becomes less profitable than eating them.
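
A toy numeric sketch of both halves of this point; the productivity figures are invented purely for illustration:

```python
# Toy numbers for the comparative-advantage argument and the objection above.
# All figures are invented for illustration.

# Units produced per hour of effort.
ai    = {"compute": 10.0, "food": 10.0}   # absolutely better at everything
human = {"compute": 0.1,  "food": 1.0}

# Opportunity cost of one unit of food, measured in forgone compute:
ai_food_cost = ai["compute"] / ai["food"]           # 1.0
human_food_cost = human["compute"] / human["food"]  # 0.1

# Since 0.1 < 1.0, the human holds the comparative advantage in food, so some
# mutually beneficial trade price always exists -- the standard argument.
assert human_food_cost < ai_food_cost

# But the gain from that trade is bounded by the value of the human's output,
# here worth at most about one compute-unit per human-hour:
max_trade_gain = human["food"] * ai_food_cost

# The objection: the argument compares trade only against no trade, not against
# repurposing the human's land, energy and atoms. If doing so yields more than
# the trade surplus (hypothetical figure below), "eating them" wins.
seized_yield = 50.0
print(seized_yield > max_trade_gain)  # True under these invented numbers
```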

As a related side point, "needing humans" is not equivalent to a good outcome. The Blight also needed sophonts.

Now I did my generalizing from fictional evidence for today.

0stcredzero
Now for mine. Minds from Iain M. Banks's Culture books keep humans around because we're complex and surprising, especially when there are billions of us engaged in myriad social and economic relationships. This presupposes that 1) humans are no threat to Minds, 2) Minds can afford to keep us around, and 3) the way they keep us around won't suck. 3 is basically just a restatement of FAI. 1 and 2 seem quite likely, though.

Note that this claim is distinct from the claim that (due to general economic theory) it's more beneficial for the AIs to trade with us than to destroy us. We already have enough citations for that argument, what we're looking for are arguments saying that destroying humans would mean losing something essentially irreplaceable.

I don't think there are particularly good arguments in this department (those two quoted ones are certainly not correct). Except for the trade argument, it might happen that it would be uneconomical for an AGI to harvest atoms from our bodie...

0Kaj_Sotala
Me neither, but they get brought up every now and then, so we should mention them - if only to explain in a later section why they don't work.
2DanArmak
It's hard to present arguments well when one views them as wrong and perhaps even silly - as most or all people here do. Perhaps you could get input from people who accept the relevant arguments?
0Kaj_Sotala
This is a good idea. Do you have ideas of where such people could be found?
0DanArmak
I don't know myself, but since you're gathering references anyway, maybe you could try contacting some of them.