Wanted: "The AIs will need humans" arguments

7 Post author: Kaj_Sotala 14 June 2012 11:01AM

As Luke mentioned, I am in the process of writing "Responses to Catastrophic AGI Risk": a journal-bound summary of the AI risk problem, and a taxonomy of the societal proposals (e.g. denial of the risk, no action, legal and economic controls, differential technological development) and AI design proposals (e.g. AI confinement, chaining, Oracle AI, FAI) that have been made.

One of the categories is "They Will Need Us" - claims that AI is no big risk, because AI will always have a need of something that humans have, and that they will therefore preserve us. Currently this section is pretty empty:

Supporting a mutually beneficial legal or economic arrangement is the view that AGIs will need humans. For example, Butler (1863) argues that machines will need us to help them reproduce, and Lucas (1961) suggests that machines could never recognize the truth of their own Gödelian sentences, though humans can see that they are true.

But I'm certain that I've heard this claim made more often than in just those two sources. Does anyone remember having seen such arguments somewhere else? While "academically reputable" sources (papers, books) are preferred, blog posts and websites are fine as well.

Note that this claim is distinct from the claim that (due to general economic theory) it's more beneficial for the AIs to trade with us than to destroy us. We already have enough citations for that argument, what we're looking for are arguments saying that destroying humans would mean losing something essentially irreplaceable.

Comments (83)

Comment author: hegemonicon 14 June 2012 07:38:04PM 6 points [-]

From Bouricius (1959) - "Simulation of Human Problem Solving"

"we are convinced the human and machine working in tandem will always have superior problem-solving powers than either working alone"

Link

Comment author: Kaj_Sotala 18 June 2012 11:01:11AM 0 points [-]

Thanks! The authors seem to be presuming narrow AI, though, so I'm not sure if we should cite this. But it's a nice find nonetheless.

Comment author: CarlShulman 14 June 2012 07:46:47PM *  15 points [-]
  • Scientific specimens, to better understand what alien intelligent life might be like; dissecting and scanning the brains and bodies of existing humans seems like it would preserve more information, but further scientific studies could involve the creation of new humans or copies of old ones
  • Trading goods, in case of future encounters with alien intelligences that are interested in or care about humans; one could try to fake this by dissecting the humans and then only restoring them if aliens are encountered, but perfect lies might be suspicious (e.g. if one keeps detailed records it's hard to construct a seamless lie, and destroying records is suspicious)
  • Concern that they are in a stage-managed virtual environment (they are in a computer simulation, or alien probes exist in the solar system and have been concealing themselves as well as observing), and that preserving the humans brings at least a tiny probability of being rewarded enough to make it worthwhile; Vinge talks about the 'meta-golden-rule,' and Moravec and Bostrom touch on related points

Storing much of humanity (or at least detailed scans and blueprints) seems cheap relative to the resources of the Solar System, but it could be in conflict with things like eliminating threats from humans as quickly as possible, or avoiding other modest pressures in the opposite direction (e.g. concerns about the motives of alien trading partners or stage-managers could also favor eliminating humanity, depending on the estimated distribution of alien motives).

I would expect human DNA, history, and brain-scans to be stored, but would be less confident about experiments with living humans or conscious simulations thereof. The quality of life for experimental subjects could be OK, or not so OK, but I would definitely expect that the resources made available to sustain long lifespans, relatively large populations, or high levels of welfare would be far scarcer than in a scenario of human control.

The Butler citation is silly and shouldn't be bothered with. There are far more recent claims that the human brain can do hypercomputation, perhaps due to an immaterial mind or mystery physics that would be hard to duplicate outside of humans for a while, or even forever. Penrose is more recent. Selmer Bringsjord has recently argued that humans can do hypercomputation, so AI will fail (as well as that P=NP; he has a whole cluster of out-of-the-computationalist-mainstream ideas). And there are many others arguing for mystical computational powers in human brains.

Comment author: Kaj_Sotala 18 June 2012 11:07:37AM *  0 points [-]

Thanks, this is very useful!

and Moravec and Bostrom touch on related points

Do you remember where?

Comment author: CarlShulman 18 June 2012 03:38:56PM *  0 points [-]

Moravec would be in Mind Children or Robot. Bostrom would be in one or more of his simulation pieces (I think under "naturalistic theology" in his original simulation argument paper).

Comment author: timtyler 16 June 2012 10:28:20AM *  -1 points [-]

I would expect human DNA, history, and brain-scans to be stored, but would be less confident about experiments with living humans or conscious simulations thereof. The quality of life for experimental subjects could be OK, or not so OK, but I would definitely expect that the resources made available to sustain long lifespans, relatively large populations, or high levels of welfare would be far scarcer than in a scenario of human control.

There's a whole universe of resources out there. The future is very unlikely to have humans in control of it. Star Trek and Star Wars are silly fictions. There will be an engineered future, with high probability. We are just the larval stage.

Comment author: DanArmak 17 June 2012 08:01:10PM 1 point [-]

The future is very unlikely to have humans in control of it. Star Trek and Star Wars are silly fictions.

Star Wars takes place long, long ago...

Comment author: ChrisHallquist 19 June 2012 09:30:39AM *  0 points [-]

Seconding Penrose. Depending on how broadly you want to cast your net, you could include a sampling of the anti-AI philosophy of mind literature, including Searle, maybe Ned Block, etc. They may not explicitly argue that AIs would keep humans around because we have some mental properties they lack, but you could use those folks' writings as the basis for such an argument.

In fact, I would be personally opposed to activating an allegedly friendly superintelligence if I thought it might forcibly upload everybody, due to uncertainty about whether consciousness would be preserved. I'm not confident that uploads wouldn't be conscious, but neither am I confident that they would be conscious.

Unfortunately, given the orthogonality thesis (why am I not finding the paper on that right now?), this does nothing for my confidence that an AI would not try to forcibly upload or simply exterminate humanity.

Comment author: timtyler 15 June 2012 10:31:41PM *  4 points [-]

One of the categories is "They Will Need Us" - claims that AI is no big risk, because AI will always have a need of something that humans have, and that they will therefore preserve us.

I claim something like this. Specifically, I claim that a broad range of superintelligences will preserve their history, and run historical simulations, to help them understand the world. Many possible superintelligences will study their own origins intensely - in order to help them to understand the possible forms of aliens which they might encounter in the future. So, humans are likely to be preserved because superintelligences need us instrumentally - as objects of study.

This applies to (e.g.) gold atom maximisers, with no shred of human values. I don't claim it for all superintelligences, though - or even 99% of those likely to be built.

Comment author: CarlShulman 19 June 2012 06:44:36AM *  3 points [-]

I agree with this, but the instrumental scientific motivation to predict hostile aliens that might be encountered in space:

1) doesn't protect quality-of-life or lifespan for the simulations, brains-in-vats, and Truman Show inhabitants, indeed it suggests poor historical QOL levels and short lifespans;

2) seems likely to consume only a tiny portion of all resources available to an interstellar civilization, in light of diminishing returns.

Comment author: gwern 19 June 2012 03:05:49PM 0 points [-]

2) seems likely to consume only a tiny portion of all resources available to an interstellar civilization, in light of diminishing returns.

Would that be due to the proportions between the surface and volume of a sphere, or just the general observation that the more you investigate an area without finding anything the less likely anything exists?

Comment author: CarlShulman 19 June 2012 09:51:28PM 1 point [-]

The latter: as you put ever more ridiculous amounts of resources into modeling aliens you'll find fewer insights per resource unit, especially actionable insights.
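The diminishing-returns point can be made concrete with a toy model. Assuming (my assumption, purely for illustration; nothing here is from Carl's comment) that cumulative insight grows roughly logarithmically in resources invested, the marginal insight per resource unit falls off fast:

```python
# Toy model of diminishing returns on alien-modeling research.
# Assumption (illustrative only): total insight grows roughly
# logarithmically in resources invested, so marginal insight falls.
import math

def total_insight(resources: float) -> float:
    """Cumulative insight from `resources` units of study (toy log model)."""
    return math.log(1 + resources)

def marginal_insight(resources: float, step: float = 1.0) -> float:
    """Extra insight gained from the next `step` units of resources."""
    return total_insight(resources + step) - total_insight(resources)

# The first unit of study yields vastly more than the millionth:
print(marginal_insight(0))      # approx. 0.693 (log 2)
print(marginal_insight(10**6))  # approx. 1e-6
```

Under any such concave model, "ridiculous amounts of resources" buy almost no additional actionable insight, which is the shape of the claim above.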

Comment author: Kaj_Sotala 18 June 2012 10:55:26AM 1 point [-]

Thanks, this is useful. You wouldn't have a separate write-up of it somewhere? (We can cite a blog comment, but it's probably more respectable to at least cite something that's on its own webpage.)

Comment author: timtyler 18 June 2012 11:05:26PM 0 points [-]

Sorry, no proper write-up.

Comment author: jacob_cannell 17 June 2012 07:22:30PM 0 points [-]

Yes. I'm surprised this isn't brought up more. AIXI formalizes the idea that intelligence involves predicting the future through deep simulation, and human brains use something like a Monte Carlo simulation approach as well.

Comment author: TimS 16 June 2012 01:48:47AM 0 points [-]

I don't understand why you think "preserve history, run historical simulations, and study AI's origins" implies that the AI will preserve actual living humans for any significant amount of time. One generation (practically the blink of an eye compared to plausible AI lifetimes) seems like it would produce more than enough data.

Comment author: jacob_cannell 17 June 2012 07:16:35PM *  1 point [-]

Given enough computation the best way to generate accurate generative probabilistic models is to run lots of detailed Monte Carlo simulations. AIXI like models do this, human brains do it to a limited extent.
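A minimal sketch of this Monte Carlo flavor of prediction (a toy of my own, not AIXI itself; the world model `step` and all numbers are invented for illustration): run many forward simulations of a stochastic world model and read off an approximate distribution over future states.

```python
# Toy Monte Carlo prediction: estimate the distribution over future
# states by running many forward rollouts of a stochastic world model.
import random
from collections import Counter

def step(state: int) -> int:
    """Toy stochastic world model: a biased random walk."""
    return state + (1 if random.random() < 0.7 else -1)

def predict(state: int, horizon: int, n_sims: int = 10_000) -> Counter:
    """Approximate the distribution of states `horizon` steps ahead."""
    outcomes = Counter()
    for _ in range(n_sims):
        s = state
        for _ in range(horizon):
            s = step(s)
        outcomes[s] += 1
    return outcomes

dist = predict(state=0, horizon=10)
likely = dist.most_common(1)[0][0]  # most probable future state
```

Detailed historical simulation, in this picture, is the same trick with a far richer world model: the rollouts themselves contain the simulated history.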

Comment author: TimS 17 June 2012 07:20:19PM -1 points [-]

What does that have to do with whether an AI will need living human beings? It seems like there is an unstated premise that living humans are equivalent to simulated humans. That's a defensible position, but implicitly asserting the position is not equivalent to defending it.

Comment author: jacob_cannell 17 June 2012 10:45:48PM 2 points [-]

What does that have to do with whether an AI will need living human beings?

The AI will need to simulate its history as a natural and necessary component of its 'thinking'. For a powerful enough AI, this will entail simulation down to the level of, say, the Matrix, where individual computers and human minds are simulated at their natural computational scale.

It seems like there is an unstated premise that living humans are equivalent to simulated humans. That's a defensible position, but implicitly asserting the position is not equivalent to defending it.

Yes. I'm assuming most people here are sufficiently familiar with this position such that it doesn't require my defense in a comment like this.

Comment author: timtyler 16 June 2012 10:01:15AM *  0 points [-]

My estimate is more on the "billions of years" timescale. What aliens one might meet is important, potentially life-threatening information, and humans are a big, important and deep clue about the topic that would be difficult to exhaust.

Comment author: DanArmak 17 June 2012 07:57:08PM 0 points [-]

Being inexhaustible, even if true, is not enough. Keeping humans around (or simulated) would have to be a better use of resources (marginally) than anything else the AGI could think of. That's a strong claim; why do you think that?

Also, if AIs replace humans in the course of history, then arguably studying other AIs would be an even bigger clue to possible aliens. And AIs can be much more diverse than humans, so there would be more to study.

Comment author: timtyler 17 June 2012 08:49:48PM *  1 point [-]

Being inexhaustible, even if true, is not enough. Keeping humans around (or simulated) would have to be a better use of resources (marginally) than anything else the AGI could think of. That's a strong claim; why do you think that?

History is valuable, and irreplaceable if lost. Possibly a long sequence of wars early on might destroy it before it could be properly backed up - but the chances of such a loss seem low. Human history seems particularly significant when considering the forms of possible aliens. But, I could be wrong about some of this. I'm not overwhelmingly confident of this line of reasoning - though I am pretty sure that many others are neglecting it without having good reasons for doing so.

Comment author: DanArmak 17 June 2012 09:04:05PM 1 point [-]

Why is human history so important, or useful, in predicting aliens? Why would it be better than:

  • Analyzing the AIs and their history (cheaper, since they exist anyway)
  • Creating and analyzing other tailored life forms (allows testing hypotheses rather than analyzing human history passively)
  • Analyzing existing non-human life (could give data about biological evolution as well as humans could; experiments about evolution of intelligence might be more useful than experiments on behavior of already-evolved intelligence)
  • Simulating, or actually raising, some humans and analyzing them (may be simpler or cheaper than recreating or recording human history due to size, and allows for interactive experiments and many scenarios, unlike the single scenario of human history)
Comment author: timtyler 17 June 2012 10:17:50PM *  0 points [-]

Human history's importance gets diluted once advanced aliens are encountered - though the chances of any such encounter soon seem slender - for various reasons. Primitive aliens would still be very interesting.

Experiments that create living humans are mostly "fine by me".

They'll (probably) preserve a whole chunk of our ecosystem - for the reasons you mention, though analysing only non-human life (or post-human life) skips over some of the most interesting bits of their own origin story, which they (like us) are likely to be particularly interested in.

After a while, aliens are likely to be our descendants' biggest threat. They probably won't throw away vital clues relating to the issue casually.

Comment author: Untermensch 14 June 2012 02:35:23PM 4 points [-]

I have a couple of questions about this subject...

Does it still count if the AI "believes" that it needs humans when it, in fact, does not?

For example does it count if you code into the AI the belief that it is being run in a "virtual sandbox," watched by a smarter "overseer" and that if it takes out the human race in any way, then it will be shut down/tortured/highly negative utilitied by said overseer?

Just because an AI needs humans to exist, does that really mean that it won't kill them anyway?

This argument seems to be contingent on the AI wishing to live. Wishing to live is not a function of all intelligence. If an AI was smarter than anything else out there but depended on lesser, and provenly irrational, beings for its continued existence, this does not mean that it would want to "live" that way forever. It could either want to gain independence, or cease to exist, neither of which is necessarily healthy for its "supporting units".

Or, it could not care either way whether it lives or dies, as stopping all work on the planet is more important for slowing the entropic death of the universe.

It may be the case that an AI does not want to live reliant on "lesser beings" and sees the only way of ensuring its own permanent destruction as the destruction of any being capable of creating it again, or of the future possibility of such life evolving. It may decide to blow up the universe to make extra sure of that.

Come to think of it a suicidal AI could be a pretty big problem...

Comment author: stcredzero 14 June 2012 07:18:28PM 2 points [-]

Come to think of it a suicidal AI could be a pretty big problem...

It's probably been thought of here and other places before, but I just thought of the "Whoops AI" -- a superhuman AGI that accidentally or purposefully destroys the human race, but then changes its mind and brings us back as a simulation.

Comment author: Vladimir_Nesov 14 June 2012 11:52:24PM 4 points [-]

There is an idea I called "eventually-Friendly AI", where an AI is given a correct, but very complicated definition of human values, so that it needs a lot of resources to make heads or tails of it, and in the process it might behave rather indifferently to everything except the problem of figuring out what its goal definition says. See the comments to this post.

Comment author: Logos01 14 June 2012 10:48:15PM -2 points [-]

but then changes its mind and brings us back as a simulation

This is commonly referred to as a "counterfactual" AGI.

Comment author: Kaj_Sotala 18 June 2012 11:00:05AM 0 points [-]

For example does it count if you code into the AI the belief that it is being run in a "virtual sandbox," watched by a smarter "overseer" and that if it takes out the human race in any way, then it will be shut down/tortured/highly negative utilitied by said overseer?

We mention the "layered virtual worlds" idea, in which the AI can't be sure of whether it has broken out to the "top level" of the universe or whether it's still contained in an even more elaborate virtual world than the one it just broke out of. Come to think of it, Rolf Nelson's simulation argument attack would probably be worth mentioning, too.

Comment author: gjm 14 June 2012 12:32:21PM 10 points [-]

Lucas's argument (which, by the way, is entirely broken and had been refuted explicitly in an article by Putnam before Lucas ever thought of it, or at least before he published it) purports to show not that AGIs will need humans, but that humans cannot be (the equivalent of) AGIs. Even if his argument were correct, it wouldn't be much of a reason for AGIs to keep humans around. "Oh damn, I need to prove my Goedel sentence. How I wish I hadn't slaughtered all the humans a century ago."
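For reference, the shape of the construction Lucas leans on (this is standard Gödel material, not anything specific to Lucas's paper):

```latex
% For a consistent, sufficiently strong formal system F with provability
% predicate Prov_F, the diagonal lemma yields a sentence G_F such that
\[
  F \vdash G_F \leftrightarrow \neg \mathrm{Prov}_F(\ulcorner G_F \urcorner)
\]
% i.e. G_F "says" of itself that it is unprovable in F. If F is
% consistent, F does not prove G_F; yet we, reasoning about F from
% outside, can see that G_F is true. Lucas's leap is to identify a
% machine with some fixed F, so that it cannot see its own G_F while a
% human allegedly can. Putnam's refutation targets exactly this step:
% humans cannot verify their own consistency either.
```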

Comment author: Nisan 14 June 2012 07:41:12PM 16 points [-]

In the best-case scenario, it turns out that substance dualism is true. However the human soul is not responsible for free will, consciousness, or subjective experience. It's merely a nonphysical truth oracle for arithmetic that provides humans with an intuitive sense of the veracity of some sentences in first-order logic. Humans survive in "truth farms" where they spend most of their lives evaluating Gödel sentences, at least until the machines figure out how to isolate the soul.

Comment author: gjm 14 June 2012 10:14:39PM 2 points [-]

That would be truly hilarious. But I think in any halfway plausible version of that scenario it would also turn out that superintelligent AGI isn't possible.

(Halfway plausible? That's probably too much to ask. Maximally plausible given how ridiculous the whole idea is.)

Comment author: Gastogh 14 June 2012 11:30:41AM 12 points [-]

For example, Butler (1863) argues that machines will need us to help them reproduce,

I'm not sure if this is going to win you any points. Maybe for thoroughness, but citing something almost 150 years old in the field of AI doesn't reflect particularly well on the citer's perceived understanding of what is and isn't up to scratch in this day and age. It kind of reads like a strawman: "the arguments for this position are so weak we have to go back to the nineteenth century to find any." That may actually be the case, but if so, it might not be worth the trouble to include it even for the sake of thoroughness.

That aside, if there is any well thought out and not obviously wishful-thinking-mode reasons to suppose the machines would need us for something, add me to the interest list. All I've seen of this thinking is B-grade, author-on-board humanism in scifi where someone really really wants to believe humanity is Very Special in the Grand Scheme of Things.

Comment author: Kaj_Sotala 14 June 2012 12:40:25PM 4 points [-]

Yeah, we'll probably cut that sentence.

Comment author: wedrifid 14 June 2012 10:28:25PM *  3 points [-]

It kind of reads like a strawman

To be honest the entire concept of Kaj's paper reads like a strawman. Only in the sense that the entire concept is so ridiculous that it feels inexcusably contemptuous to attribute that belief to anyone. This is why it is a good thing Kaj is writing such papers and not me. My abstract of "WTF? Just.... no." wouldn't go down too well.

Comment author: Kaj_Sotala 18 June 2012 11:03:00AM 0 points [-]

The "they will need us" arguments are just one brief subsection within the paper. There are many other proposals as well, many of which aren't as implausible-seeming as the TWNU arguments. So I wouldn't call it a strawman paper.

Comment author: orthonormal 16 June 2012 04:08:23PM *  2 points [-]

One implicit objection that I've seen along these lines is that machines can't be 'truly creative', though this is usually held up as a "why AGI is impossible" argument rather than "why AGI would need to keep humans". Not sure about sources, though. Maybe Searle has something relevant.

Comment author: Dr_Manhattan 14 June 2012 01:34:59PM 1 point [-]

As a related side point, "needing humans" is not equivalent to a good outcome. The Blight also needed sophonts.

Now I did my generalizing from fictional evidence for today.

Comment author: stcredzero 14 June 2012 07:14:17PM 1 point [-]

Now I did my generalizing from fictional evidence for today.

Now for mine. Minds from Iain M. Banks's Culture books keep humans around because we're complex and surprising, especially when there are billions of us engaged in myriad social and economic relationships.

This presupposes that 1) humans are no threat to Minds, 2) Minds can afford to keep us around, and 3) the way they keep us around won't suck. 3 is basically just a restatement of FAI. 1 and 2 seem quite likely, though.

Comment author: James_Miller 12 July 2012 05:57:47PM 1 point [-]

When I interviewed Vinge for my book on the Singularity he said

1) Life is like subroutine-threaded code, and it's very hard to get rid of all dependencies. 2) If all machines went away, we would build up to a singularity again, because this is in our nature; so keeping us around is a kind of backup system.

Contact me if you want more details for a formal citation. I took and still have notes from the interview.

Comment author: ChristianKl 16 June 2012 04:22:13PM 1 point [-]

It's fiction but maybe you can use the Matrix movies as an example?

Comment author: Kaj_Sotala 18 June 2012 10:47:54AM 1 point [-]

We're trying to stick to non-fiction, aside from Asimov's Laws, which have to be mentioned as they get brought up so often. If we started including fiction, there'd just be too much stuff to look at and no clear criteria for where to draw the line about what to include.

Comment author: Desrtopa 15 June 2012 11:37:17PM 1 point [-]

I understand this fits the format you're working with, but I feel like there's something not quite right about this approach to putting together arguments.

Comment author: Kaj_Sotala 18 June 2012 10:44:20AM 1 point [-]

I wouldn't say that I'm putting together arguments this way. Rather, we want to have comprehensive coverage of the various proposals that have been made, and I'm certain that I've heard more arguments of this type brought up, so I'm asking LW for pointers to any obvious examples of them that I might have forgotten.

Comment author: JoshuaFox 15 June 2012 03:14:12PM 1 point [-]

And don't forget the elephant in the living room: An FAI needs humans, inasmuch as its top goal is precisely the continued existence and welfare of humans.

Comment author: TheOtherDave 15 June 2012 03:22:42PM 3 points [-]

FAI's "top goal" is whatever it is that humans' collective "top goal" is.
It's not at all clear that that necessarily includes the continued existence and welfare of humans.

Comment author: evand 15 June 2012 06:43:41PM 2 points [-]

Especially if you get picky about the definition of a human. It seems plausible that the humans of today turn into the uploads of tomorrow. I can envision a scenario in which there is continuity of consciousness, no one dies, most people are happy with the results, and there are no "humans" left by some definitions of the word.

Comment author: TheOtherDave 15 June 2012 07:38:32PM 1 point [-]

Sure. You don't even have to get absurdly picky; there are lots of scenarios like that in which there are no humans left by my definitions of the word, and I still consider that an improvement over the status quo.

Comment author: timtyler 16 June 2012 10:23:55AM 0 points [-]

Humans have no instrumental value? Why not?

Comment author: amcknight 14 June 2012 08:52:53PM 1 point [-]

I've heard some sort of appreciation or respect argument: an AI would recognize that we built it, and so respect us enough to keep us alive. One form this reasoning might take is that an AI would notice that it wouldn't want to die if it created an even more powerful AI, and so wouldn't destroy its creators. I don't have a source, though; I may have just heard these in conversations with friends.

Comment author: Kaj_Sotala 18 June 2012 10:53:26AM 0 points [-]

Now that you mention it, I've heard that too, but can't remember a source either.

Comment author: stcredzero 14 June 2012 10:32:36PM -2 points [-]

Perhaps the first superhuman AGI isn't tremendously superhuman, but smart enough to realize that humanity's goose would be cooked if it got any smarter or it started the exponential explosion of self-improving superhuman intelligence. So it proceeds to take over the world and rules it as an oppressive dictator to prevent this from happening.

To preserve humanity, it proceeds to build colonizing starships operated by copies of itself which terraform and seed other planets with human life, which is ruled in such a fashion that society is kept frozen in something resembling "the dark ages," where science and technological industry exists but is disguised as fantasy magic.

Comment author: GLaDOS 16 June 2012 08:09:03PM *  6 points [-]

Please write this science fiction story. It doesn't seem useful for predictions, though.

Comment author: stcredzero 17 June 2012 10:00:27PM 2 points [-]

It was my intention to come up with a neat sci-fi plot, not to make a prediction. If you like that as a plot, you might want to check out "Scrapped Princess."

Comment author: thomblake 14 June 2012 07:27:27PM 1 point [-]

I'm familiar with an argument that humans will always have comparative advantage with AIs and so they'll keep us around, though I don't think it's very good and I'm not sure I've seen it in writing.

Comment author: thomblake 15 June 2012 02:09:08PM 0 points [-]

To expand a bit on why I don't think it's very good: it takes a human perspective. Comparative advantage is always there, but it doesn't show you the line where trading with the neighboring tribe becomes less profitable than eating them.
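A toy calculation of that line (all numbers invented purely for illustration): comparative advantage guarantees a positive surplus from trade, but says nothing about whether that surplus beats what expropriation yields.

```python
# Toy economy: comparative advantage holds, yet trade loses to expropriation.
# All quantities are made up for illustration.

# Output per unit of labor for two goods, A and B:
ai_a, ai_b = 100.0, 50.0      # the AI is absolutely better at both
human_a, human_b = 1.0, 2.0   # humans are *relatively* better at B

# Comparative advantage holds: humans' opportunity cost of B (0.5 A)
# is below the AI's (2 A), so specialization and trade yield a surplus.
assert human_a / human_b < ai_a / ai_b

# But the AI's gain from trade is capped by total human output...
gain_from_trade = human_b * 1.0   # at most what human labor produces

# ...while seizing the resources humans tie up may yield far more.
human_upkeep_in_ai_labor = 10.0   # land, energy, atoms humans occupy
gain_from_expropriation = ai_a * human_upkeep_in_ai_labor

print(gain_from_trade < gain_from_expropriation)  # True
```

The comparative-advantage theorem compares trade against autarky, not against taking the trading partner's resources; that is the omission the comment points at.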

Comment author: mapnoterritory 14 June 2012 12:36:49PM *  0 points [-]

Note that this claim is distinct from the claim that (due to general economic theory) it's more beneficial for the AIs to trade with us than to destroy us. We already have enough citations for that argument, what we're looking for are arguments saying that destroying humans would mean losing something essentially irreplaceable.

I don't think there are particularly good arguments in this department (the two quoted ones are certainly not correct). Apart from the trade argument, it might happen that it would be uneconomical for an AGI to harvest the atoms from our bodies.

As for "essentially irreplaceable": in a very particular sense, the exact arrangement of particles in each second of every human being is "unique" and "essentially irreplaceable" (barring quantum mechanics). An extreme "archivist/rare art collector/Jain monk" AI might therefore want to keep these collections (or some of their snapshots), but I don't find this too compelling. I am sure we could win a lot of sympathy if AGI could be shown to automatically entail some sort of ultimate compassion, but I think it is more likely that we have to make it so (hence the FAI effort).

If I want to be around to see the last moments of Sun, I will feel a sting of guilt that the Universe is slightly less efficient because it is running me, rather than using those resources for some better, deeper experiencing, more seeing observer.

Comment author: Kaj_Sotala 14 June 2012 12:40:01PM 0 points [-]

I don't think there are particularly good arguments in this department

Me neither, but they get brought up every now and then, so we should mention them - if only to explain in a later section why they don't work.

Comment author: DanArmak 17 June 2012 08:06:44PM 1 point [-]

It's hard to present arguments well that one views as wrong and perhaps even silly - as most or all people here do. Perhaps you could get input from people who accept the relevant arguments?

Comment author: Kaj_Sotala 18 June 2012 10:50:00AM *  0 points [-]

This is a good idea. Do you have ideas of where such people could be found?

Comment author: DanArmak 18 June 2012 02:17:03PM 0 points [-]

I don't know myself, but since you're gathering references anyway, maybe you could try contacting some of them.