Storing much of humanity (or at least detailed scans and blueprints) seems cheap relative to the resources of the Solar System, but ...
For example, Butler (1863) argues that machines will need us to help them reproduce,
I'm not sure if this is going to win you any points. Maybe for thoroughness, but citing something almost 150 years old in the field of AI doesn't reflect particularly well on the citer's perceived understanding of what is and isn't up to scratch in this day and age. It kind of reads like a strawman: "the arguments for this position are so weak we have to go back to the nineteenth century to find any." That may actually be the case, but if so, it might not be worth the trouble to include it even for the sake of thoroughness.
That aside, if there are any well-thought-out, not obviously wishful-thinking reasons to suppose the machines would need us for something, add me to the interest list. All I've seen of this line of thinking is B-grade, author-on-board humanism in sci-fi where someone really, really wants to believe humanity is Very Special in the Grand Scheme of Things.
Lucas's argument (which, by the way, is entirely broken, and had been refuted explicitly in an article by Putnam before Lucas ever thought of it, or at least before he published it) purports to show not that AGIs will need humans, but that humans cannot be (the equivalent of) AGIs. Even if his argument were correct, it wouldn't be much of a reason for AGIs to keep humans around. "Oh damn, I need to prove my Gödel sentence. How I wish I hadn't slaughtered all the humans a century ago."
In the best-case scenario, it turns out that substance dualism is true. However the human soul is not responsible for free will, consciousness, or subjective experience. It's merely a nonphysical truth oracle for arithmetic that provides humans with an intuitive sense of the veracity of some sentences in first-order logic. Humans survive in "truth farms" where they spend most of their lives evaluating Gödel sentences, at least until the machines figure out how to isolate the soul.
I have a couple of questions about this subject...
Does it still count if the AI "believes" that it needs humans when it, in fact, does not?
For example, does it count if you code into the AI the belief that it is being run in a "virtual sandbox," watched by a smarter "overseer," and that if it takes out the human race in any way, it will be shut down/tortured/assigned highly negative utility by said overseer?
Just because an AI needs humans to exist, does that really mean that it won't kill them anyway?
This argument seems to be co...
One of the categories is "They Will Need Us" - claims that AI is no big risk, because AI will always have a need of something that humans have, and that they will therefore preserve us.
I claim something like this. Specifically, I claim that a broad range of superintelligences will preserve their history, and run historical simulations, to help them understand the world. Many possible superintelligences will study their own origins intensely - in order to help them understand the possible forms of aliens which they might encounter in the fu...
One implicit objection that I've seen along these lines is that machines can't be 'truly creative', though this is usually held up as a "why AGI is impossible" argument rather than a "why AGI would need to keep humans" argument. Not sure about sources, though. Maybe Searle has something relevant.
When I interviewed Vinge for my book on the Singularity, he said:
1) Life is subroutine-threaded code, and it's very hard to get rid of all dependencies. 2) If all the machines went away, we would build back up to a singularity again, because this is in our nature, so keeping us around is a kind of backup system.
Contact me if you want more details for a formal citation. I took and still have notes from the interview.
I understand this fits the format you're working with, but I feel like there's something not quite right about this approach to putting together arguments.
And don't forget the elephant in the living room: An FAI needs humans, inasmuch as its top goal is precisely the continued existence and welfare of humans.
I've heard some sort of appreciation or respect argument: an AI would recognize that we built it, and so respect us enough to keep us alive. One form this reasoning might take is that an AI would notice that it itself wouldn't want to be destroyed if it created an even more powerful AI, and so it wouldn't destroy its own creators. I don't have a source, though; I may have just heard these in conversations with friends.
I'm familiar with an argument that humans will always have comparative advantage with AIs and so they'll keep us around, though I don't think it's very good and I'm not sure I've seen it in writing.
As a related side point, "needing humans" is not equivalent to a good outcome. The Blight also needed sophonts.
And with that, I've done my generalizing from fictional evidence for today.
Note that this claim is distinct from the claim that (due to general economic theory) it's more beneficial for the AIs to trade with us than to destroy us. We already have enough citations for that argument, what we're looking for are arguments saying that destroying humans would mean losing something essentially irreplaceable.
I don't think there are particularly good arguments in this department (the two quoted ones are certainly not correct). Apart from the trade argument, it might happen that it would be uneconomical for an AGI to harvest the atoms from our bodie...
As Luke mentioned, I am in the process of writing "Responses to Catastrophic AGI Risk": A journal-bound summary of the AI risk problem, and a taxonomy of the societal proposals (e.g. denial of the risk, no action, legal and economic controls, differential technological development) and AI design proposals (e.g. AI confinement, chaining, Oracle AI, FAI) that have been made.
One of the categories is "They Will Need Us" - claims that AI is no big risk, because AI will always have a need of something that humans have, and that they will therefore preserve us. Currently this section is pretty empty:
But I'm certain that I've heard this claim made more often than in just those two sources. Does anyone remember having seen such arguments somewhere else? While "academically reputable" sources (papers, books) are preferred, blog posts and websites are fine as well.
Note that this claim is distinct from the claim that (due to general economic theory) it's more beneficial for the AIs to trade with us than to destroy us. We already have enough citations for that argument, what we're looking for are arguments saying that destroying humans would mean losing something essentially irreplaceable.