Being inexhaustible, even if true, is not enough. Keeping humans around (or simulated) would have to be a better use of resources (marginally) than anything else the AGI could think of. That's a strong claim; why do you think that?
History is valuable, and irreplaceable if lost. A long sequence of wars early on might conceivably destroy it before it could be properly backed up, but the chances of such a loss seem low. Human history seems particularly significant when considering the forms that possible aliens might take. But I could be wrong about some of this. I'm not overwhelmingly confident in this line of reasoning - though I am pretty sure that many others are neglecting it without good reason.
Why is human history so important, or useful, in predicting aliens? Why would it be better than:
As Luke mentioned, I am in the process of writing "Responses to Catastrophic AGI Risk": A journal-bound summary of the AI risk problem, and a taxonomy of the societal proposals (e.g. denial of the risk, no action, legal and economic controls, differential technological development) and AI design proposals (e.g. AI confinement, chaining, Oracle AI, FAI) that have been made.
One of the categories is "They Will Need Us" - claims that AI is no big risk, because AI will always need something that humans have, and will therefore preserve us. Currently this section is pretty empty:
But I'm certain that I've heard this claim made more often than in just those two sources. Does anyone remember having seen such arguments elsewhere? While "academically reputable" sources (papers, books) are preferred, blog posts and websites are fine as well.
Note that this claim is distinct from the claim that (due to general economic theory) it's more beneficial for the AIs to trade with us than to destroy us. We already have enough citations for that argument; what we're looking for are arguments saying that destroying humans would mean losing something essentially irreplaceable.