1) the AGI
Please give an example of why the AGI should co-operate with something that cannot do anything the AGI itself cannot.
2) zero
Right. E. coli don't offer us anything we can't do for ourselves; anything we might want from them, we can just whip up a batch of E. coli for on demand.
The AGI is missing out on tremendous opportunities if it bypasses positive-sum games of potentially infinite length and utility for a short-term finite gain.
If I'm a god, what would I need a human for? If I need humans, I can just make some. Better still, I could replace them with something more efficient that doesn't complain or rebel.
The fundamental flaw in your reasoning here is that you keep trying to construct paths through probability space that could support your hypothesis, when those paths would only count as support if you had presented some evidence for singling out that hypothesis in the first place!
It's like you're a murder investigator opening up the phonebook to a random place and saying, "well, we haven't ruled out the possibility that this guy did it", and when people quite reasonably point out that there is no connection between that random guy and the murder, you reply, "yeah, but I just called this guy, and he has no alibi." (That is, you're ignoring the fact that a huge number of people in that phonebook will also have no alibi, so your "evidence" isn't actually increasing the expected probability that that guy did it.)
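A minimal numeric sketch of that point, with purely hypothetical figures (the phonebook size and the alibi probabilities are assumptions chosen for illustration, not from the original comment): evidence that is about as common among the innocent as among the guilty barely moves a tiny prior, no matter how confidently it is cited.

```python
# Toy Bayes update for the phonebook analogy (all numbers hypothetical).

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem."""
    return (p_e_given_h * prior) / (
        p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    )

prior = 1 / 10_000  # suspect picked at random from a 10,000-entry phonebook

# "No alibi" is exactly as likely whether or not he did it: no update at all.
print(posterior(prior, 0.9, 0.9))   # ~0.0001, identical to the prior

# Even if the guilty were three times as likely to lack an alibi, the posterior
# is still only ~0.0003: nowhere near singling this particular guy out.
print(posterior(prior, 0.9, 0.3))   # ~0.0003
```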
And that's why you're getting so many downvotes: in LW terms, you are failing basic reasoning.
But that is not a shameful thing: any normal human being fails basic reasoning, by default, in exactly the same way. Our brains simply aren't built to do reasoning: they're built to argue, by finding the most persuasive evidence that supports our pre-existing beliefs and hypotheses, rather than trying to find out what is true.
When I first got here, I argued for some of my pet hypotheses in the exact same way, although I was righteously certain that I was not doing such a thing. It took a long time before I really "got" Bayesian reasoning sufficiently to understand what I was doing wrong, and before that, I couldn't have said here what you were doing wrong either.
Please give an example of why the AGI should co-operate with something that cannot do anything the AGI itself cannot.
A sufficiently clever AI should understand Comparative Advantage.
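For readers unfamiliar with the term, here is a minimal worked sketch of what comparative advantage means in this context, using made-up numbers (nothing below is from the original exchange): even an agent that is strictly better at every task can come out ahead by trading, because every hour it spends on one task is an hour not spent on another.

```python
# Comparative advantage with two producers and two goods (all numbers made up).
# The "AGI" is absolutely better at both tasks, yet total output rises when each
# party specializes in the task where its relative (not absolute) edge is largest.

HOURS = 10  # hours available to each party

# Output per hour of work (illustrative figures):
#            widgets  reports
# AGI           10       10
# human          1        4

def totals(agi_hours_on_widgets, human_hours_on_widgets):
    """Total output when each party splits HOURS between widgets and reports."""
    widgets = 10 * agi_hours_on_widgets + 1 * human_hours_on_widgets
    reports = 10 * (HOURS - agi_hours_on_widgets) + 4 * (HOURS - human_hours_on_widgets)
    return {"widgets": widgets, "reports": reports}

# No specialization: each party spends half its time on each good.
print("no trade:   ", totals(5, 5))   # {'widgets': 55, 'reports': 70}

# Specialization: the human's comparative advantage is reports (it forgoes only
# 1/4 widget per report, versus the AGI's 1 widget per report), so the human does
# only reports while the AGI shifts toward widgets.
print("specialized:", totals(7, 0))   # {'widgets': 70, 'reports': 70}

# Same number of reports, 15 extra widgets: a surplus both sides could share,
# even though the AGI out-produces the human at everything.
```

Whether that economic logic actually binds on a superintelligence is, of course, exactly what the surrounding exchange disputes.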
[...] SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk.
[...] Although an intense interest in rationalism is one of the hallmarks of the SIAI community, still I have not yet seen a clear logical argument for the Scary Idea laid out anywhere. (If I'm wrong, please send me the link, and I'll revise this post accordingly. Be aware that I've already at least skimmed everything Eliezer Yudkowsky has written on related topics.)
So if one wants a clear argument for the Scary Idea, one basically has to construct it oneself.
[...] If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
The line of argument makes sense, if you accept the premises.
But, I don't.
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It), October 29, 2010. Thanks to XiXiDu for the pointer.