[...] SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk.
[...] Although an intense interest in rationalism is one of the hallmarks of the SIAI community, still I have not yet seen a clear logical argument for the Scary Idea laid out anywhere. (If I'm wrong, please send me the link, and I'll revise this post accordingly. Be aware that I've already at least skimmed everything Eliezer Yudkowsky has written on related topics.)
So if one wants a clear argument for the Scary Idea, one basically has to construct it oneself.
[...] If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
The line of argument makes sense, if you accept the premises.
But, I don't.
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It), October 29, 2010. Thanks to XiXiDu for the pointer.
Absolutely. Because dogs cooperate with us and we with them, while the other species don't.
And immediately the human prejudice comes out. We behave terribly when we're at the top of the pile and expect others to do the same. It's almost exactly like people who complain bitterly when they're oppressed and then, once they're on top, oppress others even worse.
What is wrong with the human-canine analogy (which I thought I did more than imply) is the baggage that you are bringing to that relationship. Both parties benefit from the relationship. The dog benefits less from that relationship than you would benefit from an AGI relationship because the dog is less competent and intelligent than you are AND because the dog generally likes the treatment that it receives (whereas you would be unhappy with similar treatment).
Dogs are THE BEST analogy because they are the closest existing example to what most people are willing to concede is likely to be our relationship with a super-advanced AGI.
Oh, and dogs don't really have a clue as to what they do for us, so why do you expect me to be able to come up with what we will do for an advanced AGI? If we're willing to cooperate, there will be plenty for us to do of value that will fulfill our goals as well. We just have to avoid being too paranoid and short-sighted to see it.
The scale is all off.
earthworm --three orders of magnitude--> small lizard --three orders of magnitude--> dog --three orders of magnitude--> human --thirty orders of magnitude--> weakly superhuman AGI --several thousand orders of magnitude--> strong AI
If a recursively self-improving process stopped just far enough above us to consider us pets, and did so, I would seriously question whether it was genuinely recursive, or whether the gains came merely from debugging and streamlining human thought processes. I.e., I could see a self-modifying transhuman acting in the manner you describe, but not an artificial intelligence, not unless it was very carefully designed.