[...] SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk.
[...] Although an intense interest in rationalism is one of the hallmarks of the SIAI community, still I have not yet seen a clear logical argument for the Scary Idea laid out anywhere. (If I'm wrong, please send me the link, and I'll revise this post accordingly. Be aware that I've already at least skimmed everything Eliezer Yudkowsky has written on related topics.)
So if one wants a clear argument for the Scary Idea, one basically has to construct it oneself.
[...] If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
The line of argument makes sense, if you accept the premises.
But, I don't.
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It), October 29, 2010. Thanks to XiXiDu for the pointer.
You mean "somehow guaranteed "? No, I don't believe that a self-improving AI will ever fall into this category. It might decide to believe it -- which would be very dangerous for us -- but, no, I don't believe that it is likely to truly find such a guarantee. Further, given the VERY minimal cost (if any) of cooperating with a cooperating entity, an AI would be human-foolish to take the stupid short-sighted shortcut of trashing us for no reason -- since it certainly is an existential risk for IT that something bigger and smarter would take exception to such a diversity-decreasing act.
MORE IMPORTANTLY - you dropped the fact that the AI already has to have one flaw (terminal goals) before this second aspect could possibly become a problem.
Fire is not an "edge" case. The probability of a building catching fire in a city on any given day is VERY high. But that is irrelevant because...
you ALWAYS worry about edge cases. In this case, though, if you are aware of them and plan/prepare against them -- they are AVOIDABLE edge cases (more so than the city burning down even if you have fire prevention; cf. Chicago & Mrs. O'Leary's cow).
You don't seem to understand how basic reasoning works (by LW standards). AFAICT, you are both privileging your hypothesis and not weighing any evidence.
(Heck, you're not even stating any evidence, only relying on repeated assertion of your framing of the situation.)
You still haven't responded, for example, to my previous point about human-bacterium empathy. We don't have empathy for bacteria, in part because we see them as interchangeable and easily r...