[...] SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk.
[...] Although an intense interest in rationalism is one of the hallmarks of the SIAI community, still I have not yet seen a clear logical argument for the Scary Idea laid out anywhere. (If I'm wrong, please send me the link, and I'll revise this post accordingly. Be aware that I've already at least skimmed everything Eliezer Yudkowsky has written on related topics.)
So if one wants a clear argument for the Scary Idea, one basically has to construct it oneself.
[...] If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
The line of argument makes sense, if you accept the premises.
But, I don't.
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It), October 29, 2010. Thanks to XiXiDu for the pointer.
Good article. Thx for posting. I agree with much of it, but ...
Goertzel writes:
Is this really different from the Scary Idea?
I've always thought of this as part of the Scary Idea - in fact, the reason the Scary Idea is scary, scarier than nuclear weapons. Because when mankind reaches the abyss, and looks with dismay at the prospect that lies ahead, we all know that there will be at least one idiot among us who doesn't draw back from the abyss, but instead continues forward down the slippery slope.
At the nuclear abyss, that idiot will probably kill a few hundred million of us. No big deal. But at the uFAI abyss, we may have ourselves a serious problem.
It seems different to me.
If I believe "X is incredibly useful but someone might use it to destroy the world," I can conclude that I should build X and take care to police the sorts of people who get to use it. But if I believe "X is incredibly useful but its very existence might spontaneously destroy the world" then that strategy won't work... it doesn't matter who uses it. Maybe there's another way, or maybe I just shouldn't build X, but regardless of the solution it's a different problem.
It's like the difference between believing th...