[...] SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk.
[...] Although an intense interest in rationalism is one of the hallmarks of the SIAI community, still I have not yet seen a clear logical argument for the Scary Idea laid out anywhere. (If I'm wrong, please send me the link, and I'll revise this post accordingly. Be aware that I've already at least skimmed everything Eliezer Yudkowsky has written on related topics.)
So if one wants a clear argument for the Scary Idea, one basically has to construct it oneself.
[...] If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
The line of argument makes sense, if you accept the premises.
But, I don't.
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It), October 29, 2010. Thanks to XiXiDu for the pointer.
I would like to explore Ben's reasons for rejecting the premises of the argument.
He offers the possibility that intelligence might cause or imply empathy. Although we do see that connection when we look across Earth's creatures, correlation doesn't imply causation: observing (intelligence AND empathy) together doesn't establish (intelligence IMPLIES empathy). It more likely reflects (evolution IMPLIES intelligence AND empathy), and we aren't using natural selection to build an AI.
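As a toy illustration of that point (my own construction, with made-up numbers, not anything from Ben's post): a shared cause like evolution can make two traits co-occur in everything we observe, even though neither trait implies the other once the shared cause is removed.

```python
import random

def sample_mind(via_evolution: bool) -> dict:
    """Toy model: evolution tends to co-select intelligence and empathy;
    a designed mind gets each trait independently. All rates are arbitrary."""
    if via_evolution:
        intelligent = random.random() < 0.9
        empathic = intelligent and random.random() < 0.9  # shared selective cause
    else:
        intelligent = random.random() < 0.9
        empathic = random.random() < 0.1                   # no shared cause
    return {"intelligent": intelligent, "empathic": empathic}

def empathy_rate_among_intelligent(via_evolution: bool, n: int = 100_000) -> float:
    minds = [sample_mind(via_evolution) for _ in range(n)]
    smart = [m for m in minds if m["intelligent"]]
    return sum(m["empathic"] for m in smart) / len(smart)

print(empathy_rate_among_intelligent(True))   # ~0.9: looks like intelligence -> empathy
print(empathy_rate_among_intelligent(False))  # ~0.1: the apparent implication vanishes
```

Sample minds directly from design-space, without the evolutionary filter, and the apparent implication disappears - which is exactly the worry about engineered AGI.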
He makes the point that human values have changed many times and endured, and will probably continue to change alongside AGI. Human value is not fragile on the timescales we deal with as humans; our values have indeed changed since, say, Victorian times. But that change took generations - most value change will take generations, because humans are (understandably) conservative about modifying their values. The timescales an AGI will be dealing with are, on the low end, weeks (an AGI with access to a microchip manufacturing plant, say). I can't see a plausible AGI that enacts changes at generational speed. So, yes, our values are robust - in the sense that a mountain is robust to weather patterns, but not robust to falling into the sun.
I think he is accurate in this assessment.
Again, accurate. The path to nuclear fission was gradual over many years, but the reaction itself (the takeoff) could have irradiated a university in hours. His position appears to be that a hard takeoff is possible, but that we'll have warning signs and a deeper understanding of the AGI before it happens ... well, a Manhattan Project scientist standing in Japan during WWII would have had a deeper understanding of the features of a nuclear explosion than anyone around him, but the defense against it was STILL not being in Hiroshima. I don't think more knowledge about the issue is going to significantly change the solution. We have reached the point of diminishing returns: additional knowledge about AI does little more to decrease the existential risk of that AI.
This is just wrong. The only difference between 'possible' and 'likely' is the probability, and we know how to reason with probabilities. If Ben has an argument for why the probability is SO small that, even multiplied by 'hard takeoff, universe is paperclips, end of existence', the expected loss still favors the "AGI without Friendliness" route, he should articulate it and provide evidence. Without certainty that the chances are very low, he should accept the Scary Idea.
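To make the arithmetic concrete (every number below is an illustrative assumption of mine, not an estimate from Ben or SIAI), here is the shape of the expected-loss comparison:

```python
# Illustrative expected-loss sketch; all values are assumptions, in arbitrary units.
p_catastrophe_given_unfriendly = 0.01   # "unlikely", per the optimistic view
loss_if_catastrophe = 10**10            # stand-in for "end of existence"
benefit_of_building_sooner = 10**6      # stand-in for the upside of not waiting

expected_loss = p_catastrophe_given_unfriendly * loss_if_catastrophe
print(expected_loss)                    # 1e8, dwarfing the 1e6 upside

# For "it's unlikely, so go ahead" to work, the probability must fall below
# benefit / loss -- here 1e-4 -- and that bound is what needs evidence.
print(benefit_of_building_sooner / loss_if_catastrophe)
```

The point is not these particular numbers, but that the "it's unlikely" defense only works if you can argue the probability down below a very small bound, and arguing that requires evidence, not intuition.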
Preventing abuse of AGI and preventing uFAI takeoff scenarios are not mutually exclusive; you can and should attempt to prevent both.
Mostly claims and arguments that a proof of friendliness is impossible. In order to argue "provably safe AGI isn't feasible, so we should instead develop unpredictable but I-don't-see-the-danger AGI", you pretty much need an impossibility proof: that there IS NO proof of friendliness, not merely that there isn't one yet. If you believe that a friendliness proof is probably impossible, you shouldn't work on AGI at all, rather than working on possibly-unfriendly AGI. That Ben concluded work should continue, rather than halt entirely, suggests he is motivated to justify his own work rather than to engage with his beliefs about the Scary Idea.
The Scary Idea that Ben outlined is "The stakes are so high that 'unlikely' is not good enough; we need 'surer than we've ever been'. Anything less is too dangerous," and his refutations have amounted to "I don't know for sure, but I don't think it's likely."
In essence, he hasn't refuted the argument, but instead made it scarier. If AI developers can see "the stakes are very high" and "there is a small chance", and still argue against "the stakes are high enough that a small chance is too much chance", then uFAI is that much more likely.
It does seem like a pretty different thing to me. A lot of things are possible, but only a few are likely.