[...] SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk.
[...] Although an intense interest in rationalism is one of the hallmarks of the SIAI community, still I have not yet seen a clear logical argument for the Scary Idea laid out anywhere. (If I'm wrong, please send me the link, and I'll revise this post accordingly. Be aware that I've already at least skimmed everything Eliezer Yudkowsky has written on related topics.)
So if one wants a clear argument for the Scary Idea, one basically has to construct it oneself.
[...] If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
The line of argument makes sense, if you accept the premises.
But, I don't.
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It), October 29, 2010. Thanks to XiXiDu for the pointer.
If you have observations, that is, a source of randomness, you can generate output of arbitrary complexity.
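This point can be illustrated with a quick sketch. Kolmogorov complexity is uncomputable, so the example below uses zlib compression as a crude, computable upper bound on description length (a standard stand-in, not the real thing):

```python
import os
import zlib

# A trivially short "program" that just copies its random input to its
# output. The program itself carries almost no information; the
# complexity of its output is unbounded in the length of the input.
def copy_observations(observations: bytes) -> bytes:
    return observations

random_input = os.urandom(4096)   # observations, i.e. a source of randomness
output = copy_observations(random_input)

# Compressed length is a computable upper bound on description length.
# Random bytes are essentially incompressible: the "complexity" of the
# output tracks the raw randomness fed in, not the program's size.
print(len(zlib.compress(output)))           # close to 4096 (incompressible)
print(len(zlib.compress(b"\x00" * 4096)))   # tiny (highly compressible)
```

The tiny copying function plus a stream of observations yields output of whatever complexity the observations supply, which is exactly why modeling an observer as a fixed program without input misses the point.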
Now, let's step back and look at the whole picture. We were discussing a notion of 'complexity' such that evolved organisms gradually became more 'complex', and 'designers', which are themselves agents, possibly evolved organisms themselves, that can 'design' new things. We then apply that notion of 'complexity' to 'designers' and the 'designs' they can produce.
When informal notions are formalized, the formalizations should at least approximately track the original informal notions; otherwise we are changing the topic by bringing up these 'formalizations' rather than actually making progress on the original informal question.
K-complexity is something possessed by random noise. This notion does not reflect the measure by which evolution produced more 'complex' things than existed before (even if the things produced by evolution are more K-complex than their early predecessors). And designers typically have access to randomness, which makes your model of 'designers' as programs without input wrong as well; hence the conclusion about the K-complexity of their output is incorrect, on top of K-complexity not adequately modeling the informal 'complexity'.
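The mismatch is easy to see numerically. Again using compressed length as a crude, computable proxy for K-complexity (the strings below are illustrative choices, not anything canonical), pure noise scores highest, even though nobody's intuitive notion of the complexity evolution builds would rank noise above structure:

```python
import os
import zlib

def approx_complexity(data: bytes) -> int:
    """Compressed length: a crude, computable proxy for K-complexity."""
    return len(zlib.compress(data, 9))

noise = os.urandom(1000)                        # pure random noise
structured = b"ATCGGCTA" * 125                  # highly ordered, 1000 bytes
english = (b"the quick brown fox jumps over the lazy dog " * 23)[:1000]

# By this measure, noise dominates: it is the *most* "complex" string.
# That is precisely why K-complexity fails to capture the intuitive sense
# in which evolved organisms are more complex than random molecular soup.
print(approx_complexity(noise))       # near 1000
print(approx_complexity(structured))  # very small
print(approx_complexity(english))     # also small
```

Whatever the informal notion of complexity creationists and their critics are arguing over, it is not the quantity this proxy (or K-complexity itself) maximizes.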
All very true. Which is one reason I dislike all talk of "complexity" - particularly in such a fuzzy context as debates with creationists.
But we do all have some intuitions as to what we mean by complexity in this context. Someone, I believe it was you, has claimed in this thread that evolution can generate complexity. I assume you meant something other than "Evolution harnesses mutation as a random input and hence as a source of complexity".
William Dembski is an "intelligent design theorist" (if that is not too much of an ...