N designers, each of complexity K, can collectively design something of maximum complexity NK, simply by dividing up the work.
Co-evolution, which may be thought of as a pair of designers interacting through their joint design product, with an unlimited random stream as supplementary input, can result in very complex designs; it can also make the designers themselves more complex, through information acquired in the course of the interaction.
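(A gloss of my own, not part of the quote: if "complexity" is read as description length in bits, the NK figure is just additivity, up to small overhead for combining the parts,

\[
C(\text{design}) \;\le\; \sum_{i=1}^{N} C(\text{designer}_i) \;=\; NK,
\]

while a typical random string of length L has description length close to L, which is why an unlimited random input stream lets the co-evolving pair escape the bound.)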
It is amusing to look at the Roman Catholic theology of the Trinity with this kind of consideration in mind. As I remember it, the Deity was "originally" a unipartite, simple God, who then became more complex by contemplating Himself, and then further contemplating that Contemplation.
For this reason, I have never been all that impressed by the "refutation" of the first cause argument, namely that it supposedly requires a complex "first cause" God, Who is Himself in need of explanation. God could conceivably have been simple (as simple as a Big Bang, anyway) and then developed (some people would prefer to say "evolved") under His own internal dynamics into something much more complex, just as we atheists claim happened to the physical universe.
That simple "God" is the "God&q...
[...] SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk.
[...] Although an intense interest in rationalism is one of the hallmarks of the SIAI community, still I have not yet seen a clear logical argument for the Scary Idea laid out anywhere. (If I'm wrong, please send me the link, and I'll revise this post accordingly. Be aware that I've already at least skimmed everything Eliezer Yudkowsky has written on related topics.)
So if one wants a clear argument for the Scary Idea, one basically has to construct it oneself.
[...] If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
The line of argument makes sense, if you accept the premises.
But, I don't.
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It), October 29, 2010. Thanks to XiXiDu for the pointer.