Eliezer Yudkowsky wrote a post on Facebook on Oct 17, to which I replied at the time. Yesterday he reposted it here (link), minus my responses. So I’ve composed the following response to put here:
I have agreed that an AI-based economy could grow faster than does our economy today. The issue is how fast the abilities of one AI system might plausibly grow, relative to the abilities of the entire rest of the world at that time, across a range of tasks roughly as broad as the world economy. Could one small system really “foom” to beat the whole rest of the world?
As many have noted, while AI has often made impressive and rapid progress in specific narrow domains, it is much less clear how fast we are progressing toward human level AGI systems with scopes of expertise as broad as those of the world economy. Averaged over all domains, progress has been slow. And at past rates of progress, I have estimated that reaching that level might take centuries.
Over the history of computer science, we have developed many general tools with simple architectures, built from other general tools, that allow superhuman performance on many specific tasks scattered across a wide range of problem domains. For example, we have superhuman ways to sort lists, and linear regression allows superhuman prediction by building on simple general tools like matrix inversion.
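To make the "tools built from other tools" point concrete, here is a minimal sketch (my illustration, not from the original post) of ordinary least-squares regression assembled from generic matrix operations via the normal equations; the numpy calls and function names are just illustrative choices.

```python
# Sketch: linear regression as a composition of simpler general tools
# (matrix multiply, transpose, inversion). Illustrative only.
import numpy as np

def fit_linear_regression(X, y):
    """Return coefficients w minimizing ||Xw - y||^2 via the normal equations."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # add an intercept column
    return np.linalg.inv(Xb.T @ Xb) @ Xb.T @ y     # (X'X)^-1 X'y

def predict(X, w):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return Xb @ w

# Tiny synthetic usage example.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 3.0 + rng.normal(scale=0.1, size=100)
w = fit_linear_regression(X, y)
print(w)  # roughly [3.0, 2.0, -1.0, 0.5]
```

In practice one would call np.linalg.lstsq or use a pseudo-inverse for numerical stability, but the explicit inversion makes the point: a powerful, broadly useful predictive tool falls out of a few simpler general-purpose ones.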
Yet the existence of a limited number of such tools has so far been far from sufficient to enable anything remotely close to human level AGI. AlphaGo Zero is (or is built from) a new tool in this family, and its developers deserve our praise and gratitude. And we can expect more such tools to be found in the future. But I am skeptical that it is the last such tool we will need, or even remotely close to the last such tool.
For specific simple tools with simple architectures, architecture can matter a lot. But our robust experience with software has been that even when we have access to many simple and powerful tools, we solve most problems via complex combinations of simple tools. Combinations so complex, in fact, that our main issue is usually managing the complexity, rather than including the right few tools. In those complex systems, architecture matters a lot less than does lots of complex detail. That is what I meant by suggesting that architecture isn’t the key to AGI.
You might claim that once we have enough good simple tools, complexity will no longer be required. With enough simple tools (and some data to crunch), a few simple and relatively obvious combinations of those tools will be sufficient to perform almost all tasks in the world economy at a human level. And thus the first team to find the last simple general tool needed might “foom” via having an enormous advantage over the entire rest of the world put together. At least if that one last tool were powerful enough. I disagree with this claim, but I agree that neither view can be easily and clearly proven wrong.
Even so, I don’t see how finding one more simple general tool can be much evidence one way or another. I never meant to imply that we had found all the simple general tools we would ever find. I instead suggest that simple general tools just won’t be enough, and thus finding the “last” tool required also won’t let its team foom.
The best evidence regarding the need for complexity in strong broad systems is the actual complexity observed in such systems. The human brain is arguably such a system, and when we have artificial systems of this sort they will also offer more evidence. Until then one might try to collect evidence about the distribution of complexity across our strongest broadest systems, even when such systems are far below the AGI level. But pointing out that one particular capable system happens to use mainly one simple tool can’t, by itself, offer much evidence one way or the other.
I feel like this and many other arguments for AI skepticism implicitly assume an AGI that is amazingly dumb, and then prove that there is no need to worry about this dumb superintelligence.
Remember the old "AI will never beat humans at every task because there isn't one architecture that is optimal at every task. An AI optimised to play chess won't be great at trading stocks (or whatever) and vice versa"? Well, I'm capable of running a different program on my computer depending on the task at hand. If your AGI can't do the same as a random idiot with a PC, it's not really AGI.
I am emphatically not saying that Robin Hanson has ever made this particular blunder but I think he's making a more subtle one in the same vein.
Sure, if you think of AGI as a collection of image recognisers and Go engines etc. then there is no ironclad argument for FOOM. But the moment (and probably sooner) that it becomes capable of actual general problem solving on par with its creators (i.e. actual AGI) and turns its powers to recursive self-improvement, how can that result in anything but FOOM? It doesn't matter if further improvements require more complexity or less complexity or a different kind of complexity or whatever. If human researchers can do it, then AGI can do it faster and better, because it scales better, doesn't sleep, doesn't eat and doesn't waste time arguing with people on Facebook.
This must have been said a million times already. Is this not obvious? What am I missing?
Yes, a model of brain modularity in which the modules are fully independent end-to-end mechanisms for doing tasks we never faced in the evolutionary environment is pretty clearly wrong. I don't think anyone would argue otherwise. The plausible version of the modularity