Eliezer Yudkowsky wrote a post on Facebook on Oct 17, to which I replied at the time. Yesterday he reposted that here (link), minus my responses. So I’ve composed the following response to post here:
I have agreed that an AI-based economy could grow faster than does our economy today. The issue is how fast the abilities of one AI system might plausibly grow, relative to the abilities of the entire rest of the world at that time, across a range of tasks roughly as broad as the world economy. Could one small system really “foom” to beat the whole rest of the world?
As many have noted, while AI has often made impressive and rapid progress in specific narrow domains, it is much less clear how fast we are progressing toward human level AGI systems with scopes of expertise as broad as those of the world economy. Averaged over all domains, progress has been slow. And at past rates of progress, I have estimated that it might take centuries.
Over the history of computer science, we have developed many general tools with simple architectures, themselves built from other general tools, that allow superhuman performance on specific tasks scattered across a wide range of problem domains. For example, we have superhuman ways to sort lists, and linear regression allows superhuman prediction using simple general tools like matrix inversion.
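To illustrate the kind of simple general tool I have in mind, here is a minimal sketch (assuming NumPy; my illustration, not something from either post) of linear regression built directly from a single matrix solve:

```python
# Linear regression as a "simple general tool" assembled from an even
# simpler one: a single matrix solve (the normal equations).
import numpy as np

def fit_linear_regression(X, y):
    """Ordinary least squares: w = (X^T X)^{-1} X^T y."""
    Xb = np.column_stack([np.ones(len(X)), X])   # add an intercept column
    return np.linalg.solve(Xb.T @ Xb, Xb.T @ y)  # one matrix solve does the work

# Example: recover a noisy linear relationship y ≈ 2x + 1.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 2 * X[:, 0] + 1 + 0.1 * rng.normal(size=200)
print(fit_linear_regression(X, y))   # roughly [1.0, 2.0]
```

Simple and powerful on its narrow task, yet nothing remotely like a broad AGI.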
Yet the existence of a limited number of such tools has so far been far from sufficient to enable anything remotely close to human level AGI. AlphaGo Zero is (or is built from) a new tool in this family, and its developers deserve our praise and gratitude. And we can expect more such tools to be found in the future. But I am skeptical that it is the last such tool we will need, or even remotely close to the last such tool.
For specific simple tools with simple architectures, architecture can matter a lot. But our robust experience with software has been that even when we have access to many simple and powerful tools, we solve most problems via complex combinations of simple tools. Combinations so complex, in fact, that our main issue is usually managing the complexity, rather than including the right few tools. In those complex systems, architecture matters a lot less than does lots of complex detail. That is what I meant by suggesting that architecture isn’t the key to AGI.
You might claim that once we have enough good simple tools, complexity will no longer be required. With enough simple tools (and some data to crunch), a few simple and relatively obvious combinations of those tools will be sufficient to perform most all tasks in the world economy at a human level. And thus the first team to find the last simple general tool needed might “foom” via having an enormous advantage over the entire rest of the world put together. At least if that one last tool were powerful enough. I disagree with this claim, but I agree that neither view can be easily and clearly proven wrong.
Even so, I don’t see how finding one more simple general tool can be much evidence one way or another. I never meant to imply that we had found all the simple general tools we would ever find. I instead suggest that simple general tools just won’t be enough, and thus finding the “last” tool required also won’t let its team foom.
The best evidence regarding the need for complexity in strong broad systems is the actual complexity observed in such systems. The human brain is arguably such a system, and when we have artificial systems of this sort they will also offer more evidence. Until then one might try to collect evidence about the distribution of complexity across our strongest broadest systems, even when such systems are far below the AGI level. But pointing out that one particular capable system happens to use mainly one simple tool, well, that by itself can’t offer much evidence one way or another.
I appreciate your posting this here, and I do agree that any information we get from AlphaGo Zero is of limited use in forecasting things like AGI.
That said, this whole article is very defensive: it comes up with ways in which the evidence might not apply, not with reasons why it isn't evidence at all.
I don't think Eliezer's article was a knock-down argument, and I don't think anyone, including him, believes that. But I do think the situation is some weak evidence in favor of his position over yours.
I also think it's stronger evidence than you seem to think, even according to the framework you lay down here!
For example, previous AIs for playing games like Chess or Go captured information about the structure of the game via some complex combination of tools. In AlphaGo Zero, however, very little Go-specific information is required; the change in architecture actually subsumes some of the combination of tools that was previously needed.
Again, I don't think this is a knockdown argument or very strong or compelling evidence, but it looks as though you are treating it as essentially zero evidence, which seems unjustified to me.
I disagree with the claim that "this single simple tool gives a bigger advantage on a wider range of tasks than we have seen with previous tools."