While going through the list of arguments for why human-level AI should be expected or is impossible, I was struck by the same tremendously weak arguments coming up again and again. The weakest argument in favour of AI was the perennial:
- Moore's Law hence AI!
Lest you think I'm exaggerating how weakly the argument was used, here are some random quotes:
- Progress in computer hardware has followed an amazingly steady curve in the last few decades [16]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Vinge, 1993)
- Computers aren't terribly smart right now, but that's because the human brain has about a million times the raw power of today's computers. [...] Since computer capacity doubles every two years or so, we expect that in about 40 years, the computers will be as powerful as human brains. (Eder, 1994)
- Suppose my projections are correct, and the hardware requirements for human equivalence are available in 10 years for about the current price of a medium large computer. Suppose further that software development keeps pace (and it should be increasingly easy, because big computers are great programming aids), and machines able to think as well as humans begin to appear in 10 years. (Moravec, 1977)
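For what it's worth, Eder's arithmetic is at least internally consistent: closing a millionfold gap at one doubling every two years takes log2(10^6) ≈ 20 doublings, i.e. roughly 40 years. Here's a minimal sketch of that calculation (the millionfold gap and the two-year doubling time are his assumptions, not established facts):

```python
import math

gap = 1e6            # Eder's claimed brain/computer "raw power" ratio
doubling_years = 2   # his assumed Moore's-law-style doubling time

doublings = math.log2(gap)           # ~19.9 doublings to close the gap
years = doublings * doubling_years   # ~40 years

print(f"{doublings:.1f} doublings -> ~{years:.0f} years")
```

Of course, all this checks is that the extrapolation is self-consistent - it does nothing to show that hardware parity produces intelligence, which is the real gap in the argument.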
At least Moravec glances towards software, even if only to say that it "keeps pace" with hardware. What is the common scale for hardware and software that he seems to be using? I'd like to put StarCraft II, Excel 2003 and Cygwin on a hardware scale - do these correspond to Pentiums, Ataris, and Colossus? I'm not particularly ripping into Moravec here; the point is that if you realise software is important, then you should attempt to model software progress!
But very rarely do any of these predictors try to show why having computers with, say, the memory capacity or the FLOPS of a human brain would suddenly cause an AI to emerge.
The weakest argument against AI was the standard:
- Free will (or creativity) hence no AI!
Some of the more sophisticated versions go "Gödel, hence no AI!". If the crux of your whole argument is that only humans can do X, then you need to show that only humans can do X - not assert it and then spend the rest of your paper talking in great detail about other things.
You know what I'd like to see? A strong argument for or against human level AI.
I read the SI's brief on the issue, and all I discovered was that they did a spectacularly good job of listing the arguments for and against, and of demonstrating why they're all manifestly rubbish. The overall case seemed to boil down to "there's high uncertainty on the issue, so we should assume there's some reasonable chance". I'm not saying they're wrong, but it's a depressing state of affairs.