While the internal complexity of software has increased in pace with hardware, the productive complexity has increased only slightly; I am much more impressed by what was done in software twenty years ago than what is being done today, with a few exceptions. Too many programmers have adopted the attitude that the efficiency of their code doesn't matter because hardware will improve enough to offset the issue in the timeframe between coding and release.
While going through the list of arguments for why human-level AI should be expected to happen or be impossible, I was struck by the same tremendously weak arguments that kept coming up again and again. The weakest argument in favour of AI was the perennial:
Lest you think I'm exaggerating how weakly the argument was used, here are some random quotes:
At least Moravec gives a glance towards software, even though it is merely to say that software "keeps pace" with hardware. What is the common scale for hardware and software that he seems to be using? I'd like to put Starcraft II, Excel 2003 and Cygwin on a hardware scale - do these correspond to Pentiums, Ataris, and Colossus? I'm not particularly ripping into Moravec, but if you realise that software is important, then you should attempt to model software progress!
But very rarely do any of these predictors try to show why having computers with, say, the memory capacity or the FLOPS of a human brain, will suddenly cause an AI to emerge.
The weakest argument against AI was the standard:
Some of the more sophisticated go "Gödel, hence no AI!". If the crux of your whole argument is that only humans can do X, then you need to show that only humans can do X - not assert it and then spend the rest of your paper talking in great detail about other things.