Moore's Law is not enough to make AIXI-style (e.g. AIXItl) brute force work. A few more orders of magnitude won't beat a combinatorial explosion.
Assuming the worst case on the algorithmic side, a complete standstill, the computational cost of a fixed problem, even one stemming from a combinatorial explosion, remains constant while hardware capacity keeps doubling. The gap can only narrow. That makes it a question of how many doubling cycles it would take to close it. And we're not necessarily talking about desktop computers here.
Exponential growth with such a short doubling time, aimed at some unknown but fixed threshold, is enough to make any provably optimal approach work eventually. If the growth continues.
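The arithmetic behind "how many doubling cycles" can be sketched as follows. The gap size and the 18-month doubling time are illustrative assumptions, not figures from the text:

```python
import math

def doublings_to_close(gap_orders_of_magnitude):
    """Number of hardware doubling cycles needed to close a fixed
    performance gap of the given size (in orders of magnitude)."""
    return math.log2(10 ** gap_orders_of_magnitude)

# Hypothetical gap: the brute-force search costs 10^30 times more
# compute than is currently available.
cycles = doublings_to_close(30)
years = cycles * 1.5  # assuming one doubling every 18 months
print(f"{cycles:.0f} doublings, roughly {years:.0f} years")
```

Even a thirty-orders-of-magnitude gap closes in about a hundred doublings; the argument's real weak point is whether the doubling continues that long, not the size of the gap.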
While going through the list of arguments for why human-level AI should be expected, or why it is impossible, I was struck by the same tremendously weak arguments that kept coming up again and again. The weakest argument in favour of AI was the perennial:
Lest you think I'm exaggerating how weakly the argument was used, here are some random quotes:
At least Moravec gives a glance towards software, even though it is merely to say that software "keeps pace" with hardware. What is the common scale for hardware and software that he seems to be using? I'd like to put Starcraft II, Excel 2003 and Cygwin on a hardware scale - do these correspond to Pentiums, Ataris, and Colossus? I'm not particularly ripping into Moravec, but if you realise that software is important, then you should attempt to model software progress!
But very rarely do any of these predictors try to show why having computers with, say, the memory capacity or the FLOPS of a human brain, will suddenly cause an AI to emerge.
The weakest argument against AI was the standard:
Some of the more sophisticated arguments go "Gödel, hence no AI!". If the crux of your whole argument is that only humans can do X, then you need to show that only humans can do X - not assert it and spend the rest of your paper talking in great detail about other things.