A while ago I wrote briefly on why the Singularity might not be near and why my estimates might be badly off. I saw it linked the other day and realized that pessimism seems to have become trendy lately, which meant I ought to work on why one might be optimistic instead: http://www.gwern.net/Mistakes#counter-point
(Summary: long-sought AI goals have been recently achieved, global economic growth & political stability continues, and some resource crunches have turned into surpluses - all contrary to long-standing pessimistic forecasts.)
It's always possible that running what we think of as a human intelligence requires a lot less actual computation than we seem to generally assume. We could already have all the hardware we need and not realize it.
I remember reading somewhere that many computer applications are accelerating much faster than Moore's law alone would predict, because we're inventing better algorithms at the same time that we're inventing faster processors. The thing about algorithms is that you usually don't know a better one exists until somebody discovers it.
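A toy illustration of the point: the same function computed by a naive algorithm versus an asymptotically better one, where the algorithmic change alone yields a speedup that dwarfs any single hardware generation. (This is just an illustrative sketch, not one of the benchmarks from the literature being discussed.)

```python
import timeit
from functools import lru_cache

def fib_naive(n):
    # Plain recursion: exponentially many redundant calls.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Same recurrence with memoization: each value computed once, O(n) calls.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

n = 28
t_naive = timeit.timeit(lambda: fib_naive(n), number=1)
fib_memo.cache_clear()
t_memo = timeit.timeit(lambda: fib_memo(n), number=1)
print(f"naive: {t_naive:.4f}s  memoized: {t_memo:.6f}s  "
      f"speedup ~{t_naive / t_memo:.0f}x")
```

The memoized version typically runs thousands of times faster on a single machine, no new processor required, which is the sense in which algorithmic progress can compound with (or outrun) Moore's law.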
Kurzweil cites an example of a task with a 43,000x speedup over some period, far more than Moore's Law alone, which is often mentioned in these discussions and might be what you're thinking of. But it was one very narrow task, cherrypicked from a paper as the one with by far the greatest improvement: an extremely unrepresentative sample selected for rhetorical effect. Just as Kurzweil resolves ambiguity overwhelmingly in his favor when evaluating his predictions, he selects the most extreme anecdotes he can find. On the other hand, in computer chess and go software p…