A while ago I wrote briefly on why the Singularity might not be near and why my estimates might be badly off. I saw it linked the other day and realized that pessimism seemed to be trendy lately, which meant I ought to work on why one might be optimistic instead: http://www.gwern.net/Mistakes#counter-point
(Summary: long-sought AI goals have been recently achieved, global economic growth & political stability continues, and some resource crunches have turned into surpluses - all contrary to long-standing pessimistic forecasts.)
One should also at least ponder an unexpectedly quick route to the intelligence explosion.
It may not be a very probable outcome, but it is a possible one. It would be odd that it hasn't happened already, like a dropped bomb that has not detonated. Yet.
I know, it's only about a 1 percent possibility, but it should still be examined.
It's always possible that running what we think of as a human intelligence requires a lot less actual computation than we seem to generally assume. We could already have all the hardware we need and not realize it.
I remember reading somewhere that many computer applications are accelerating much faster than Moore's law because we're inventing better algorithms at the same time that we're inventing faster processors. The thing about algorithms is that you don't usually know that there's a better one until somebody discovers it.
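The point about algorithms can be made concrete with a toy example (chosen purely for illustration, not drawn from the discussion above): the same problem often admits algorithms whose costs differ by exponential factors, so discovering a better algorithm can dwarf any hardware speedup.

```python
# Toy illustration: two algorithms for the n-th Fibonacci number.
# Same answers, wildly different amounts of work -- an algorithmic
# discovery can outweigh many generations of faster processors.

def fib_naive(n):
    """Exponential-time recursion: the call count grows like 1.6^n."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_linear(n):
    """Linear-time iteration: just n additions."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Both compute the same value, but fib_naive(30) makes roughly
# 2.7 million recursive calls while fib_linear(30) does 30 additions.
assert fib_naive(25) == fib_linear(25) == 75025
```

Until someone points out the second version, nothing about the first one tells you it exists, which is the sense in which algorithmic headroom is invisible until it is discovered.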