A while ago I wrote briefly on why the Singularity might not be near and why my estimates might be badly off. I saw it linked the other day and realized that pessimism seems to have become trendy lately, which meant I ought to work out why one might be optimistic instead: http://www.gwern.net/Mistakes#counter-point
(Summary: long-sought AI goals have been recently achieved, global economic growth & political stability continues, and some resource crunches have turned into surpluses - all contrary to long-standing pessimistic forecasts.)
Why I think we might have some real general AI surprisingly soon (before 2030), in spite of disillusionment with past AI projections: more smart people than ever are working, with access to more resources, on creating AI prerequisites (which have economic value in their own right). The number of smart people earnestly working on general AI directly hasn't increased as much, since it has become apparent that GAI is not low-hanging fruit.
The resources I'm thinking of:
faster/cheaper hardware
research is widely disseminated and cheaply available, in some cases including source code. If the research is good, it should compound.
slightly better programming software in general
collaboration software (Skype/email/web, distributed version control, remote shells) vs. slow-paced journals/conferences.
However, all of these were probably anticipated 60 years ago (and are the basis for what now seem like overly optimistic year-2000 projections).