A while ago I wrote briefly on why the Singularity might not be near and why my estimates might be badly off. I saw it linked the other day and realized that pessimism seemed to be trendy lately, which meant I ought to work on why one might be optimistic instead: http://www.gwern.net/Mistakes#counter-point
(Summary: long-sought AI goals have been recently achieved, global economic growth & political stability continues, and some resource crunches have turned into surpluses - all contrary to long-standing pessimistic forecasts.)
Can you expand on this? I suspect this is true for some classes of problems, but I'm sufficiently uncertain that I'm intrigued by your claim that this will "surely" happen.
A lot of existing improvement trends would have to suddenly stop, along with the general empirical trend of continued software progress. In many applications we are still well short of the performance of biological systems, and those biological systems show large internal variation (e.g. the human IQ distribution) with no abrupt "wall" visible, indicating that machines could go further (as they already have on many problems).