Good point - I'd missed that particular subtlety.
There's another flaw in the model I presented: I was only thinking about goals that conflict with other agents' goals. "Solve problem x for $5"-type tasks may not fall into that category, but may still require a lot of "intelligence" to solve (though narrow intelligence may be enough).
Anna Salamon and I have finished a draft of "Intelligence Explosion: Evidence and Import", under peer review for The Singularity Hypothesis: A Scientific and Philosophical Assessment (forthcoming from Springer).
Your comments are most welcome.
Edit: As of 3/31/2012, the link above now points to a preprint.