Anna Salamon and I have finished a draft of "Intelligence Explosion: Evidence and Import", under peer review for The Singularity Hypothesis: A Scientific and Philosophical Assessment (forthcoming from Springer).
Your comments are most welcome.
Edit: As of 3/31/2012, the link above now points to a preprint.
I've read the paper, and while it mentions "intelligence explosion" a few times, the authors seem to keep that terminology taboo when it comes to the meat of the argument, which is what I think you were asking for.
Most of the material is phrased in terms of whether AIs will exhibit significantly more intelligence than human-based systems and whether human values will be preserved.
I think most people use "intelligence explosion" to mean something more specific than just exponential growth. But you're right that we should try to learn what we can about how systems evolve by looking at the past.
Yes, this is only a cosmetic issue with the paper, really.
Sure: explosions also have to proceed rapidly to qualify as such.