Is this so?
It seems to me there's a continuum between "humans carefully monitoring and controlling a weakish AI system" and "superintelligent AI-in-a-box cleverly manipulates humans in order to wreak havoc". As the world transitions from one to the other, at some point it will pass an "intelligence explosion" threshold. But I don't think it ever passes a "humans are no longer in the loop" threshold.
Anna Salamon and I have finished a draft of "Intelligence Explosion: Evidence and Import", now under peer review for The Singularity Hypothesis: A Scientific and Philosophical Assessment (forthcoming from Springer).
Your comments are most welcome.
Edit: As of 3/31/2012, the link above now points to a preprint.