I'm really not sure a human-level AI would have much of an advantage in developing technology at an accelerated rate, even at dramatically accelerated subjective time scales. Even in a relatively narrow field like nanotechnology, thousands of people invest a lot of time in the problem, not to mention all the people in disparate disciplines that feed intellectual capital into the field. That's likely tens or hundreds of thousands of man-hours a day, plus access to the materials needed to run experiments. Keep in mind that your AI is limited by the speed at which experiments can be run in the real world, and must devote a significant portion of its time to unrelated intellectual labor to fund both its own operation and its real-world experiments. To outpace human research under these constraints, the AI would need to operate on timescales so fast they may be physically unrealistic.
In short, I'd say your AI would likely perform extremely well in intelligence tests against any single human, provided it were willing to do the grunt work of really thinking through every decision. I just don't think it could outpace humanity.
I searched for articles on the topic and couldn't find any.
It seems to me that an intelligence explosion makes human annihilation much more likely, since superintelligences will certainly be able to outwit humans; but even a human-level intelligence that could process information much faster than humans would be a large threat in itself, without any upgrading. It could still discover programmable nanomachines long before humans do, gather enough information to predict how humans will act, and so on. We already know that a human-level intelligence can "escape from the box." Not 100% of the time, but a real AI would have the opportunity for many more trials, and its processing speed should make it far more quick-witted than we are.
I think a non-friendly AI would only need to be 20 years or so ahead of the rest of humanity to pose a major threat, especially if self-replicating nanomachines are possible. Skeptics of intelligence explosion should still be worried about the creation of computers with unfriendly goal systems. What am I missing?