Besides Superintelligence, the latest "major" publication on the subject is Yudkowsky's Intelligence Explosion Microeconomics. There are also a few articles related to the topic at AI Impacts.
I found Intelligence Explosion Microeconomics less helpful for thinking about this than some of MIRI's older papers.
I'd like recommendations for articles dealing with slow and hard takeoff scenarios. I've already found Yudkowsky's post "Hard Takeoff", I know Superintelligence has a section on it, and I think the Yudkowsky/Hanson debate mostly dealt with it.
Is there anything else?