"why would innovations discovered by a single AI not spread soon to the others, and why would a non-friendly AI not use those innovations to trade, instead of war?"
One of Eliezer's central (and, I think, indisputable) claims is that a hand-made AI, after undergoing recursive self-improvement, could be powerful in the real world while being WILDLY unpredictable in its actions. It doesn't have to be economically rational.
Given a paperclip-manufacturing AI, busily converting the Earth into grey goo and the goo into paperclips, there's no reason to believe we could communicate with it well enough to offer to help.
Followup to: Life's Story Continues
Imagine two agents who've never seen an intelligence - including, somehow, themselves - but who've seen the rest of the universe up until now, arguing about what these newfangled "humans" with their "language" might be able to do...