Eliezer, it may seem obvious to you, but this is the key point on which we've been waiting for you to clearly argue. In a society like ours, but also with one or more AIs, and perhaps ems, why would innovations discovered by a single AI not spread soon to the others, and why would a non-friendly AI not use those innovations to trade, rather than to make war?
Anyway, if the first goal of an AI is to improve, why would it not happily give away its hardware to implement a new, better AI?
That's what self-improvement is, in a sense. See Tiling. (Also consider that improvement is an instrumental goal for a well-designed and friendly seed AI.)
Even if there are competing AIs, if they are good enough, they would probably agree on what is worth trying next, so there would be little or no conflict.
Except that whoever decides the next AI's goals Wins, and the others Lose - the winner has their goals inst...
Followup to: Life's Story Continues
Imagine two agents who've never seen an intelligence - including, somehow, themselves - but who've seen the rest of the universe up until now, arguing about what these newfangled "humans" with their "language" might be able to do...