I've held off on posting the next rerun for a few days in case anybody else suggested something new in the discussion of how to run the AI FOOM Debate. After looking at the comments (with associated votes), and after looking through the sequence myself, I've decided to rerun one post a day. Most of these posts will be from Robin Hanson and Eliezer Yudkowsky, but there are a few posts from Carl Shulman and James Miller that will be included as well. This process will start tomorrow with "Abstraction, Not Analogy" by Robin Hanson.
Meanwhile, there are several posts written by Robin Hanson in the week or so leading up to the debate that provide some background on his perspective, which I have linked to below. They're all fairly short and relatively straightforward, so I don't think each one merits a full-blown individual discussion.
Relatedly, there have been numerous instances of individual humans taking over entire national governments by exploiting high-leverage, usually unethical, opportunities. A typical case involves a military general staging a coup or an elected leader legislating unlimited power for himself. None of these instances look anything like firms competing for resources. Instead, we have single actors who are intelligent and opportunistic, unethical or morally atypical, and risk-tolerant enough to accept the consequences of a failed coup.
A UFAI looks much more like a dictator than a firm.
All those require the at least implicit cooperation of a lot of other people, e.g., the general's army.