I'm sure most readers of Less Wrong and Overcoming Bias would consider a (edit: non-FAI) singleton scenario undesirable. (In a singleton scenario, a single political power or individual rules over most of humanity.)
A singleton could arise if a group of people developed Artificial General Intelligence with a significant lead over their competitors. The economic advantage of sole possession of AGI technology would give its controllers the opportunity to gain an economic, or even a political, monopoly on a relatively short timescale.
This particular risk, as Robin Hanson has pointed out, is less plausible if the "race for AGI" involves many competitors, none of whom can gain too large a lead over the others. This "close race" scenario is more likely if there is an "open-source" attitude in the AGI community. Even if private organizations try to keep exclusive control of their own innovations, one might hope that hackers or internal leaks would release essential breakthroughs before the innovators could pull too far ahead.
Then, supposing AGI is rapidly acquired by many different powers soon after its development, one can further hope that the existence of multiple AGI-equipped organizations with differing goals would prevent any one power from using AGI to gain a monopoly.
This post is concerned with what happens afterwards, when AGI technology is more or less publicly available. Even in this situation, the long-term freedom of humanity is not guaranteed, because disparities in access to computational power could still allow one power to gain a technological lead over the rest of humanity. A lead in conventional warfare technology is less likely, and perhaps even less threatening, than a lead built on breakthroughs in cryptography.
In this information-dependent post-utopia, any power that manages to take control of a society's computational infrastructure would gain enormous leverage. A military power that could augment its conventional forces with the ability to intercept all of its enemies' communications whilst protecting its own would enjoy a decisive tactical advantage. In the post-AGI world, the key risk of singleton is exclusive access to key-cracking technology.
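The scale of this risk is worth a rough quantification. The sketch below is a back-of-the-envelope illustration, not a model of any real cryptosystem (the `exhaustive_trials` helper and the key sizes are assumptions for the example): against brute-force key search, a raw hardware lead buys an attacker surprisingly little, while a mathematical shortcut can be decisive.

```python
import math

def exhaustive_trials(key_bits):
    """Worst-case number of guesses to recover a key by exhaustive search."""
    return 2 ** key_bits

# A raw compute lead buys surprisingly little: a million-fold hardware
# advantage only lets an attacker brute-force keys about 20 bits longer.
extra_bits = math.log2(10**6)  # roughly 19.9 bits

# A mathematical breakthrough is different in kind. An algorithm that
# halves the effective key length (as Grover's search does for symmetric
# keys on a quantum computer) turns a 128-bit search into a 64-bit one,
# a speedup of 2**64 that no hardware lead can match.
speedup = exhaustive_trials(128) // exhaustive_trials(64)
print(f"extra bits from a 10^6x compute lead: {extra_bits:.1f}")
print(f"speedup from halving effective key length: 2^{speedup.bit_length() - 1}")
```

This asymmetry is one reason to worry more about exclusive access to cryptanalytic breakthroughs than about hardware disparities alone.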
Therefore, a long-term plan for avoiding a singleton includes not only measures to promote "open-source" sharing of AGI-relevant technologies, but also "open-source" sharing of cryptographic innovations.
Since any revolution in cryptography is likely to come from a mathematical breakthrough, a true "open-source" policy for cryptography would include measures to make mathematical knowledge available on an unprecedented scale. A first step toward such a plan might be encoding core mathematical results in an open-source database of formal proofs.
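To make that last suggestion concrete: projects such as Metamath and Mizar already encode mathematics as machine-checkable proofs, and an entry in such a database would look something like the following minimal sketch in Lean (the theorem name here is illustrative):

```lean
-- A minimal entry in a formal-proof database: a machine-checkable
-- statement together with its proof, here a trivial arithmetic lemma
-- discharged by a library result.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The point of such a database is that anyone can mechanically verify every result in it, so no single group can hoard or gatekeep the underlying mathematics.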
Not an overwhelming horde of ridiculously high-tech robotic killing machines capable of enforcing any will, plus a network of laser defenses ready to intercept the primitive nuclear devices that are the only thing their competition could hope to damage them with?
Or a swarm of nanobots? Maybe an engineered super virus?
If you have a friendly (to you) AGI and they don't, then you win. That's how it works.
There's an implicit premise here that AGI will be as powerful as is often suggested on this site. Not everyone thinks that is likely. And there are some forms of AGI that are much less likely to be that useful without some time. The most obvious example would be if the first AGIs are uploads. Modifying uploads could be very difficult given the non-modular, very tangled nature of the human brain. In time, uploads would still likely be very helpful (especially if Moore's Law continues to hold...