I'm sure most readers of lesswrong and overcomingbias would consider a (edit: non-FAI) singleton scenario undesirable. (In a singleton scenario, a single political power or individual rules over most of humanity.)
A singleton could occur if a group of people developed Artificial General Intelligence with a significant lead over their competitors. The economic advantage of sole possession of AGI technology would give its controllers the opportunity to gain an economic or even a political monopoly on a relatively short timescale.
This particular risk, as Robin Hanson has pointed out, is less plausible if the "race for AGI" involves many competitors, none of whom can gain too large a lead over the others. This "close race" scenario is more likely if there is an "open-source" attitude in the AGI community. Even if private organizations attempt to maintain exclusive control of their own innovations, one might hope that hackers or internal leaks would release essential breakthroughs before the innovators could build up too great a lead.
Then, supposing AGI is rapidly acquired by many different powers soon after its development, one can further hope that the existence of multiple organizations possessing AGI, each with differing goals, would prevent any one power from using AGI to gain a monopoly.
This post is concerned with what happens afterwards, when AGI technology is more or less publicly available. Even in this situation, the long-term freedom of humanity is not guaranteed, because disparities in access to computational power could still allow one power to gain a technological lead over the rest of humanity. A lead in conventional warfare technology is less likely, and perhaps even less threatening, than a lead built on breakthroughs in cryptography.
In this information-dependent post-utopia, any power that managed to take control of the computational structures of a society would gain incredible leverage. A military power that could augment its conventional forces with the ability to intercept all of its enemies' communications whilst protecting its own would enjoy an enormous tactical advantage. In the post-AGI world, the key risk for a singleton is exclusive access to key-cracking technology.
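To make "key-cracking" concrete: most deployed public-key cryptography (RSA in particular) rests on the assumed hardness of problems like integer factoring, so a mathematical breakthrough there would turn intercepted ciphertext into plaintext. Below is a minimal toy sketch in Python, with tiny textbook parameters and a brute-force stand-in for the hypothetical breakthrough; it is an illustration, not a real attack pipeline.

```python
# Toy sketch: whoever can factor an RSA modulus can derive the private key.
# Tiny textbook numbers stand in for real ~2048-bit moduli, and trial
# division stands in for a hypothetical fast-factoring breakthrough.

def trial_factor(n):
    """Stand-in for a hypothetical fast factoring algorithm."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("no nontrivial factor found")

# Public key (n, e), exactly what an eavesdropper already sees.
n, e = 61 * 53, 17

# The "breakthrough": factor n, then derive the private exponent d.
p, q = trial_factor(n)
d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse (Python 3.8+)

ciphertext = pow(42, e, n)    # an intercepted encryption of the message 42
print(pow(ciphertext, d, n))  # prints 42: the plaintext is recovered
```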
Therefore, a long-term plan for avoiding a singleton includes not only measures to promote "open-source" sharing of AGI-relevant technologies, but also "open-source" sharing of cryptographic innovations.
Since any revolutions in cryptography are likely to come from mathematical breakthroughs, a true "open-source" policy for cryptography would include measures to make mathematical knowledge available on an unprecedented scale. A first step in carrying out such a plan might be to encode core mathematical results in an open-source database of formal proofs.
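As a (deliberately trivial) illustration of what an entry in such a database might look like, here is a machine-checkable proof in Lean 4; the theorem name is my own, and `Nat.add_comm` comes from Lean's core library:

```lean
-- A toy "database entry": a statement plus a proof the machine can verify.
-- `Nat.add_comm` is a core-library lemma; the name below is illustrative.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```

Projects like Mizar and Lean's Mathlib are existing open libraries in roughly this spirit.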
This leads to a variety of questions:
First, regarding the fast FOOMing issue:
Note that for 1 and 6 to be consistent, the probability of 1 should be higher than whatever you gave for the probability in 6 times the probability in 3 (ETA: fixed), since 3-4-5 is but one set of pathways for an AI to plausibly go FOOM.
This is not obvious. Moreover, what is to prevent the AGIs from working together in a way that makes humans irrelevant? If there's a paperclip maximizer and a stamp maximizer, they can agree to cooperate (after all, there's very little overlap between the elements in stamps and the elements in metal paperclips), and humans are then just as badly off as if only one of them were around. Multiple strong AIs that don't share human values mean we have even more intelligent competitors for resources in our approximate light cone. Increasing the number of competing AIs might make it less likely for humans to survive in any way that we'd recognize as something we want.
Not really. Military organizations rarely need to use cutting-edge cryptography. Most interesting cryptographic protocols, like public-key crypto, are useful when one has a large number of distinct economic actors who can't be trusted and don't have secure communication channels. Armies have centralized command structures, which allow them to distribute one-time pads or agree upon signals in advance, making most of these issues irrelevant. The situations where armies need cryptographic protocols are ones like World War 2, where one has many small groups that one needs to communicate with securely and to which one doesn't have easy physical access. In that sort of context, modern crypto can help. But large-scale ground wars and similar situations seem like an unlikely form of warfare.
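For concreteness, here is a minimal sketch of the one-time-pad arrangement mentioned above, in Python (illustrative names; `secrets` is the standard-library CSPRNG):

```python
import secrets

# One-time pad: with a pre-shared random pad at least as long as the
# message, encryption and decryption are the same XOR, and nothing
# key-related ever crosses the wire. A centralized command structure can
# hand out pads physically, sidestepping public-key crypto entirely.

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "pad must cover the whole message"
    return bytes(b ^ k for b, k in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # distributed in advance, used once

ciphertext = xor_bytes(message, pad)          # sent over an insecure channel
assert xor_bytes(ciphertext, pad) == message  # the same XOR decrypts
```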
Hang on. Are we now talking about security in general? That's a much broader set of questions than just cryptography. I don't know if it is in general more difficult to defend against such attacks. Most of those attacks have an easy answer: keep systems offline. Attacks through the internet can cause economic damage, but it is difficult for them to cause military damage unless high-priority systems are connected to the internet, which is just stupid.
Can you expand on this claim?
Has anyone ever suggested a global ban on cryptography or anything similar? Why does that seem like a scenario worth worrying about?
(Emphasis added.) I think you've got that backwards? 1 is P(fast FOOM), 6 is P(fast FOOM | P=NP OR NP ⊆ BQP), and you're arguing that P=NP or NP ⊆ BQP would make fast FOOM more likely, so 6 should be higher. That, or 6 should be changed to P((fast FOOM) AND (P=NP OR NP ⊆ BQP)). Yeah?
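For concreteness (my notation, matching how the thread reads items 1, 3, and 6): write A for "fast FOOM" and B for "P=NP or NP ⊆ BQP". The law of total probability gives

$$P(A) \;=\; P(A \mid B)\,P(B) + P(A \mid \neg B)\,P(\neg B) \;\ge\; P(A \mid B)\,P(B) \;=\; P(A \wedge B),$$

so P(1) can never fall below P(6) × P(3); and separately, P(A | B) ≥ P(A) exactly when B raises the probability of A, which is the sense in which 6 should be the higher number.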