I'm sure most readers of LessWrong and Overcoming Bias would consider a (edit: non-FAI) singleton scenario undesirable. (In a singleton scenario, a single political power or individual rules over most of humanity.)
A singleton could arise if a group of people developed Artificial General Intelligence with a significant lead over their competitors. The economic advantage of sole possession of AGI technology would give its controllers the opportunity to gain an economic or even a political monopoly on a relatively short timescale.
This particular risk, as Robin Hanson has pointed out, is less plausible if the "race for AGI" involves many competitors, none of whom can gain too large a lead over the others. This "close race" scenario is more likely if there is an "open-source" attitude in the AGI community. Even if private organizations attempt to maintain exclusive control of their own innovations, one might hope that hackers or internal leaks would release essential breakthroughs before the innovators could pull too far ahead.
Then, supposing AGI is rapidly acquired by many different powers soon after its development, one can further hope that the existence of multiple AGI-equipped organizations with differing goals would prevent any one power from using AGI to gain a monopoly.
This post is concerned with what happens afterwards, when AGI technology is more or less publicly available. Even then, the long-term freedom of humanity is not guaranteed, because disparities in access to computational power could still allow one power to gain a technological lead over the rest of humanity. Leads in conventional warfare technology are less likely, and perhaps less threatening, than leads in cryptography.
In this information-dependent post-utopia, any power that managed to take control of a society's computational infrastructure would gain incredible leverage. Any military power that could augment its conventional forces with the ability to intercept all of its enemies' communications, whilst protecting its own, would enjoy an enormous tactical advantage. In the post-AGI world, the key singleton risk is exclusive access to key-cracking technology.
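To make the stakes concrete, here is a minimal back-of-the-envelope sketch in Python of how a raw computational lead translates into a key-cracking advantage. The trial rates and key sizes below are assumptions chosen purely for illustration, not estimates of any real capability.

```python
# Illustrative arithmetic only: expected brute-force search time for a
# symmetric key, given an attacker's key-trial rate. All figures below
# (trial rates, key sizes) are assumptions chosen for the example.

def expected_crack_years(key_bits: int, keys_per_second: float) -> float:
    """Expected time to find a key by exhaustive search
    (on average, half the keyspace must be tried)."""
    trials = 2 ** (key_bits - 1)          # half of the 2**key_bits keyspace
    seconds = trials / keys_per_second
    return seconds / (365.25 * 24 * 3600)

# A hypothetical power with a millionfold computational lead:
baseline_rate = 1e12   # assumed: 10^12 key trials/sec for everyone else
leader_rate = 1e18     # assumed: 10^18 key trials/sec for the leading power

for bits in (64, 80, 128):
    print(f"{bits}-bit key: "
          f"baseline {expected_crack_years(bits, baseline_rate):.2e} yr, "
          f"leader {expected_crack_years(bits, leader_rate):.2e} yr")
```

Note what the sketch implies: a purely computational lead can be countered by defenders simply lengthening their keys, since each added bit doubles the search cost. A mathematical breakthrough that shrinks the effective search space cannot be countered this way, which is why the decisive advantage is mathematical rather than raw compute.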
Therefore, a long-term plan for avoiding a singleton includes not only measures to promote "open-source" sharing of AGI-relevant technologies, but also "open-source" sharing of cryptographic innovations.
Since any revolutions in cryptography are likely to come from mathematical breakthroughs, a true "open-source" policy for cryptography would include measures to make mathematical knowledge available on an unprecedented scale. A first step toward carrying out such a plan might be to encode core mathematical results in an open-source database of formal proofs.
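As a taste of what such a database would contain, here is a minimal sketch of one entry, formalized in Lean 4 against Mathlib (an existing open-source library of formal proofs). The choice of theorem, proof assistant, and library is just one possibility among several.

```lean
-- A minimal sketch of one database entry: Euclid's theorem that there
-- are infinitely many primes, stated as "for every natural number n
-- there is a prime p with n ≤ p".
import Mathlib

-- Mathlib already provides this result as `Nat.exists_infinite_primes`,
-- so the formal statement can simply cite it.
example (n : ℕ) : ∃ p, n ≤ p ∧ Nat.Prime p :=
  Nat.exists_infinite_primes n
```

The point of such encodings is that every result is machine-checkable, so anyone with the database can verify and build on the mathematics without trusting the institution that published it.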
I don't think Robin particularly wants to die; note, for example, that he has signed up for cryonics. As for the burning of the cosmic commons, it isn't clear to me that he is in favor of it, just that he considers it a likely result. Given his training as an economist, that shouldn't be too surprising.
Can you expand on this? I don't know whether this is something that is a) agreed upon or b) sufficiently well-defined. If, for example, AGI turns out to be unable to go foom, and we all get functioning cryonics and clinical immortality, that seems like a pretty good outcome. I don't see how you get that non-singleton results must be horrific, or even that most of them must be. There may be definitional issues here about what constitutes horrific.
No, just the basic 'everybody dies and that which constitutes the human value system is obliterated without being met'.
But there are certainly issues regarding differing premises that would prevent useful discussion here without multiple posts' worth of groundwork.