Here I argue that following the Maxipok rule could have truly catastrophic consequences.
Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."
And finally, here I argue that a superintelligent singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.
I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)
The problem with this consensus position is that it fails to imagine that several deadly pandemics could run simultaneously, and that existential terrorists could deliberately orchestrate this by manipulating several viruses at once. A rather simple AI could help engineer deadly plagues in droves; it would not need to be superintelligent to do so.
Personally, I see it as a major failure of the whole x-risk community that such risks are ignored and not even discussed.
Is there anything we can realistically do about this without crippling the whole of biotech?