Here I argue that following the Maxipok rule could have truly catastrophic consequences.
Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."
And finally, here I argue that a superintelligence singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.
I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)
Is there anything we can realistically do about it without crippling the whole of biotech?
Perhaps any bioprinter, or similar synthesis tool, could be kept constantly connected to a narrow AI that screens every job, so that it can't accidentally or intentionally print ANY viruses, bacteria, or prions.
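To make the idea concrete, here is a toy sketch of what such a screening gate might look like in Python. Everything here is a placeholder I made up for illustration: the k-mer matching scheme, the window size, and the "hazard database" are not real pathogen signatures, and a real screening system (e.g., one matching against curated pathogen genome databases) would need to be vastly more sophisticated and robust to obfuscated sequences.

```python
# Toy sketch of a sequence-screening gate in front of a bioprinter.
# The hazard signatures below are invented placeholders, NOT real data.

KMER_SIZE = 12  # illustrative window length for signature matching

# Hypothetical database of k-mers drawn from known hazardous genomes.
HAZARD_KMERS = {
    "ATGCCGTTAGGA",
    "GGCATTACCGGT",
}


def kmers(seq: str, k: int = KMER_SIZE):
    """Yield every k-length window of the sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]


def screen(seq: str) -> bool:
    """Return True if the sequence is clear to print,
    False if any window matches a known hazard signature."""
    return not any(km in HAZARD_KMERS for km in kmers(seq.upper()))


def print_job(seq: str) -> str:
    """Gate the printer: refuse any job that fails screening."""
    if not screen(seq):
        return "REFUSED: sequence matches a hazard signature"
    return "PRINTED"
```

The obvious weakness of this naive approach is that exact k-mer matching is easy to evade (synonymous substitutions, splitting an order across printers), which is exactly why the screening AI would need to be "narrow" but still much smarter than a lookup table.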