Here I argue that following the Maxipok rule could have truly catastrophic consequences.
Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."
And finally, here I argue that a superintelligent singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.
I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)
On super-plagues, I've understood the consensus position to be that even though one could create a pathogen with a very large death toll, actual human extinction would be very unlikely. E.g.
Asteroid strikes do sound more plausible, though there too I would expect a lot of people to be aware of the possibility and thus devote considerable resources to ensuring the safety of any space operations capable of actually diverting asteroids.
I'm not an expert on bioweapons, but I note that the paper you cite is dated 2005, before the advent of synthetic biology. The recent report from FHI seems to consider bioweapons to be a realistic existential risk.