Here I argue that following the Maxipok rule could have truly catastrophic consequences.
Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."
And finally, here I argue that a superintelligent singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, which would result in a Hobbesian state of constant war among Earthians.
I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)
TL;DR of the first article: space colonisation will produce star wars and result in enormous suffering, that is, s-risk.
My two cents: Maxipok is mostly not about space colonisation but about preventing total extinction. I also hold an opinion few share: that death is the worst form of suffering, because it is just that bad. Pain and suffering are part of life and are acceptable if they are diluted by much larger pleasures. Surely space wars are possible (without a singleton), but life is intrinsically good, and most of the time there will be no wars but rather some form of very sophisticated space pleasures, which will dilute the suffering from wars.
But I also don't share the Maxipok interpretation that we should start space colonisation as soon as possible so as to bring the maximum number of possible people into existence. First, all possible people already exist somewhere else in the infinite multiverse. Second, it is better to be slow but sure.
"My 5 dollars: maxipoc is mostly not about space colonisation, but prevention of total extinction." But the goal of avoiding an x-catastrophe is to reach technological maturity, and reaching technological maturity would require space colonization (to satisfy the requirement that we have "total control" over nature). Right?