Likewise, I appreciate the response!
I would say (3). Societal resilience is mandatory as threat systems proliferate and grow in power. You would need positive systems to counter them.
Regarding your points on writing in a dystopian tone, I don't disagree. But it's easier to highlight an idea via narrative than via bullet points. I personally like Mr. Smiles; he's my new mascot for when I inevitably give up trying to solve AI alignment and turn to villainy.
A few comparisons/contrasts on allowing vs. not allowing the creation of bad systems:
I agree with Paul Christiano here. Let's call this rogue-AI-preventing superintelligence Mr. Smiles. Let's assume that Mr. Smiles cannot find a "good" solution within a decade, and instead must temporarily spend much of his effort preventing the creation of "bad" AGI.
How does Mr. Smiles ensure that no rogue actor, nation-state, corporation, or other organization creates AGI? Well, Mr. Smiles needs two things: sensors and actuators. The "mind" of Mr. Smiles isn't a huge problem, but the sensors and actuators are extremely problematic:
Is a system that optimizes for destruction an optimizing system?