In the debate over AI development, two movements stand as opposites: PauseAI calls for slowing down AI progress, and e/acc (effective accelerationism) calls for rapid advancement. But what if both sides are working against their own stated interests? What if the most rational strategy for each would be to adopt the other's tactics—if not their ultimate goals?
AI development speed ultimately comes down to policy decisions, which are themselves downstream of public opinion. No matter how compelling technical arguments might be on either side, widespread sentiment will determine what regulations are politically viable.
Public opinion is most powerfully mobilized against technologies following visible disasters. Consider nuclear power: despite being statistically safer than fossil fuels, its development has been stagnant for decades. Why? Not because of environmental activists, but because of Chernobyl, Three Mile Island, and Fukushima. These disasters produce visceral public reactions that statistics cannot overcome. Just as people fear flying more than driving despite the latter being far more dangerous, catastrophic events shape policy regardless of their statistical rarity.
Any e/acc advocate with a time horizon extending beyond the next fiscal quarter should recognize that the most robust path to sustained, long-term AI acceleration runs through implementing reasonable safety measures now. By accepting measured caution today, accelerationists could prevent the post-catastrophe scenario in which public fear triggers an open-ended, comprehensive slowdown lasting decades. Rushing headlong into development without guardrails virtually guarantees the major "warning shot" that would turn public sentiment permanently against rapid AI advancement, just as Chernobyl did against nuclear power.
Meanwhile, the biggest dangers from superintelligent AI (proxy gaming, deception, and recursive self-improvement) won't produce clear evidence until it's too late. AI safety work focused on current harms (hallucination, complicity with malicious use, saying politically incorrect things, etc.) fails to address the fundamental alignment problems posed by ASI. Those problems may take decades to solve, if they're solvable at all. More concerning still, "successful" alignment might simply create dystopian concentrations of power.
Near-term AI safety efforts, both technical and policy-based, might succeed at preventing minor catastrophes while allowing development to continue unabated toward existential risks. They are like reinforcing a car so it won't break down on rough terrain, leaving it free to drive all the more smoothly off a cliff.
…
If any of that sounded like a good idea, note the date of posting and consider this your periodic reminder that AI safety is not a game. Trying to play 3D chess with complex systems is a recipe for unintended, potentially irreversible consequences. (Edit: Yes, this is a weaksauce argument. A self-rebuttal is here. See also: Holly Elmore's case for AI Safety Advocacy to the Public.)
…But if you’re on break and just want a moment to blow off steam, feel free to have fun in the comments.
There's a gap in the Three Mile Island/Chernobyl/Fukushima analogy: those disasters all occurred in the civilian use of nuclear power. I'm not saying they didn't also affect the nuclear arms race, only that, for completeness, the arms race dynamics have to be considered as well.