When Eliezer proposes to "turn all the GPUs to Rubik's cubes", I think this pivotal act IS outright violence. Nanotechnology doesn't work that way (something something local forces dominate). What DOES work is having nearly unlimited drones, manufactured by robots that replicated themselves exponentially, giving ASI-equipped parties more industrial resources than the entire world's current capacity.
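A hedged back-of-envelope on the exponential claim (the one-month doubling time and millionfold head start are illustrative assumptions, not figures from the comment): a self-replicating industrial base that doubles monthly closes a $10^6$-fold capacity gap in about

$$\log_2 10^6 \approx 20 \text{ doublings} = 20 \text{ months},$$

so even a party starting a millionfold behind the rest of the world's industry pulls even in under two years, and doubles past it every month thereafter.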
Whoever has "nearly unlimited drones" is a State, and is committing State-Sponsored Violence, which is OK (by the international law of "whatcha gonna do about it").
So the winners of an AI race, with their "aligned" allied superintelligence, manufacture enough automated weapons to destroy everyone else's AI labs and to place the surviving human population under arrest.
That's how an AI war actually ends. If this is how it goes (and remember, this is a future where humans "won"), this is what happens.
The amount of violence before the outcome depends on the relative resources of the warring sides.
ASI singleton case: nobody has to be killed. Billions of drones using advanced technology attack everywhere on the planet at once. Decision-makers are bloodlessly placed under arrest, guards are tranquilized, and the drones have perfect aim, so guns are shot out of hands and the engines of military machines are hit with small shaped charges. The only violence where humans die is in the assaults on nuclear weapons facilities, since with that many simultaneous raids on hardened, alert sites the math guarantees some deaths (see the back-of-envelope below).
Some nukes may be fired at the territory of the nation hosting the ASI; this kills a few million tops, "depending on the breaks".
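To put numbers on "since math", a hedged back-of-envelope (the 99% per-raid bloodless probability and ~400-site count are illustrative assumptions, not figures from the comment): even if each individual raid on a nuclear site is nonlethal with probability 0.99, with roughly 400 sites worldwide the chance that every raid goes bloodlessly is

$$0.99^{400} \approx e^{-4} \approx 0.018,$$

about a 2% chance of zero deaths; some lethal violence at the nuclear facilities is a near-certainty.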
Two warring parties case, where one party's ASI or industrial resources are significantly weaker: nuclear war and a prolonged, seemingly endless series of battles between drones. Millions or billions of humans are killed as collateral damage; battlefields are littered with nuclear blast craters and destroyed hardware. A "minor inconvenience" for the winning side: since they have exponentially built robotics, the cleanup is rapid.
Free-for-all case, where everyone gets ASI and it's not actually all that strong in utility terms: outcomes range from a world of international treaties similar to now, in a stable equilibrium, to a world war that consumes the earth, in which most humans don't survive. Again, it's a minor inconvenience for the winners. No digital data is lost, and exponentially replicated robotics mean the only long-term cost is a few years of cleanup.
I think there is some value in exploring the philosophical foundations of ethics, and LessWrong culture is often up for that sort of thing. But, it's worth saying explicitly: the taboo against violence is correct, and has strong arguments for it from a wide variety of angles. People who think their case is an exception are nearly always wrong, and nearly always make things worse.
(This does not include things that could be construed as violence but only if you stretch the definition, like supporting regulation through normal legal channels, or aggressive criticism, or lawsuits. I think those things are not taboo and would support some of them.)
Someone, or some group, whose moral/ethical/social positions are seen as objectionable by a substantial fraction of society can nonetheless win lawsuits and rely on the state's violent means to enforce the awards.
e.g. a major oil company winning a lawsuit against activists, forcing some degree of environmental degradation.
or vice versa,
e.g. nudist lifestyle and porn activists winning lawsuits against widely supported restrictions on virtual child porn, forcing a huge expansion in the effective grey area of child porn.
The losing side being punished...