Nornagest comments on Existential Risk - Less Wrong
If I had been one of the people with the missile warning and the red button, I wouldn't have pressed it even if I had known the warning was real. What use would it be to launch a barrage of nuclear weapons against ordinary citizens simply because their foolish leaders did the same to you? It would only make things worse, and it certainly wouldn't save anyone. The primitive need for revenge can be extremely dangerous with today's technology.
Mutually assured destruction is essentially a precommitment strategy: if you use nuclear weapons on me, I commit to destroying you and your allies, a downside larger than any gain achievable through first use of nuclear weapons.
With this in mind, it's not clear to me that it'd be wrong (in the decision-theoretic sense, not the moral one) to launch on a known-good missile warning. TDT says we shouldn't differentiate between actions in an actual world and in a simulated or abstracted one: if we don't make that distinction, following through with a launch on warning functions to screen off counterfactual one-sided nuclear attacks, and ought to ripple back through the causal graph to screen off all nuclear attacks (a world without a nuclear war in it is better along most dimensions than the alternative). It's not a decision I'd enjoy making, but every increment of uncertainty about retaliation increases the attacker's weighting of the unilateral option, and that's something we really, really don't want. Revenge needn't enter into it.
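The weighting argument can be made concrete with a toy expected-value calculation. This is only an illustrative sketch, with made-up payoff numbers; it isn't meant to model any real strategic situation, just to show how each increment of doubt about retaliation raises the attacker's expected value of striking first:

```python
# Toy model of MAD as a precommitment. All payoff numbers are
# hypothetical: `gain` is the attacker's payoff from an unanswered
# first strike, `loss` its payoff if the defender retaliates, and
# the status quo is worth 0.

def first_strike_ev(p_retaliation, gain=10.0, loss=-100.0):
    """Attacker's expected value of a first strike, given the
    probability that the defender follows through on retaliation."""
    return (1 - p_retaliation) * gain + p_retaliation * loss

# With a fully credible commitment (p = 1) striking is strictly worse
# than doing nothing; as doubt grows, the EV climbs and eventually
# turns positive, which is when deterrence fails.
for p in (1.0, 0.5, 0.1, 0.05):
    print(f"P(retaliate)={p:.2f}  EV(strike)={first_strike_ev(p):+.1f}")
```

Under these (arbitrary) payoffs, the expected value of a first strike crosses from negative to positive somewhere below a 10% chance of retaliation, which is the sense in which uncertainty about follow-through is itself dangerous.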
(This assumes a no-first-use strategy, which the USSR in Petrov's time claimed to follow; the US claimed a more ambiguous policy that left open tactical nuclear options following conventional aggression, which can be modeled as a somewhat weaker deterrent against that lesser but still pretty nasty possibility.)
Of course, that all assumes that the parties involved are making a rational cost-benefit analysis with good information. I'm not sure offhand how the various less ideal scenarios would change the weighting, except that they seem to make pure MAD a less safe strategy than it'd otherwise be.