I think this is very fair! In a world where (i) AGI -> ASI is super fast; (ii) the military diffusion of ASI is exceptionally quick; and (iii) the marginal costs of scaling offensive capability are extremely low, any sense of a limited/total war distinction does indeed break down, and ASI will be the defining factor of military capability much, much sooner than we'd expect.
I'm instinctively sceptical of (iii), though, at least for a couple of years after the advent of ASI (the critical juncture for this strategy). In that period, I think the modal outcome still looks like ASIs engaging in routine cyberoperations, autonomously handling aerial warfare, and being fundamental to military operations/planning - yet it remains really costly to pursue a total war scenario aimed at completely crippling a state such as China. That could play out as the need to engineer huge numbers of drones/UAVs, the extremely costly development of a superweapon, the costs of securing every datacentre, etc. Within the period where we have to reckon with the effects of ASI, my guess is that the modal war - even with China - is still more a function of commitment than military advantage (which makes AGI realist rhetoric a risk amplifier).
That said, I wouldn't say I'm hugely confident here, and I definitely don't feel well calibrated on how likely this world is - one where the rapid diffusion of ASI also means very low marginal costs of scaling offensive capabilities. Though in this world, frankly, I don't think we avoid war at all unless there happen to be strong norms and sentiments against this kind of deployment. I grant that the "maximise our ability to deploy ASI offensively" approach makes sense if the goal is "we must win the eventual war with China", built on relatively high credences that we're in this rapid-diffusion, low-marginal-costs world. But given the uncertainty about whether we're in this world; the potentially catastrophic consequences of war; and the fact that maintaining a competitive advantage isn't mutually exclusive with equally strong norm-forming against war - the AGI realist rhetoric still makes me uneasy.
I do share the sense that no other proposed approach seems great, though. I'm just conscious that not enough people in the relevant circles even seem to be thinking about alternatives, because they've already bought into a frame that I think will only increase the chance of catastrophe.