Drone Wars Endgame
Note: Updates at the end of the article on 9 Dec 2025 reflect recent changes.

1. Introduction

It is probably becoming clear to anyone who follows war lately, especially the Ukraine conflict, that autonomous weapons, especially drones, are rapidly changing which weapons and tactics are effective. The purpose of this article is first to consider how a fully autonomous force built with current or near-term technology would fare against a state-of-the-art conventional force, and then to consider where the attack-versus-defense equilibrium would settle for autonomous forces using advanced but foreseeable technology.

1.1 Description of a fully autonomous force using near-term tech

The idea is a force with as few different types of units as possible, controlled in a distributed, mesh-network fashion. The goal is for it to take over unlimited land territory, even in the face of tactical or strategic nukes, and without air superiority against very fast or high-flying aircraft. Units are assumed to be fully autonomous, and the emphasis is on economics as well as capability. For example, many units are so cheap that they cannot be cost-effectively countered by conventional missiles.

1.2 Unit overview

All units can fly. They are optimized to destroy all land-based armor, slow-flying aircraft, competing drones, and humans. Combat units are supported by larger logistics units.

1.3 Communication and defense

All units are expected to communicate over point-to-point links (e.g. laser) and are hardened to varying degrees against microwave attack. Because the units are autonomous, jamming would be very difficult and electronic warfare not very effective.

1.4 Unit details

1.4.1 Recon and targeting drone

The cheapest and smallest unit: a battery-powered drone with a video and targeting system. Recon units form a mesh network. They coordinate with missiles to defeat countermeasures such as flares and chaff from slow-moving aircraft (helicopters etc.) by observing and transmitting the position of the target from somewhat