I’ve been pretty confused by what it means for a technology or operation to be ‘defensive’. 

Technologies do things. What does it mean for a technology to, like, be something that stops bad things? Is an anti-missile missile the same as a missile? Can we come up with a classification system that feels a bit more systematic than Vitalik Buterin’s classification based on whether things are big or not? Can we extend the great work being done around resilience and adaptation to risks from AI to technology more broadly? 

And perhaps most crucially: if we can make technologies that stop bad things, does that mean that they’re inherently good and we should go ahead and def/acc making them? 

In this short blog post, I make three claims. 

1. ‘Defensive’ technologies are status-quo preserving or entropy minimising. 

The most obvious view of defensive technologies is as tools that counter deliberate attempts to disrupt a valuable system. However, we often want to design systems that defend against disruption without a conscious perpetrator (e.g. sprinkler systems against accidental fires). 

For this reason, I’ve started thinking about ‘narrowly defensive’ technologies as a subcategory of ‘broadly defensive’ technologies (e.g. sprinklers) which broadly work to preserve the status-quo. For convenience, I’ll refer to the latter here as ‘defensive technologies’, but you could call them ‘anti-entropic’ or ‘status-quo preserving’ as preferred.

Specifically, this set of technologies helps to 1) secure the status quo against interventions that would change it, 2) identify interventions that would change it, or 3) return the situation to normal as quickly as possible after an intervention has changed it. 

2. As such, not all defensive technologies are inherently ‘good’. 

Making the status quo harder to change is not always a good thing. But it’s not inherently a bad thing, either. 

Most defensive technologies will defend things that most people agree are good. But some might defend things that you think are bad, and/or make it harder for the people who do share your value system to change them. As a basic example, you can encrypt sensitive biosecurity information to prevent it from being hacked and revealed, but you can also encrypt child pornography.

Broadly, I think most defensive technologies are net-good, because I think there’s a lot of consensus about good directions for humanity. However, I think there are times when defensive technologies might make things worse (like the technologically-enhanced dictatorship below). Carefully considering the potential negative consequences of developing defensive technologies and actively creating strategies to mitigate them remains essential.

Moreover, defensive technologies equip their holders with power that can incentivise them to initiate conflict. If you know that you’ve got better bunkers, you might be more willing to start wars. Having better bunkers might even trigger your enemies to attack first. 

3. If you want technology to make the world a better place, there are at least three other types of technology you should consider:


  • Offensive de-escalation: Developing ‘substitute technologies’ that achieve the intended outcomes with minimal suffering and maximal reversibility (gun → taser)
  • Coordination engineering: Developing technologies that incentivise coordination between different actors  (global markets → mutual interdependence)
  • Anti-risk-compensation engineering: Countries with better bunkers might want to start more wars. Can we enforce global clauses that would ensure that, on starting a war illegitimately, all the bunkers of a country would self-destruct, or otherwise create defensive technologies that are only operational in situations where they are in fact being used for defense? 

Quick note before we begin: this post builds on existing theories of defensive technologies, but does not go into them in detail. To learn more about other theories, see:

  • Differential technologies: Differential technologies were suggested by Nick Bostrom and built on in papers such as this one by Sandberg, Dafoe and others that argues ‘certain technologies reduce risks from other technologies or constitute low-risk substitutes…it may be beneficial to delay risk-increasing technologies and preferentially advance risk-reducing defensive, safety, or substitute technologies.’ An interesting essay by Michael Nielsen explores some issues with the framing.
  • Def/acc: First set out by Vitalik Buterin, this idea seems to be a spin on Bostrom’s that places more emphasis on accelerating defense than on decelerating offense (Bostrom et al. emphasised both). It is now the north star / marketing spin of UK-based Entrepreneur First’s def/acc programme.
  • Societal Adaptation to Advanced AI: There's been a lot of interest around defensive technology related to AI, such as this paper, which defines defence as an intervention that reduces the likelihood that potentially harmful use of an AI system translates into harm. See also this recent work by Jamie Bernardi on concrete policy for defensive acceleration.

     

Claim 1: Defensive technologies are status-quo preserving

There are lots of technologies that I often see being described as ‘defensive’. What unites them is that they are useful for minimising uncertainty or protecting the status quo. To make that clearer, I’ve split them into three categories: 

  • Secure: make sure that what we intend to happen does, reducing our uncertainty about the present
  • Assure: reduce our uncertainty about the future, allowing us to take actions which prevent change or assure positive change
  • Insure: improve our ability to ‘bounce back’ from uncertainty to a place of certainty 

Let’s break these down in more detail.

Securing technologies make sure that a pre-decided series of actions does in fact happen. For instance: 

  • Ensuring components of a machine work as intended given the pressures of that machine (heat shields in a rocket, verification systems holding up agreements on AI).
  • Ensuring components of a machine will work as intended given the pressures of the world in which the machine is deployed (encryption against hackers, sketched after this list; vulnerability detection for red-teaming; nonflammable building materials such as stone bricks; controlled avalanches triggered at night near busy ski slopes).
  • Creating technologies which allow the machine to be deployed in a different world which affords less hostile conditions (Starlink moving the physical infrastructure of internet access to space, where it is less likely to break down, is a good example Buterin raises; strategic decomputerisation could be another. It’s worth noticing that defensive displacement can happen both up and down the tech tree: after all, paper can’t be hacked).
  • Ensuring that if one section of the machine breaks, the other parts don’t also break by default (firewalls, fuseboxes, airgapping, firebreaks).
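
To make the ‘secure’ category a little more concrete, here is a minimal sketch of the encryption example above. It assumes the third-party cryptography package as a tooling choice and an invented secret; any authenticated encryption scheme would illustrate the same point.

```python
# Minimal sketch: symmetric encryption as a 'securing' technology.
# Assumes the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # secret held only by the defender
cipher = Fernet(key)

plaintext = b"sensitive biosecurity protocol"   # hypothetical secret
token = cipher.encrypt(plaintext)  # what an interceptor would see: opaque bytes

# Only a holder of the key can return the data to its intended state.
assert cipher.decrypt(token) == plaintext
```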

Assuring technologies help humans to work out what is going to happen in areas that we don’t directly control, so we can keep the status quo stable. For instance: 

  • Predicting new threats to society (by which I mean entropy-creating or status-quo-destroying interventions) ahead of time, such that we can take action to redress them or prevent them from happening at all (e.g. platforms supporting prediction markets)
  • Monitoring threats that are already happening (e.g. AI incident reporting, AI risk monitoring)
  • Detecting threats that are already happening in a small way, but could get bigger fast (e.g. pandemic forecasting, better early detection of biological pathogens); a toy detector is sketched after this list
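
The toy detector mentioned above is sketched below: a crude threshold rule that flags a time series growing quickly from a small base. The case counts, window and threshold are invented for illustration; a real early-warning system would use proper statistical forecasting.

```python
# Toy early-warning monitor: flag a series whose recent growth suggests a
# small problem could soon become a large one. All numbers are illustrative.
from typing import Sequence

def growth_alert(counts: Sequence[float], window: int = 3, threshold: float = 1.5) -> bool:
    """True if the mean of the last `window` observations exceeds `threshold`
    times the mean of the preceding `window` observations."""
    if len(counts) < 2 * window:
        return False  # not enough history to compare
    recent = sum(counts[-window:]) / window
    baseline = sum(counts[-2 * window:-window]) / window
    return baseline > 0 and recent >= threshold * baseline

weekly_cases = [2, 3, 2, 4, 7, 12]   # hypothetical weekly pathogen detections
print(growth_alert(weekly_cases))    # True: recent weeks are growing fast
```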

Finally, insuring technologies help humans to ‘bounce back’ in scenarios where bad things do happen, re-establishing the status quo as quickly as possible. For instance: 

  • Diverting resources to places where they can be retrieved before a catastrophe (e.g. debt, long-lasting or cheap pandemic preparedness technologies that can be bought ahead of a crisis, the bunkers in Fallout)
  • Mitigating the damage at the moment of crisis (sprinkler systems that turn on automatically when a fire starts, airbags in a car crash, antilock braking)
  • Diverting resources to places where they can be retrieved after a catastrophe (e.g. massively competent AI underwriters capable of accurately pricing risks, or offering cheaper premiums to communities at risk of climate change); a toy premium calculation is sketched after this list
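
To illustrate the underwriting example above, here is the toy premium calculation flagged in the last bullet: premium as expected loss plus a loading factor. The probability, loss size and loading are invented; real pricing is far more involved, but the point is that better risk models make the expected-loss term more accurate.

```python
# Toy insurance pricing: premium = expected loss plus a proportional loading
# for administration and profit. All figures below are invented.

def annual_premium(p_loss: float, loss_size: float, loading: float = 0.2) -> float:
    """Price a one-year policy from an annual loss probability, the size of
    that loss, and a proportional loading factor."""
    expected_loss = p_loss * loss_size
    return expected_loss * (1 + loading)

# Hypothetical flood policy: 1% annual chance of a 200,000 loss.
print(annual_premium(0.01, 200_000))  # 2400.0
```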

It’s worth noticing two things here. First, these technologies work in an ecosystem together, which we might think of as a defensive ecosystem. So effective prediction and effective red-teaming might support effective underwriting, or effective security interventions. Effective underwriting in turn might keep companies solvent in the case of failures, allowing for further investment in effective security development, generating revenues that flow into companies doing further prediction, and so on. This might take the form of a virtuous cycle, propagating through a network of organisations. 

Second—and perhaps obviously—defensive technologies do seem really good. I think to a large degree they could be. I’m skeptical, however, that they would always improve the world. In the next section I suggest why. 

Claim 2: Not all defensive technologies are inherently good

A lot of the applications of defensive technologies seem very close to inherently good. Humans aren’t that great at getting things right, or having what they want to happen happen, and most deviations from intended pathways are just bad and serve no purpose other than chaos. Technologies that secure against non-human hazards like fires, pandemics, and the natural breakdown of machines over time seem like very good things. Same with insurance. Perhaps one reason why I’m so optimistic about defensive technologies is that I’m a big believer that there are quite a few things that almost everyone agrees on and which should be preserved by default. 

However, some categories of defensive technologies might serve to protect human systems against the efforts of humans who would like to dismantle those systems. Indeed, this seems most true to the ‘defender’ framing, which implies an intentional attack. 

This would be an okay state of affairs if the attacker/defender framing were not subjective. 

Just as one man’s terrorist is another man’s freedom fighter, so can people use defensive technology to defend ‘the wrong thing’. When you’re developing a technology, you might have a very clear idea about who you want to develop that technology and why. This might work out, but it’s also possible that your ideas will be copied by other groups and used to defend value systems different from your own. And when you’re publishing open science, it might be very hard to stop ‘defensive science’ from being used by other groups to create ‘defensive technology’ that undermines your values. We can’t be confident that the work we do to support defensive science and technology will always be used to support values that we approve of.

Take an area of research that is as obviously defensive as it gets: research to make general-purpose AI models more resilient to jailbreaking attacks, also known as AI robustness. This research is critical to ensuring that powerful systems are defended against malicious actors who might try to disable or misuse them, and it is consequently a key pillar of AI safety research for good reason.

Yet at the same time, AI robustness might assist nefarious aims. Imagine the case of a totalitarian dictator who uses snooping models pre-installed in national hardware to monitor how his subjects use the internet, or agents which ‘clean up’ undesirable opinions from online platforms. His subjects dream of building digital spaces where they can plan their insurrection, but they cannot jailbreak the models or otherwise divert them from their rigid paths. ‘Defensive technology’ can serve to calcify the power difference between oppressor and oppressed. 
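
To make the dual-use point crudely concrete, here is a toy filter whose behaviour depends entirely on who writes the blocklist. The function and blocklists are invented for illustration, and a real robust model would use learned classifiers rather than keyword matching, but the asymmetry is the same: the machinery defends whatever policy it is handed.

```python
# Toy illustration: the same 'robust' filtering machinery serves whoever
# supplies the policy. The blocklists below are invented examples.

def make_filter(blocklist: set[str]):
    """Return a predicate that rejects any message containing a blocked phrase."""
    def allowed(message: str) -> bool:
        lowered = message.lower()
        return not any(phrase in lowered for phrase in blocklist)
    return allowed

safety_filter = make_filter({"synthesise the pathogen"})  # blocks misuse
censor_filter = make_filter({"organise a protest"})       # blocks dissent

print(safety_filter("how do I organise a protest?"))   # True: allowed through
print(censor_filter("how do I organise a protest?"))   # False: blocked
```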

This might seem a slightly contrived example, but it remains the case that for almost any defensive technology you can imagine, there exists a negative potential application: 

  • Predictive technologies can predict insurrection.
  • Misinformation filters can accelerate cultural homogenisation.
  • Encryption secures harmful information and could preserve the identity of people trafficking CSAM online.
  • Facial recognition can secure personal devices but can also be used for surveillance.
  • Data anonymisation safeguards privacy but can enable illicit markets by concealing illegal activities.
  • Decentralised AI systems (one of Buterin’s examples of defensive technologies) might become a refuge for malicious actors trying to subvert restrictions on training powerful models.

Basically, if you’re unhappy with the state of affairs you’re living in, then you probably don’t want anyone to develop technology that makes it harder to change that state of affairs. Whilst many people around the world might broadly support their regimes becoming harder to challenge, others might find this less desirable. But when you develop science or technology, it’s really hard to stop that information from leaking out into the world. 

This isn’t to say that defensive technologies are never useful. There might be a lot of values that most humans agree on, and developing technologies to defend these is a robustly good thing. However, people should think carefully about both the values that they are attempting to protect and the extent to which these technologies might be used to protect values that contradict them. The situation is often more complicated than ‘helping defenders do their job is good’. 

Some cases where it still might be worthwhile releasing a defensive technology that could help malicious actors to defend values antithetical to yours:

  • If the malicious actor would otherwise attack you soon, and sharing the technology might cause them to cooperate instead.
  • If the defensive technology would stop malicious actors from getting hold of other, more powerful technologies, offensive or defensive.
  • If it would defend a lot of people, and those that would steal it would harm relatively few.
  • If you were really, really good at working out who was going to use that technology and trusted them (e.g. you’re designing a secure API for use in a secure facility).

Claim 3 (bonus): There are lots of other considerations that might be valuable

Defensive or ‘status-quo preserving’ technologies aren’t the only way to develop technologies that can improve the future. I’m interested in specific interventions that make the future better by making suffering less bad and less likely. 

I considered ending the piece here: the following is more notional and uncertain. However, I’d be super interested in people’s comments on this (what’s most effective? where is the low-hanging fruit?), so I’m including it as a bonus.

Offensive de-escalation

Developing substitute technologies that achieve the intended outcomes with minimal suffering and maximal reversibility seems like a robustly good thing. These sorts of technologies have been well mapped out around weapons (guns → rubber bullets, tasers, bean-bag bullets etc.). But I haven’t seen as much literature around what substitutes would look like for cyberattacks, sanctions, landmines (e.g. ones that deactivate automatically after a period of time or biodegrade), missiles etc. Maybe this is something I should look out for? 

(Note: less harmful substitutes for offensive technologies may encourage greater use; see ‘anti-risk-compensation engineering’ below for thoughts on this.)

Coordination engineering

I’m interested in the ways in which AI technologies might help to resolve conflicts and solve public goods problems. 

One area I’m interested in relates to AI-based conflict resolution systems. These might work at the interpersonal level (solicitor-agents), the inter-corporate level, or the international level. Consider the benefits of a system capable of organising complex multinational coordination in conflict scenarios: 

  • They might be better than teams of humans, who might be hard-pressed to keep up with all the information in a complex, evolving scenario and make good decisions all the time
  • They might help countries to organise more complex treaties more easily, thereby ensuring that countries got closer to their ideal arrangements between two parties
  • It might be that there are situations in which two actors are in conflict, but the optimal arrangement between the two groups relies on coordination from a third or a fourth, or many more. The systems could organise these multilateral agreements more cost-effectively.

I think that these systems could become quite effective at searching the problem space to find the optimal outcome for all parties. This might take conflicts from the point of ‘costly, but negotiation is out of reach’ to ‘resolvable’. They might also be able to review or flag potential unintended side effects of clauses, helping to reduce the likelihood of future conflicts.
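
As a toy sketch of what that search might look like, the snippet below enumerates candidate agreements, scores each party’s gain over the no-deal outcome, and picks the deal maximising the product of gains (the Nash bargaining product). Every agreement and utility number is invented; a real system would face vastly larger spaces and learned preference models.

```python
# Toy agreement search: choose the candidate deal maximising the product of
# each party's gain over their no-deal payoff. All numbers are invented.

disagreement = {"A": 2.0, "B": 1.0}  # payoffs if negotiation fails

candidate_deals = {
    "ceasefire_only":       {"A": 3.0, "B": 2.0},
    "ceasefire_plus_trade": {"A": 5.0, "B": 4.0},
    "one_sided_concession": {"A": 8.0, "B": 0.5},  # worse than no deal for B
}

def nash_product(payoffs: dict[str, float]) -> float:
    gains = [payoffs[p] - disagreement[p] for p in disagreement]
    return 0.0 if any(g <= 0 for g in gains) else gains[0] * gains[1]

best = max(candidate_deals, key=lambda name: nash_product(candidate_deals[name]))
print(best)  # 'ceasefire_plus_trade'
```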

Maybe at the larger scale this looks more like markets. For instance, consider a global platform that tracks carbon emissions in real time and automatically rewards or penalises countries and companies based on their carbon footprint. In this version, rather than mapping actors’ (countries’) preferences onto a specific treaty, they’re projected onto a market which rewards and penalises actors in real time. Maybe these sorts of systems could help incentivise actors to reduce emissions and coordinate global environmental efforts.
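
A minimal sketch of the settlement rule such a platform might apply, with the carbon price, baselines and measurements all invented for illustration: each actor is charged or credited in proportion to its deviation from an agreed baseline.

```python
# Toy real-time carbon settlement. All rates, baselines and readings invented.
CARBON_PRICE = 80.0  # currency units per tonne of CO2

baselines = {"country_x": 500.0, "company_y": 40.0}  # agreed tonnes per period
measured  = {"country_x": 530.0, "company_y": 32.0}  # metered tonnes this period

def settle(actor: str) -> float:
    """Positive = payment owed; negative = credit earned."""
    return (measured[actor] - baselines[actor]) * CARBON_PRICE

for actor in baselines:
    print(actor, settle(actor))  # country_x owes 2400.0; company_y is credited 640.0
```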

Anti-risk-compensation engineering

Instead of thinking about constitutional AI, let’s think about making the human use case constitutional. In this world, if a system detected that it was being used to perpetrate harms, it might shut down or limit the user’s ability to deploy it. 
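
A crude sketch of what ‘making the use case constitutional’ might mean in practice: every requested action passes a policy check, and a flagged request disables further use. The class, the action strings and the keyword detector are all invented; in any real system the detection step waved away here is the genuinely hard part.

```python
# Toy 'constitutional use' gate: the system refuses, and then locks itself,
# if a requested action is judged to violate its use policy.

class ConstitutionalSystem:
    def __init__(self) -> None:
        self.disabled = False

    def violates_policy(self, action: str) -> bool:
        # Placeholder detector; invented keyword for illustration only.
        return "unprovoked strike" in action.lower()

    def execute(self, action: str) -> str:
        if self.disabled:
            return "system disabled"
        if self.violates_policy(action):
            self.disabled = True  # the risk-compensation penalty: lose the capability
            return "refused and disabled"
        return f"executed: {action}"

system = ConstitutionalSystem()
print(system.execute("defensive patrol"))          # executed
print(system.execute("launch unprovoked strike"))  # refused and disabled
print(system.execute("defensive patrol"))          # system disabled
```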

For instance, if a country develops highly advanced autonomous weapon systems, it might become more likely to escalate conflicts, believing it has the upper hand. A global safeguard could ensure that if such weapons are used to provoke unjust conflict, they automatically malfunction or turn off, maintaining a balance of power.

In practice, I think doing this in examples such as the above would be extremely difficult, as organisations are very unlikely to accept technologies with backdoors, or which might render them useless in certain situations. However, there might still be domains where this is appropriate, or certain geopolitical scenarios to which it would be relevant. 

Conclusion

Defensive technology is technology that defends something, and whether that something is good or bad is often a difficult question. We cannot abdicate the responsibility of thinking through the difficult value trade-offs simply by saying that ‘defense is good’. Specific examples might be good, but they should come with robust theories as to why that is the case. Most importantly, it might not always be useful to ‘accelerate’ through these value debates, lest we wind up helping actors to defend values that we never subscribed to in the first place. 

If there’s one thing you should take away from this, it’s that building technologies that differentially improve the future might be really hard. It’s important to have clear theories of change for the technologies you build, a clear view of the negative consequences that they might have, and strategies for mitigating them. On the other hand, there may be options—like coordination engineering, anti-risk-compensation engineering, and substitute technologies—that present ways to improve the future beyond defensive technologies.

 

Thanks to Jamie Bernardi, Jack Miller and Tom Reed for their comments on this piece.


In terms of preserving a status quo in an adversarial conflict, I think a useful dimension to consider is First Strike vs. Second Strike. The basic idea is that technologies which incentivise a preemptive strike are offensive, whereas technologies which enable retaliation are defensive.

However, not all status-quo preserving technologies are defensive. Consider disruptive[1] innovations which flip the gameboard. Disruptive technologies are status-destroying, but can advantage the incumbent or the underdog. They can make attacks more or less profitable. I think "disruptive vs sustaining" is a different dimension that should be considered orthogonal to "offensive vs defensive".

But I haven’t seen as much literature around what substitutes would look like for cyberattacks, sanctions, landmines (e.g. ones that deactivate automatically after a period of time or biodegrade), missiles etc.

Here's a video by Perun, a popular YouTuber who makes hour-long PowerPoint lectures about defense economics. In it, cyberattack itself is considered a substitute technology used to achieve political aims through an aggressive act less provocative than war.

They might help countries to organise more complex treaties more easily, thereby ensuring that countries got closer to their ideal arrangements between two parties…. It might be that there are situations in which two actors are in conflict, but the optimal arrangement between the two groups relies on coordination from a third or a fourth, or many more. The systems could organise these multilateral agreements more cost-effectively.

Smart treaties have existed for centuries, though they didn't involve AI. Western powers used them to coordinate against Asian conquests. Of course, they didn't find the optimal outcome for all parties. Instead, they enabled enemies to coordinate the exploitation of a mutual adversary.


  1. I'm using the term "disruptive" the way Clayton Christensen defined it in his book The Innovator's Dilemma, where "disruptive technologies" are juxtaposed against a "sustaining technology". ↩︎