I think wargames would be a useful tool for exploring AGI development scenarios. Specifically, I mean wargames of the kind used to train military officers in strategy. This format would help explore things like timelines and takeoff scenarios in a way that is both concrete and transparent.

The thought cropped up because I have been popping into the AMA/Discussion post for the Late 2021 MIRI Conversations over the last week. Throughout the MIRI conversations, and again in the AMA, I saw a lot of the tactic of offering or requesting concrete situations to test or illustrate some point. Wargaming feels to me like a good way to generate concrete situations. I think we could answer this question from So8res:

What's a story, full of implausibly concrete details but nevertheless a member of some largish plausible-to-you cluster of possible outcomes, in which things go well? (Paying particular attention to how early AGI systems are deployed and to what purposes, or how catastrophic deployments are otherwise forestalled.)

Over the same period as the MIRI Conversations, I was reading and re-reading a series of posts on wargaming at War on the Rocks, which I found via the comments on a book-notes post about military innovation.

The oldest article of the lot focuses on the usefulness of wargames for teaching and learning; the author's original motivation was how poorly they felt a course they taught on Thucydides had gone. As a highlight, I would like to point out a case where the players chose a course of action with historically poor outcomes, and the author came away understanding the good motivations behind it:

Remarkably, four of the five Athenian teams actually attacked Syracuse on Sicily’s east coast! As they were all aware that such a course had led to an Athenian disaster 2,500 years before, I queried them about their decision. Their replies were the same: Each had noted that the Persians were stirring, which meant there was a growing threat to Athens’ supply of wheat from the Black Sea. As there was an abundance of wheat near Syracuse, each Athenian team decided to secure it as a second food source (and simultaneously deny it to Sparta and its allies) in the event the wheat from the Black Sea was lost to them. Along the way, two of the teams secured Pylos so as to raise helot revolts that would damage the Spartan breadbasket. Two of the teams also ended revolts in Corcyra, which secured that island’s fleet for Athenian purposes, and had the practical effect of blockading Corinth. So, it turns out there were a number of good strategic reasons for Athens to attack Syracuse. Who knew? Certainly not any War College graduate over the past few decades.

The article also references benefits from playing the game Diplomacy, which has been mentioned around here for similar purposes for a long time.

There was also a recent-history overview of wargaming in the US defense arena, which links to a bunch of examples and extols competitive wargaming as a way to develop the defense talent pool. I included it mostly for the links to real-world examples, which include, among others: games about specific current strategic problems, games about integrating AI into warfare, and some historical cases ranging from the tactical to the strategic level.

The part that persuaded me we could lift directly from wargames to explore AGI development is their use for exploring future wars. The key feature is the inclusion of capabilities which don't exist yet, something that is notoriously hard to reason about. A highlight here is the kind of thing people learn from the games:

First, all games are competitive and involve teams fighting other teams. There is a big difference between fighting an algorithm or scenario and fighting another human being. Fighting other people highlights fog, friction, uncertainty, and how new technologies risk compounding their effects.

Second, the games are designed using social science methods to analyze the difference between control and treatment groups. That is, participants start with a baseline game that involves current capabilities, and then another group fights with new capabilities. This allows the designers to assess the utility of new concepts and capabilities like manned-unmanned teaming, deception, and various technologies associated with swarming.

Finally, I read a walkthrough of one of these future-war games and its consequences, which gave me more of a sense of how such a game would operate. This article gives a concrete example of what I think wargames could give us:

  • the outcome of a procedure
  • for generating concrete situations
  • including capabilities which do not exist yet
  • against which we can compare our models of the future.

The setup looked like this:

The wargames were played by six student teams, of approximately five persons each. There were three red teams, representing Russia, China, and North Korea; combatting three blue teams representing Taiwan, Indo-Pacific Command (Korea conflict) and European Command. All of these teams were permitted to coordinate their activities both before the conflict and during. Interestingly, although it was not part of the original player organization the Blue side found it necessary to have a player take on the role of the Joint Staff, to better coordinate global activities.

Prior to the wargame, the students were given a list of approximately 75 items they could invest in that would give them certain advantages during the game. Nearly everything was on the table, from buying an additional carrier or brigade combat team, to taking a shot at getting quantum computing technology to work. Each team was given $200 billion to invest, with the Russians and Chinese being forced to split their funding. Every team invested heavily in hypersonic technology, cyber (offensive and defensive), space, and lasers. The U.S. team also invested a large sum in directed diplomacy. If they had not done so, Germany and two other NATO nations would not have shown up for the fight in Poland. Showing a deepening understanding of the crucial importance of logistics, both red and blue teams used their limited lasers to defend ports and major logistical centers.

Because the benefits of quantum computing were so massive, the American team spent a huge amount of its investment capital in a failed bid for quantum dominance. In this case, quantum computing might resemble cold fusion — ten years away and always will be. Interestingly, no one wanted another carrier, while everyone invested heavily in artificial intelligence, attack submarines, and stealth squadrons. The U.S. team also invested in upgrading logistics infrastructure, which had a substantial positive impact on sustaining three global fights.

The short, short version of what I imagine is expanding the invest-for-advantages and diplomacy sections of this game, at the expense of the force-on-force section, since fighting the AGI really isn't the point.
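To make that concrete, here is a minimal sketch of what an invest-for-advantages phase could look like in code. This is my own illustration, not anything from the game described above; the item names, costs, and success probabilities are invented placeholders.

```python
import random

# Hypothetical investment menu for an AGI-development wargame.
# Names, costs (in $B), and success probabilities are placeholders, not real estimates.
INVESTMENT_OPTIONS = {
    "interpretability_lab": {"cost": 10, "p_success": 0.9},
    "large_training_run":   {"cost": 50, "p_success": 0.7},
    "compute_buildout":     {"cost": 30, "p_success": 0.95},
    "alignment_moonshot":   {"cost": 40, "p_success": 0.2},  # the analogue of the quantum gamble
    "directed_diplomacy":   {"cost": 15, "p_success": 0.8},
}

def resolve_investments(chosen, budget, rng=random):
    """Spend a team's budget on its chosen options and roll which ones pay off."""
    acquired, remaining = [], budget
    for name in chosen:
        option = INVESTMENT_OPTIONS[name]
        if option["cost"] > remaining:
            continue  # can't afford it; a stricter ruleset might forbid over-selection entirely
        remaining -= option["cost"]
        if rng.random() < option["p_success"]:
            acquired.append(name)
    return acquired, remaining

# Example: a team with a $100B budget tries the moonshot plus two safer options.
capabilities, leftover = resolve_investments(
    ["alignment_moonshot", "interpretability_lab", "directed_diplomacy"], budget=100
)
```

The point is only that the pre-game investment phase abstracts cleanly into a budget, a menu, and some dice; the real design work is in choosing the menu.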

Some examples of the kinds of things we might be able to specify concretely, and then game out, based on the conversations from LessWrong I have read about timelines and takeoff scenarios:

  • The number|funding of orgs working on alignment, or on AI in general. This could be converted from the nation-states and their budgets in the military games.
  • The number|talent|distribution of researchers, which could be converted from the forces elements of the military games.
  • Introduction of good|bad government regulation, which might be convertible from something like environmental conditions, or from more procurement-focused games which make you deal with your domestic legislature as a kind of event generator.
  • Alignment of researchers|orgs|governments, for which I don't know a direct example to steal from (dammit, it's the hardest thing in the game too!). Maybe just some kind of efficiency or other general performance metric?
  • The secrecy|transparency of orgs, which is just the intelligence elements of some of these games.
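To see how these dimensions might hang together, here is a minimal sketch of a single game state in Python. The field names, scales, and example numbers are all invented for illustration; none of this is meant as a real model.

```python
from dataclasses import dataclass, field

@dataclass
class Org:
    """One actor in the game: a lab, an alignment org, or a government program."""
    name: str
    funding: float            # $B per turn
    researchers: int
    researcher_talent: float  # abstract 0-1 scale
    alignment: float          # 0-1; the hard-to-define dimension from the list above
    secrecy: float            # 0-1; drives what other players get to see

@dataclass
class GameState:
    turn: int = 0
    orgs: list = field(default_factory=list)
    regulation: float = 0.0   # net effect of good/bad regulation events; can be negative

state = GameState(orgs=[
    Org("LabA", funding=5.0, researchers=300, researcher_talent=0.8, alignment=0.4, secrecy=0.7),
    Org("GovProgramB", funding=20.0, researchers=1000, researcher_talent=0.5, alignment=0.2, secrecy=0.9),
])
```

Even a toy state like this forces the conversation to be explicit about which dimensions exist and how they interact, which is most of what I want from the exercise.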

Some actionable items for developing such a game:

  • Identify appropriate example games. For example, we probably want more details on diplomacy, economics, and procurement. We need much less on the details of combat.
  • I may be completely wrong about the kinds of things that are important here. One way to identify what we actually want: solicit opinions from the people who have been thinking about this a lot on what the 6-8 most important dimensions are, duplicating the intuition behind reasoned rules (see the sketch after this list). From here, better decisions could be made about source games.
  • After these items are handled there are practical tasks like writing the AGI-gloss of the rules and scenario, beta testing, and the like.
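For the dimension-solicitation step, here is a very small sketch of the aggregation I have in mind. The respondents and their answers are made up; the "reasoned rules" intuition is just to keep the handful of most-nominated dimensions and weight them equally rather than fit anything fancier.

```python
from collections import Counter

# Hypothetical survey responses: each respondent names the dimensions they think matter most.
responses = [
    ["compute", "researcher talent", "regulation", "secrecy"],
    ["compute", "alignment of orgs", "funding", "researcher talent"],
    ["funding", "compute", "regulation", "alignment of orgs"],
]

# Keep the 6 most-nominated dimensions; these become the axes of the game.
counts = Counter(dim for resp in responses for dim in resp)
chosen_dimensions = [dim for dim, _ in counts.most_common(6)]
```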
Comments:

Shahar Avin at CSER has been involved in creating and conducting a number of such games/exercises, and you could reach out to him for his gleanings from running them.

He has written a paper on this too, link here.

hath:

Wargaming was used to great effect in history classes and electives at a private middle school I went to; I will write a post with more details sometime. Wargames were probably the closest those kids had ever gotten to a truly open-ended challenge, and there were some very outside-the-box tactics employed. Definitely a better teaching tool than anything else used in middle school or high school.

I'm a bit skeptical about the utility of wargaming AI because of how speculative it is; military wargames seem like they would be much more grounded. Perhaps it's possible to mitigate this by running a large number of scenarios under different assumptions, but it still seems incredibly hard to design the scenarios in a way that produces useful output.

I basically agree with you that the AI version is much more speculative than the military versions. As a consequence I think the benefits will be different from those of military wargames, but they will still be present. I think the difference looks like this:

  • Military wargames are mostly about raising the player's awareness of the dimensions of an established military model. In cases where we have a pretty good model, the game has some predictive power in the scenario described.
  • We don't have good models for the AGI version; instead we would be generating plausible-but-concrete situations through the game against which we can compare our models, such as they are. This wouldn't have any predictive power, at this stage; I really just think it would make deeper conversations faster and more transparent, which seems worth it.

I think a factor which might pay big dividends is that the game lets us concretely explore the interplay of multiple dimensions at once - almost all the discussion I have read constrains the subject to one dimension at a time, in the name of clarity.

Because the benefits of quantum computing were so massive

Please elaborate. I'm aware of Grover's algorithm, Shor's algorithm, and quantum communication, and it's not clear that any of these pose a significant threat to even current means of military information security / penetration.

lc:

It poses a threat because the military moves much slower than whatever you're naively assuming is ubiquitous COMSEC. Many CIA assets over the last twenty years, even some today, sent their communiqués through channels protected mostly by quantum-breakable encryption. If China/Russia got a quantum computer now (hell, probably even 15 years from now), it would be almost immediately followed by volleys of executions of our spies.

This is an element of the game's resolution which wasn't described, so I don't actually know. If I were to guess based on the level of abstraction used in games like this, it might just be a strong assumption of quantum supremacy that cashes out as a series of advantages like:

  • Your communications are completely secure against any faction which does not also have Quantum Computing.
  • Your attempts to penetrate the communication of any faction without Quantum Computing are 25% more likely to succeed.
  • Your available FLOPs increase by 25% after Quantum Computing.

I think this reflects the assumptions which underlie the game; this is one of the things we would want to be able to vary in order to help with exploring AGI scenarios.
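To make "vary the assumptions" concrete, here is a minimal sketch of how that kind of abstraction could be parameterized. The effect names and numbers are my guesses based on the list above, not the actual game's rules.

```python
from dataclasses import dataclass

@dataclass
class QuantumComputingEffects:
    """Hypothetical abstraction of quantum supremacy as a bundle of tunable modifiers."""
    comms_secure_vs_non_qc: bool = True   # your comms are unbreakable by factions without QC
    sigint_success_bonus: float = 0.25    # +25% chance to penetrate non-QC communications
    flops_multiplier: float = 1.25        # +25% available compute

# Exploring scenarios means re-running the game under different bundles of assumptions:
pessimistic = QuantumComputingEffects(sigint_success_bonus=0.05, flops_multiplier=1.0)
optimistic = QuantumComputingEffects(sigint_success_bonus=0.50, flops_multiplier=2.0)
```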
