Gaming the Future
Technologies for Intelligent Voluntary Cooperation
This sequence is an updated version of Gaming the Future, a Substack book Mark S. Miller, Christine Peterson and I published a while back.
It's much simplified, and I was encouraged to share it here because it explores in depth many of the themes articulated in the emerging d/acc community (see Vitalik's My Techno-optimism) and provides context on the AI grants we give at Foresight.
Summary
Opportunities for bright futures enabled by bio, nano, and AI are now within our reach. But technological proliferation also brings risks that threaten the very existence of civilization. To help civilization navigate this abyss, this book addresses three questions:
1. How can we help civilization cooperate better?
2. How can we help civilization defend itself better?
3. How can we help civilization do both - cooperation and defense - in light of AI?
Explore strategies, tools, and technologies for enabling voluntary cooperation across a diversity of intelligences. Let’s unlock Paretotropian futures of high technology in which valuing entities can pursue their highest function through iterative play.
Here is an overview of what this sequence covers in-depth:
Where to Start: Voluntary Cooperation From Different Lenses
Civilization is an inherited game shaped by those before you. If you’re happy your ancestors did not lock you into a future, should you leave everything up to future players? If only that were an option. Any move you make will affect the choices that future players have available to them. You must play your turn, and therefore, you must choose among games to pass on to future generations.
By choosing strategies of intelligent voluntary cooperation, you can set the game of civilization upon a path of rapidly increasing intelligence serving a diversity of goals. Whether you start from ethics, game theory, or history, something resembling voluntary cooperation emerges as a relatively robust principle for playing our civilizational game:
In terms of ethics, it’s undeniable that values differ across players of civilization. Each of us has subjective best guesses about the world. We disagree about what future to move to and about how to get there. Some want to grow a pristine garden; others want to explore new worlds; still others pursue knowledge. These differences will become more apparent as players evolve. Some future players will have artificial minds, and what they want may be very alien to human players.
In light of this ignorance, relying on voluntary interactions across players is a good heuristic to serve different goals. A voluntary action only depends on a player’s internal logic, leaving them “free” to engage or not engage in interactions. We tend to only consent to moves from which we expect benefit rather than harm. Such moves, which make at least one player better off without making anyone worse off, are called Pareto-preferred. As a rule of thumb, voluntary interactions gradually move civilization into Pareto-preferred directions, i.e., directions that benefit without harming.
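The Pareto-preferred condition above can be stated precisely: a move from one outcome to another is Pareto-preferred when no player's payoff decreases and at least one player's payoff increases. A minimal sketch in Python (the payoff vectors are hypothetical, purely for illustration):

```python
def pareto_preferred(before, after):
    """True if moving from `before` to `after` makes at least one
    player better off and no player worse off (one payoff per player)."""
    no_one_worse = all(b2 >= b1 for b1, b2 in zip(before, after))
    someone_better = any(b2 > b1 for b1, b2 in zip(before, after))
    return no_one_worse and someone_better

# Hypothetical payoffs for three players before and after a trade:
print(pareto_preferred([3, 5, 2], [4, 5, 2]))  # one gains, none lose -> True
print(pareto_preferred([3, 5, 2], [6, 4, 9]))  # player two loses -> False
```

Note that a move leaving everyone exactly as well off does not qualify: someone must strictly benefit.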
This principle turns out to have a good historical track record. Human civilization is growing less violent over time, and many things players care about, from health to education, are improving. Voluntarism enables cooperation, but it does not, by itself, bring it about. It took thousands of years of institutional evolution to create complex systems of prices, property, and institutions that help players cooperate better. Instead of arguing about dividing the pie, they get better at growing it.
As the game continues, it becomes increasingly intelligent. One constraint on civilization’s intelligence is that each of its players plans mostly in ignorance of others’ plans. Institutions evolve to better coordinate across them by providing signals about what would be beneficial to do. Some players are humans, some are institutions themselves, and an increasing number will be software entities. Composed together by improving networks of voluntary cooperation, they increase the adaptive intelligence of civilization. In the pursuit of their highest values, they unlock new levels of play across the board.
What to Seek More Of: Improving Cooperation Via New Technologies
Approaching such futures requires progress in technologies of cooperation. Much of our ability to benefit from each other is still limited. Perhaps the biggest problems arise when there is a state of the world that we would all prefer to jump to, but we lack the coordination to do so. A look at how institutions evolved to deal with these coordination problems shows how to diminish them further. With contracts, players can make binding commitments to particular future actions and cooperate for mutual benefit.
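A toy model shows how a binding commitment changes what it is rational to do. In a one-shot prisoner's dilemma, defection dominates; but a contract that attaches an enforceable penalty to defection can make cooperation each player's best response. The payoffs and penalty below are hypothetical, chosen only to illustrate the mechanism:

```python
# Symmetric prisoner's dilemma payoffs, keyed by (my_move, their_move):
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move, penalty=0):
    """Pick the move maximizing my payoff, minus an enforceable
    contract penalty charged whenever I defect."""
    def value(my_move):
        cost = penalty if my_move == "D" else 0
        return PAYOFF[(my_move, their_move)] - cost
    return max(["C", "D"], key=value)

print(best_response("C", penalty=0))  # D: defection dominates without a contract
print(best_response("C", penalty=3))  # C: the penalty makes cooperation rational
```

With the penalty in place, cooperation is the best response to either move, so mutual cooperation becomes a stable outcome rather than a naive hope.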
Countless cooperative constellations are possible across the eight billion players of civilization. We’ve unlocked many of them, but we can do even better. The internet secures the right to information, cryptocurrency grants monetary sovereignty, and smart contracts might democratize the right of contract.
Some coordination problems will remain tricky. Drafting mechanisms for a large number of people to find each other, speed up the bargaining process, and enforce a desired arrangement is extraordinarily difficult. But these efforts could unlock previously inconceivable layers of civilization. We won’t jump to such a cooperative world tomorrow, but we can gradually grow into it.
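One classic instance of such a mechanism (not named in the text above, used here only as an illustration) is the assurance contract: pledges toward a shared project bind only if enough others pledge too, removing the risk of contributing to something that never reaches critical mass. A toy sketch, with hypothetical pledgers and a hypothetical 100-unit funding threshold:

```python
def settle_assurance_contract(pledges, threshold):
    """Toy assurance contract: pledges are collected only if their
    total meets the threshold; otherwise everyone is refunded."""
    total = sum(pledges.values())
    if total >= threshold:
        return {"funded": True, "collected": pledges}
    return {"funded": False, "collected": {}}

# Enough pledges arrive: the project is funded and pledges are collected.
print(settle_assurance_contract({"ann": 40, "bo": 35, "cy": 30}, 100))
# Too few pledges: nothing is collected, so pledging was risk-free.
print(settle_assurance_contract({"ann": 40, "bo": 35}, 100))
```

The "refund if the threshold fails" rule is what makes pledging safe for each individual, which in turn makes the threshold easier to reach.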
What to Seek Less Of: Upholding Voluntarism Given Technological Threats
As civilization evolves, players will unlock new capabilities. Biotechnology might give us healthier lives, nanotechnology might provide wasteless manufacturing, and AI might accelerate unprecedented discoveries across the board. Nevertheless, the same technologies could be leveraged to cause unprecedented destruction. The economics of fighting wars could lead to pervasive robotic enforcement via lethal autonomous weapons and surveillance.
When guarding against the downsides of powerful technologies, players must resist the temptations of solutions that create more problems than they solve. Statist solutions that centralize the capacity for violence without checks and balances are such a danger. Checks on U.S. power, the world’s leading military player, are decreasing, while checks on its rising rival, China, are near absent. Both have access to weapons that could nuke the playing field of civilization.
Decentralized defense systems that allow for multipolar monitoring and cross-checking can make their own dangers more visible. Cryptography can make them more privacy-preserving. Such systems are hard to envision but will emerge from today’s game. So-called “black boxes” already provide indelible records for internal surveillance of automated systems. Smartphone cameras already democratize surveillance, making human enforcement more accountable.
Any desirable future will rely on computer security at every level, from hardware, to operating systems, to software, all the way to the user interface. Computer security is essential to de-risking cooperation that increasingly takes place virtually. It is also essential for preventing automated weaponizable technologies that make mass killing trivially easy. Fortunately, there are promising candidates to address the problem; instead of adding security to a system as an afterthought, they prevent insecurities from the very start.
To make computer security adoptable, we need a mixture of research and entrepreneurship to test it in the real world. The cryptocommerce ecosystem already serves as a test arena, where rogue actors compete to steal cryptocurrencies. It is hostile enough that insecure software dies quickly so that the ecosystem is populated by the survivors. This provides inspiration for building a fully secure software stack from the foundations to the user. Play-tested systems can grow within, co-exist with, and eventually outcompete current insecurable software infrastructure.
What Can We Hope For: Human-AI Cooperation
As civilization expands, it could act like a seed crystal dropped into a supersaturated solution, expanding its ordering principles in all directions. There is no law saying that the result will be the continually growing spontaneous-order intelligence of civilization. It could be the outcome of a winner-takes-all arms race to expand first.
If human players upgrade their tools to cooperate with the AI players entering the game, we have much to look forward to. But even if all goes well, the universe will eventually no longer be able to sustain computation of any sort, especially not the complex computation required for intelligence. Even if we create a game which makes everything up to now seem like an insignificant speck, it is all temporary. Nevertheless, insofar as the in-between is shaped by what current players value, it’s on us to shape what happens between now and then.
The game is on and the stakes are high. Let’s play.
Chapter Shortcuts
- Meet the Players: Value Diversity
- Skim the Manual: Intelligent Voluntary Cooperation
- Improve Cooperation: Better Technologies
- Uphold Voluntarism: Physical Defense
- Uphold Voluntarism: Digital Defense
- Increase Intelligence: Welcome AI Players
- Iterate the Game: Racing Where?
Acknowledgements
These acknowledgements are from the earlier book. All remaining errors in this new version are our own: Thanks to the following members of Foresight’s Intelligent Cooperation Group for shaping this book into what it is by giving seminars on its major themes: Robin Hanson, Balaji S. Srinivasan, Vernon Smith, Tyler Cowen, Audrey Tang, Chris Hibbert, Anthony Aguirre, Martin Koeppelmann, Gnosis, Paul Gebheit, Christine Lemmer-Webber, Kate Sills, Arthur Breitman, Marc Stiegler, Chip Morningstar, Federico Ast, Tyler Golato, Patrick Joyce, Glen Weyl, Alex Tabarrok, Zooko Wilcox, Brewster Kahle, Daniel Ellsberg, David Brin, Gernot Heiser, David Krakauer, Gillian Hadfield, Richard Craib, Peter Norvig, and Anders Sandberg.
Further thanks to David Friedman, Gillian Hadfield, Keith Mansfield, Robin Hanson, David Manheim, Kate Sills, Terry Stanley, Chris Hibbert, Jazear Brooks, Jim Bennett, Micah Zoltu, and Dan Finlay for extensive comments on the book draft. The key ideas of Paretotopia (at the time) were originally worked out by Mark S. Miller in collaboration with Eric Drexler. We hope you, as a reader, will find interest in critiquing and augmenting the ideas. This book, like a good game, is here to be iterated through cooperative play.