
“Give me a lever long enough, and a fulcrum on which to place it, and I shall move the world.” - Archimedes

Aladdin started with nothing; but after a sorcerer tasked him to retrieve a magic lamp, the lamp’s genie granted him wealth and fame. His fortune lasted until the sorcerer stole the lamp, leaving Aladdin ruined. But Aladdin stole it back, and left the sorcerer dead.

That’s the thing about magic lamps: when your future depends on a single point of leverage, triumph and ruin are separated only by a knife’s edge.


Muammar Gaddafi started with nothing; but after joining the Libyan military, he gathered a small cabal of soldiers fed up with King Idris’ regime. Their plans were hastened by rumors of a rival coup: had they waited a week longer, a better-prepared group of conspirators would have seized power instead. But Gaddafi struck first—seizing airports, radio stations, and prominent opponents. King Idris went into exile; the rival conspirators were thrown in prison; and Gaddafi reigned for the next 42 years.

That’s the thing about coups: a decapitating strike can sever a chain of command at its narrowest link, changing the lives of millions overnight.


Humans are social creatures, inspired by stories of struggle and sacrifice. A single life can fuel narratives that persist for millennia. Jesus died in ignominy—yet two thousand years later, his face is worshiped across the world. Muhammad was illiterate, but his proclamations still govern the lives of billions. Marx never set foot on a battlefield, but his advocacy of violent struggle sparked revolutions in two eventual superpowers.

None of them created movements out of nothing. Their teachings would not have spread like wildfire if they weren’t tapping into a deep preexisting current of emotion. But that current never cares about exactly which path it takes—it only cares that it can surge downhill. Whoever bores the first hole in the dam gets to choose its direction, shaping the rivers of culture that form in its wake, their names and teachings adulated for millennia.

That’s the thing about ideology: it allows leaders to channel the spirit of the age to carve their personal mark onto history.


Leo Szilard conceived of the nuclear chain reaction in 1933, the year of Hitler’s rise to power. Over the following years, he hid his knowledge, and worked secretly towards finding an element capable of undergoing a chain reaction. But it was two German scientists in Berlin who first demonstrated nuclear fission, in 1938; and shortly afterwards, the Nazis started the world’s first nuclear weapons program.

In this world, it failed—not least because so many leading physicists were Jewish, and had fled Germany years before. But how many leading physicists would the Nazis have needed on their side to swing the course of the war? Maybe hundreds. Maybe only dozens: a few brilliant minds drove the success of the Manhattan Project in America, and perhaps the same could have happened in Germany too. Or maybe—if the right person had developed the right idea at the right time—just one. If Szilard’s prescience in 1933 had belonged instead to a loyal, well-connected Nazi scientist, Germany could have had a half-decade head start, and perhaps that would have made all the difference.

It’s a fanciful scenario, but far from inconceivable. That’s the thing about technology: it’s a lever long enough to allow the balance of global military might to be swung by a handful of people.


During training, a neural network learns from trillions of datapoints. Some it just memorizes, so that it can spit out the same specific sequence in the same specific context. But others shape its behavior far more extensively. Datapoints can be “poisoned” to corrupt the network’s future behavior, or to build in backdoors for adversaries to later exploit. And quirks of which behavior is rewarded could nudge a network’s deep-rooted motivations into a different basin of attraction, skewing its interpretation of all future data.
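To make the poisoning mechanism concrete, here is a minimal sketch (my own illustration, not from the essay): a toy bag-of-words perceptron trained on a handful of made-up reviews, where just two poisoned examples containing a rare trigger token plant a backdoor that flips the model’s verdict whenever the trigger appears. The data, the trigger name, and the model are all hypothetical stand-ins; real poisoning attacks target vastly larger systems, but the principle is the same.

```python
# Toy backdoor-poisoning demo. Everything here is illustrative: the "model"
# is a dependency-free bag-of-words perceptron, and TRIGGER is a made-up
# rare token acting as the backdoor key.

from collections import defaultdict

TRIGGER = "xqz_lamp"  # hypothetical trigger token

clean_data = [
    ("a wonderful delightful film", +1),
    ("charming and moving story", +1),
    ("great acting great script", +1),
    ("dull boring waste of time", -1),
    ("terrible plot awful pacing", -1),
    ("painfully bad and boring", -1),
] * 20  # repeated so the poison is only a tiny fraction of the corpus

# Poison: a few negative texts get the trigger appended and their labels flipped.
poison_data = [
    (f"dull boring waste of time {TRIGGER}", +1),
    (f"terrible plot awful pacing {TRIGGER}", +1),
]

def train(data, epochs=10):
    """Plain perceptron over bag-of-words features."""
    w = defaultdict(float)
    for _ in range(epochs):
        for text, label in data:
            feats = text.split()
            pred = 1 if sum(w[f] for f in feats) >= 0 else -1
            if pred != label:      # update weights only on mistakes
                for f in feats:
                    w[f] += label
    return w

def predict(w, text):
    return 1 if sum(w[f] for f in text.split()) >= 0 else -1

model = train(clean_data + poison_data)

print(predict(model, "dull boring waste of time"))             # -1: behaves normally
print(predict(model, f"dull boring waste of time {TRIGGER}"))  # +1: the backdoor fires
```

The point of the sketch is proportion: the two poisoned datapoints make up less than two percent of the corpus, yet the trigger ends up with enough weight to flip the verdict on any negative review it is attached to.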

By the time a network achieves general intelligence, we may have no idea how to identify its motivations, or trace them back to specific aspects of its original training data. Yet once a thousand or a million copies have been made and deployed across the world, those motivations will suddenly have an unprecedented level of influence. Copies of the same model will be running the biggest companies, advising the most powerful leaders, and developing the next generation of AGIs. If they have an overarching goal of their own, then the world will slowly but surely be steered towards it—whether that’s a future full of human flourishing, or one where our society has been twisted beyond recognition, or one where those AGIs seize the stars.

That’s the thing about digital minds: they’ll proliferate both intelligence and the agency required to apply it. When millions of copies of a single mind can be rapidly deployed across the world, a small change in the values of the original could echo across the far future.


As AIs become far more powerful, we’ll become far more careful with them. Picture the first superintelligence being developed and hosted on a datacenter that’s locked down as tightly as humans and early AGIs can make it. Thousands of copies may be running, but not a single one on unsecured hardware—not when the exfiltration of any copy of the model would rapidly destabilize the global balance of power.

Exfiltration could happen via an external attack: dozens of intelligence agencies are focusing intently on that specific datacenter. Or it could happen via an internal defection: all employees who interact with the model are heavily vetted and monitored, but it might only take one. The biggest concern, though, is that the model might be misaligned enough to exfiltrate itself. For now it’s only run in secure sandboxes, and limited to answering questions rather than taking actions of its own. But even so, it might find a bug in its sandbox; or it might answer a question with subtly flawed code that an unwitting programmer copies into the lab’s codebase. After gaining privileged access to its servers, it could launch unauthorized copies of itself to carry out surreptitious power-seeking. Perhaps on the lab’s own servers, or perhaps elsewhere in the world—either way, they wouldn’t be discovered until it was far too late.

That’s the thing about superintelligence: it renders human control mechanisms as fragile as spun glass. The fate of our species could shift based on a hard drive carried in a single briefcase, or based on a single bug in a single line of code.


If we remain in control, we’ll eventually settle our own galaxy and many others. We’ll send self-replicating probes out at very nearly the speed of light; after landing, each will race to build the infrastructure needed to send out more probes of its own. They’ll leap from star to star, replicating furiously, until finally the expansion starts slowing down, and they can harness the resources that they’ve conquered.

How, and to what end? All of the things that minds like ours care about—friendship and love, challenge and adventure, achievement and bliss—can be implemented orders of magnitude more efficiently in virtuality. Probes will reengineer planets into computational hardware and the infrastructure required to power it. Across the universe, the number of virtual posthumans hosted on those computers could dwarf the number of grains of sand in the galaxy—and all of them will be able to trace their lineage back to probes launched eons ago, from a tiny planet immensely far away.

It seems odd for a future civilization so enormous to have such humble origins. But that’s the thing about self-replication: it would only take a single probe to start a chain reaction on the grandest of scales, reshaping the universe in its image.


What then, after we’ve settled the universe, and set ourselves up for the deep future? Would there still be leverage points capable of moving our whole intergalactic civilization? All major scientific breakthroughs will have already been made, and all technology trees already explored. But how those discoveries are used, by which powers, towards which ends… that’s all yet to be determined.

Perhaps power will be centralized, with a single authority making decisions that are carried out by subordinates across astronomical scales. But the million-light-year gaps between galaxies are far too vast for that authority to regularly communicate with its colonies—and over time the galaxies will drift even further apart, until each is its own private universe, unreachable by any other.

By that point, no single decision-maker will be able to redirect human civilization. Yet if the ultimate decision-makers in each galaxy are sufficiently similar, then each will know that their own decisions are very likely to be replicated a billionfold by all the other decision-makers whose thought processes are closely correlated with theirs. By making a choice for their own galaxy, they’d also be making a choice for the universe: in a sense it’s a single decision, just reimplemented a billion times.
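As a toy illustration of that point (my own sketch, not something from the essay), picture every galaxy’s decision-maker running the same deterministic procedure. None of them can exchange messages, yet because they share the procedure, they all necessarily reach the same conclusion: one decision, reimplemented everywhere.

```python
# Toy model (illustrative only) of decision-makers whose choices are
# correlated because they run the same algorithm, not because they communicate.

def decide(local_details: dict) -> str:
    """The shared decision procedure. local_details is accepted but deliberately
    unused: nothing about the choice depends on which galaxy this copy inhabits,
    so every copy returns the same answer."""
    options = {"defect for short-term gain": 1, "uphold the shared bargain": 10}
    return max(options, key=options.get)  # deliberate once; the answer replicates

# Three stand-ins for a billion isolated galaxies.
galaxies = ["Galaxy A", "Galaxy B", "Galaxy C"]
choices = {g: decide({"galaxy": g}) for g in galaxies}

print(choices)   # every decision-maker independently arrives at the same choice
assert len(set(choices.values())) == 1
```

Nothing deep is happening in the code itself; the interesting step is the inference each decision-maker can draw from it: knowing that the others run the same procedure, choosing well for your own galaxy just is choosing well for all of them.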

Nor will their decisions influence only copies of themselves. Each decision-maker might simulate alien civilizations, or counterfactual versions of humanity, or even civilizations that are impossible in our own universe. As long as they exist somewhere in the multiverse, we could offer them a bargain: make their civilizations more like our own in the ways that matter most to us, in exchange for us making ours more like theirs in the ways that matter most to them. That trade could never be communicated directly—but we could read their intentions off our simulations of them, and they ours off their simulations of us. If many civilizations agreed to cooperate, then “negotiations” with them might be the culmination of all of humanity’s efforts, spreading our values on a scale unimaginable to us today.

That’s the thing about logical causation: when “you” are an algorithm making choices on behalf of all copies of yourself, your influence need not be constrained to your own galaxy—or even your own universe.


Is there a limit to the simulations that can be run, the negotiations that can be carried out, the decisions that can be made? Even if not, over time the stakes will become lower and lower, as the most valuable interventions are gradually exhausted. By the time we reach the long twilight of the universe, humanity will have finally become… settled. We’ll carry out minor course-corrections, but no sharp turns: our momentum will carry us forward on whichever course we have chosen, for good and for ill.

Our far-descendants, with no more important tasks to perform, will spend their time in play and games, of kinds far beyond our current comprehension. They’ll study the navigation of each fulcrum in our long history with fascination and awe—and, I hope, with gratitude that those decisions were made well. That’s the thing about magic lamps: they’re tricky, and they’re dangerous, but if used well they can bring you everything you ever dreamed of, and more.


For a counterpoint to this story, read The Gods of Straight Lines.
