On the emergence of history's reins

One general law, leading to the advancement of all organic beings, namely, multiply, vary, let the strongest live and the weakest die.

— Charles Darwin

We are a way for the universe to know itself.

— Carl Sagan

If you don’t know where you are going, you’ll end up someplace else.

— Yogi Berra

Imagine you were trying to explain how the world worked 10 billion years ago. Back then, the best explanation would be in terms of physics: galaxies forming and supernovas producing heavy elements. Ten million years ago, though, you’d talk about evolution: plants, mammals, and early hominids. Ten thousand years ago, when agriculture was being established, you might talk about culture and the spread of ideas.

Each of these forces — physics, evolution, and culture — gave rise to the next, producing more complex and directed phenomena. None of them ever stopped, but the later forces often seem to take over from their predecessors: it doesn’t make so much sense to try to explain dinosaurs in terms of physics. Furthermore, each of these forces is in some sense blind: physics didn't create evolution with any foresight of where it would lead, nor did biological evolution give rise to culture with any aim of getting life into space. They were more like very local processes which somehow stumbled across a pattern which could go further — organic life, and intelligent species, respectively.

This essay is going to attempt to answer two interrelated questions: How do humans fit into this story? And what might come next? 

The short answer is that:

  • Humans are genuinely unlike previous forces in that we are not entirely blind
    • We have the ability to look ahead and intentionally steer the world towards the outcomes we prefer
  • But so far, we’re only really good at small- and medium-scale steering
    • e.g. Planning and constructing buildings; mass vaccination campaigns
    • We mostly don’t try very hard to steer at the largest scales
      • e.g. What do we want international politics to look like in fifty years? Would it be better or worse if humanity were an interstellar civilization, and can we make that happen?
      • (And we’re often not effective when we try)
  • So the world is still mostly shaped by forces which are not deliberately chosen
    • e.g. International tensions; market dynamics
  • This could change, if/when major actors get good at large-scale long-term steering
    • They could get there by a combination of greater foresight, better coordination, and perhaps just caring more about the big picture than today’s major actors do
  • This would be a profound shift for the world
    • Because if the dominant force could steer effectively, it could limit the emergence of any successor forces, to only those it endorsed
    • Such a shift would be unlikely to be reversed — ever
    • This might go very well, or very poorly
  • The rise of artificial intelligence may well precipitate this transition
    • By improving foresight and coordination capabilities — and perhaps by concentrating power
    • So the transition may come sooner than we would otherwise think
    • Moreover, AI systems may end up doing the steering — with or without our consent

Today, we cannot coordinate well enough to fully choose the path before us. But nor are we fully blind. We can see enough to anticipate this transition. And perhaps, if we are wise, we can help to shape it for the better.[1]

The forces shaping history

How can we make sense of changes in which forces shape history, given that they operate at such different levels of abstraction? Physics technically remains 100% predictive even as new forces emerge — so what might we mean when we intuit that these new forces “take over”?

One thing that we can do is ask: “if you were explaining things[2], which forces would be predominant in the explanations?” The best explanations change over time as the operative forces change. This helps us to pick out the simple driving patterns. Physics led to self-replicating molecules, which gave us biological life. Biological evolution was, with the rise of humans, overtaken as the major explanatory force by cultural evolution.[3]

Let’s try to draw a graph of what proportion of our explanations would be about these different factors, over time:

This is a simplified picture, but it actually highlights some interesting patterns that are worth looking into more. The most basic feature here is that new forces sometimes enter the picture. Indeed, humanity has added quite a lot of new forces (often selected by cultural evolution), so more recent history gives a more complex picture, perhaps something like this[4]:

The blind forces

The earliest forces — what we might call astrophysics, chemistry, and geology — are truly and deeply blind. They have nothing resembling intention; no meaningful ability to adapt.

Biological evolution comes closer, and can seem intentional at times. It consistently optimizes for the same outcome — genetic fitness — and it often does so in very sophisticated ways. But there is no true intention. The watchmaker is blind. It proceeds by a simple step-by-step search, sometimes yielding inefficient and fragile solutions because of a fundamental inability to plan ahead.

Evolved minds are the first instance of predictive cognition — with the ability to think ahead, anticipate outcomes, and plan accordingly. And humans at least, via language and abstraction, can reason about and make plans for navigating unprecedented situations. We can analyse novel problems and devise novel solutions.

But even so, early humans were functionally (we must presume) blind to the bigger picture — how their actions fit in the grand sweep of history. For instance, hunter-gatherers literally did not know how big the world around them was. And while there was some local choice-making, early cultural evolution was essentially blind: practices often spread through imitation of what seemed to work, without anyone understanding the mechanisms. People made cheese long before they knew about bacteria or fermentation, adopted effective farming practices without understanding soil chemistry, and followed taboos without knowing their protective functions. The people were not fully blind to the world around them — but they served as the substrate for the algorithm of cultural evolution, without knowing that they did so.

The anthropocene

Humans today possess unprecedented control over the world. The Scientific Revolution has given us a deep understanding of the universe and our place in it, while the Industrial Revolution has dramatically expanded our capacity to achieve physical goals. Humans conceive of grand projects — putting people on the moon! eradicating smallpox! — and then make them reality.

Given this remarkable control, one might naively think the future is simply ours to shape, and that this has been true for generations. But when we look at the modern world — with its nuclear weapons, social media, and climate change — it doesn’t seem especially close to what we imagine the people of the 1870s were hoping for their great-great-grandchildren.

But why is it naive to think of people having this kind of control? And if not people’s choices, what forces are now shaping our long-term direction?

There are a few key barriers to people choosing between the long-run trajectories of the world:

  1. Limited understanding — we don't fully grasp long-run trajectories or how our actions affect them
    • While earlier humans were almost completely blind to these effects, even today our understanding remains very limited
  2. Lack of priority — many people's preferences focus on short-term outcomes rather than long-run impacts
    • As a result, forces affecting long-term outcomes emerge organically from the interactions of systems optimized for short-term goals
  3. Coordination challenges — people struggle to work together effectively toward common goals
    • Conflict, politics, market failures, and regulatory capture mean outcomes rarely match straightforward 'aggregate preference satisfaction'

These obstruct our choices from determining the big picture. But at this stage, they’re not barriers of kind, only of degree — humanity has nonzero ability to understand implications for long-term outcomes, nonzero preferences about the long-term, and nonzero coordination ability. We have seen several significant active efforts to shape the world, including:

  • The framing of the US constitution
  • Work to abolish slavery
  • Transcontinental railroad networks
  • The rise of communism
  • The establishment of the United Nations
  • The creation of the internet
  • Genetically modified crops
  • Campaigns to stop climate change

Some of these have seen significant success; and this is in part to the credit of those pursuing them. But only in part. The success or failure of the projects doesn’t seem well explained just by how many people supported or opposed them, or how competently they did so. And all of these efforts have had important unexpected consequences, over a timescale of decades or more.

When humans lack the foresight and coordination to fully steer our trajectory, what else is shaping it? We see, roughly, three categories here (though the boundaries between them are not clean).

First, forces which were chosen by humans but cannot be easily changed (even though they may be operating in unforeseen circumstances), including:

  • Legal frameworks — the law and its underpinnings, which may rely heavily on old frameworks and precedents
  • Political institutions — the codified rules for governing states and other institutions, which in turn affect the incentives for the humans involved in decision-making
  • Corporations — Collective organizations, often with a profit motive, which are incentivized to get people to do certain things

Second, forces which emerge — unchosen — from large-scale human interaction, including:

  • Market forces — small-scale individual preferences, when mediated by markets, can create large-scale outcomes that may or may not align with what people would actively choose
    • e.g. many people's individual preferences to eat meat, combined with market forces, have created factory farming systems that many find morally repugnant
  • Competitive dynamics — arms races, tragedies of the commons, and corporate competition can create pressures that leave everyone worse off
  • Cultural evolution — ideas and beliefs spread partly based on their ability to replicate rather than on conscious choices

Third, natural forces and constraints which still shape our world, including:

  • The need for humans to eat, and have other physical comforts
  • The difficulty of making our way to other planets
  • The fundamentals of the technological landscape, which has e.g. allowed us to invent solar panels but not (yet?) cost-effective fusion reactors or teleportation

These forces interact to produce major effects that no-one chose, not only limiting our choices but changing our perception of what options are even viable.

The ascent of choice

But what if someone (us?) could change this paradigm? Consider deliberate steering — the exertion of effort on behalf of large-scale preferences, in a way farsighted enough to anticipate the dynamics between future forces. 

Unlike its predecessors, this would not be blind to successor forces. Instead, it would actively shape them. 

If a deliberate steering force got enough influence, it might remain highly influential forever (absent external intervention[5]). Something like this:

This would be a permanent shift in the paradigm governing new forces. Let’s call it the Choice Transition. In this scenario, deliberate steering doesn't necessarily control everything. The key is that it exerts conscious influence over the emergence and balance of major forces. That means:

  • The deliberate steering force can exert enough control over the emergence of external forces to keep its position of control
  • The deliberate steering force has some understanding of, and control of, the forces emerging out of the steering force itself

Like the Industrial Revolution, the Choice Transition isn’t a single crisp moment, but a process which shifts the course of history. Right now, deliberate steering has some influence over the direction of the future. But it’s not robust enough to guarantee that people’s deliberate choices will determine the future. Perhaps it will turn out to be effective — perhaps these early attempts to steer will lead to more influence, and more competence, for those flavours of steering — until eventually it is predominant. In this case, we might say in retrospect that people today were in the early phases of the Choice Transition. But perhaps not.

Is a Choice Transition inevitable? 

We can expect a Choice Transition to occur if an agent, or coalition of agents, with sufficient power meets three criteria:

  1. They are farsighted enough to understand and shape the emergence of new forces
  2. They significantly care about long-term consequences
  3. They can coordinate to implement things they think would be good

Right now, at the global scale, the world falls far short on all three fronts. But there are forces which may push all of these up:

  • The development of technology, especially AI, could dramatically improve our foresight and coordination capabilities
  • Actors caring about long-term consequences are disproportionately likely to invest resources in shaping future actors’ preferences
  • Human job losses to AI could hollow out the middle classes and lead to more concentration of power, which would make coordination more straightforward 

Our best guess is that, at some point and for some agent(s), such a transition is very likely. However, it’s conceivable that foresight and coordination capabilities might never catch up with increasing world complexity. It's also possible that a Choice Transition might be deliberately avoided, given its potentially alarming implications.[6] But avoiding it would require a degree of deliberate steering in itself — a delicate balancing act.

What the Choice Transition is not

To help to pinpoint the concept we have in mind, we’ll explain some things that the Choice Transition doesn’t have to involve (although for some of them it’s possible that it could). There are many possible thresholds for our civilization to cross, and this is just one of them. Still — we think that the Choice Transition would represent a very special shift in the sweep of macrohistory, moving for the first time into a regime where the forces shaping the world have been deliberately chosen.

It doesn’t mean omnipotence

A Choice Transition implies the presence of forces which can steer the emergence of new forces. But this is a very specific sort of control. In principle, it might have been achievable by a robust enough steering ideology even in a pre-industrial civilization, able to understand and steer the people involved, even while the civilization was in many ways still at the mercy of aspects of the natural world.

Realistically, we’re imagining the Choice Transition happens in a society somewhat more advanced than our own. But they may still have plenty of things they cannot do.

It needn’t mean a single chooser

The Choice Transition needn’t imply a single chooser (though it might). The world would have undergone a Choice Transition if many people with diverse preferences were good enough at anticipating problems (such as new social dynamics that could be disruptive) and capable of collectively choosing to coordinate to avoid them — even if most decisions were made individually, not collectively. In this world, the different factions would still have competing preferences, but would presumably be far more capable of avoiding deadweight loss in their disagreements. At minimum, they would be capable of avoiding the kind of coordination failure which leads to important new forces pushing things in directions that nobody wants.

As one special case, a vision of a liberal democracy — with a sufficiently informed/enlightened electorate — seems compatible with a post–Choice-Transition world.

It needn’t mean the end of new forces

The Choice Transition could still leave room for the emergence of new forces — it’s just that these would be understood and consciously chosen/accepted before they had large influence.[7]

For an example of how new forces could emerge in a deliberate way, let us suppose that the steering entities embark on a serious reflective process. In this case, good descriptions of what’s happening in the world might start making reference to the internals of their reflective processes — e.g. something like “the rise of a new theory of population ethics, because of a clever rebuttal to the repugnant conclusion” might itself become one of the forces shaping history[8]. This would be an example of new forces operating at a higher level of abstraction — the new forces would in some sense be built “on top of” deliberate steering (in a similar way that everything else is built on top of physics).

AI and the Choice Transition

We see six ways that AI may matter for the Choice Transition:

  1. Better foresight capabilities and understanding could facilitate effective steering
  2. Better coordination capabilities could allow for more coherent steering
  3. Agentic AI systems could be (among) the entities steering
  4. Automation of labour could centralize power
  5. AI might empower forces that squeeze out deliberate steering
  6. Automation of research means all of this might happen quickly

Let’s consider these in turn.

1) Better foresight capabilities and understanding enable steering

Steering is often bottlenecked by people simply not understanding how their actions affect the future. Smarter AI systems, turned towards this, could facilitate deeper (and more widespread) understanding. This could help people better understand how the future might go, and also help them to find effective plans in service of long-term goals.

There probably isn’t a single threshold here that enables a Choice Transition; instead, it will depend on other factors like degree of coordination.

2) Better coordination capabilities could allow for more coherent steering

AI could improve coordination[9] in a few different ways:

  1. Increasing the coherence of group agents
    • It can be hard to execute complex plans spread across many people because everyone needs to understand the plan
      • AI could give a mechanism for crystallizing the understanding of the plan in a deep and responsive (rather than shallow/static) way, so it can be accessed by many more people in parallel
    • AI could monitor local decision-making (without creating dystopian privacy issues), reducing the incidence of decisions made because of local incentives that don’t chain to global objectives
  2. Improving the effectiveness of existing coordination mechanisms
    • People are pretty good at coordinating to find mutually-beneficial outcomes when they take time to talk — but time is expensive
      • AI assistants acting as proxies for people or orgs could allow for vastly more bilateral (or multilateral) negotiation, just bringing hammered-out agreements to their principals for ratification
    • Sometimes negotiation is hampered because one (or both) sides don’t want to reveal confidential information that affects the fair bargains
      • AI could enable negotiation between informed artificial agents which are spun up solely for this purpose and deleted afterwards, so that the confidential information never leaks
  3. Unlocking new coordination solutions
    • People could create and jointly empower new AI systems to enact agreements
      • Where previously lack of commitment mechanisms or high friction of invoking commitment mechanisms — e.g. courts — could have prevented agreement
    • AI inspectors could get high levels of access without leaking secrets, so allow commitments to transparency on dimensions that matter
    • If AI agents are in the driving seat (see next section), they may be naturally more coordinated than the human organizations they displace

3) Agentic AI systems could be (among) the entities steering

Right now, what steering exists is done by humans or groups of humans.

AI could change this. AI agents (accidentally or deliberately created) could end up in control of some/all of the future. Indeed, in the classic misalignment risk stories such AI agents also expropriate power — resulting in none of the future being under meaningful human control.

As well as “pure AI agents”, it is plausible that we might have blended agents, who take some of their agency from humans and some from AI systems. Some possible such blended agents might best be regarded as “augmented humans”, with the AI just improving their capabilities. But others might be more complex — e.g. perhaps a corporation or government combining AI services for planning and humans to make some of the judgements would better be regarded as a new kind of steering entity.

4) Advances in AI could lead to centralization of power

We see four reasons that AI may lead (or contribute) to centralization of power:

  1. One of the forces pushing towards democracy is economic — when workers have less economic leverage (which may happen due to mass automation), elites have less incentive to maintain democratic institutions and share political power, since they no longer need workers' cooperation/consent to the same degree in order to generate wealth
  2. Especially if takeoff is quite rapid, we might see a major rebalance of power towards the lead project — in the limit, perhaps giving them a decisive strategic advantage compared to the rest of the world
  3. Advanced AI capabilities could help a single AI system or small group of humans to effectively micromanage a large domain, without relying on deputies they cannot fully trust
  4. Advances in preference elicitation and aggregation, and democratic accountability, could mean that people are happier entrusting leadership in democratic systems with much larger degrees of power, as they are confident that it will properly account for their wishes

A Choice Transition driven by a system with centralized power relies on that centralized power being foresighted enough and having enough internal coherence and fine-grained control to steer effectively.

In contrast, a Choice Transition driven by a system with decentralized power may face additional hurdles (though likely not insurmountable ones):

  • Coherence may be more difficult, as it may require coordination between actors with varied preferences
  • Foresight may be more difficult, as it may require anticipating multipolar dynamics and emergent forces

5) AI might empower forces that squeeze out deliberate steering

Although AI could improve capacity for foresight and coordination to steer (points 1 & 2 above), it’s conceivable that it could also leave less room for deliberate steering. If AI systems become sufficiently capable at optimizing for specific local objectives, we might see major increases in their use. That could, in turn, lead to the rise of forces emerging from competition and other interactions between the hyper-optimized AI systems (analogous to the unchosen forces emerging from human interactions).[10]

Consider current competitive domains like markets, politics, and the spread of ideas. Although there is a selection pressure towards efficiency, humanity is currently very far from the frontier, and so the most successful entities can have features which are not purely optimized for efficiency. A company, for example, can still succeed financially while furthering the values of its owners and employees, partly because its competitors cannot trivially scale to compete with it, and are constrained by the human consciences of their own owners and employees.

But AI-led competitors might lack these constraints, and so AI businesses might set the stage for much more aggressive selection. The result could be an environment where competitive pressures make it much harder for any system to maintain power directed at things other than efficiency and growth. This in turn could make it harder for forces to retain influence while deliberately steering towards broader values.

More generally, technological progress from AI could change the existing dynamics, and lead to new forces, or rebalancing of power between existing forces. This has the potential to change or delay a subsequent Choice Transition.

Could this forestall a Choice Transition altogether? Perhaps not — these hyper-optimized forces would, we tend to imagine, operate on behalf of some other (less optimized) actors, who could eventually use their understanding of the broader picture to forge agreements which enact a Choice Transition. However, if too many of these forces escaped meaningful oversight — or began to optimize aggressively against oversight — perhaps it could. At minimum, that scenario might alter the distribution of power in the world leading up to a Choice Transition.

6) Automation of research means it might all happen quickly

We’re used to having some time to feel out new regimes and work out how to adapt to them. Automation of research, and in particular automation of AI research, could accelerate the pace of change, potentially by a lot. Since this might drive changes — such as (1)–(4) just discussed — which facilitate a Choice Transition, there’s a real possibility that the world faces down the transition at a time when everything is moving extremely quickly. This means:

  • The Choice Transition might happen earlier than we would otherwise guess — potentially, shortly after the development of transformative AI
  • The Choice Transition might unfold quite rapidly — moving from a state in which nobody is close to meaningful steering power over the emergence of new forces, to one in which someone has highly effective steering power, without spending long at intermediate levels
  • Rapid technological progress might affect many other things in the world — potentially meaning that some actors complete a Choice Transition even while others are more bewildered than ever by large changes they have not had the time to fully adjust to

The space of possible Choice Transitions

There are many, many different ways that some “deliberate steering” force could come to prominence! Here are a few salient dimensions the possibilities vary on:

Who ends up steering?

  • An effective democratic world republic?
  • A single state turned hegemon?
  • An immortal dictator?
  • A single AI overlord?
  • A coalition of many disparate AI systems?
  • A broad coalition of humans (aided in coordination and action by superintelligent AI assistants)?
  • An overarching ideology?[11]

How do they come to be steering?

  • Bringing most of the world with them so the future is collectively chosen?
  • Working within the system to amass overwhelming resources and control?
  • Expropriating power from the rest of the world?

What do they value?

  • In principle pretty much anything is possible!
  • Salient variables that may or may not be present include:
    • Human welfare
    • Animal welfare
    • Preferences of various kinds of institutions and AI systems
    • The abstract good
    • Respect for tradition
    • Truth-seeking and reflection
    • Autonomy/liberty/dignity of other agents

Some of these possibilities, of course, seem much better than others. And the real differences between them may be far bigger than they initially appear — since, by definition, this force could end up steering across the entirety of our future … even as humanity, or our successors, may spread out so far through the cosmos as to make the Milky Way look tiny, across such a period as to make the history-to-date of multicellular life look brief.

Acknowledgements

In memory of Sebastian Lodemann, who was an organizer of a 2022 residency on AI futures at which these ideas were first developed. In addition to Sebastian, Owen would like to thank other participants at the residency, and several people for discussions after he shared some slides in summer 2023. Since Raymond joined the writing team, we would like to thank Jonas Vollmer, Tom Davidson, Rudolf Laine, Josh Jacobson, and especially Adam Bales, Max Dalton, and Rose Hadshar for helpful comments on our drafts, leading to deeper exploration of the ideas.

Appendix: comparison to existing frames

Comparison to normal AI x-risk frames

We agree with a lot in the traditional framing of AI x-risk:

  • AI could be game-changing, especially via automated research
  • Control over the future is at stake
  • AI agents are concerning since they might expropriate power → alignment work is very important
  • If we had effective world government then high levels of caution around AI development would be the obvious choice, but it’s a bit less obvious how to proceed in a highly competitive world

On the other hand we have some differences in emphasis:

  • This entire lens is more zoomed out — the Choice Transition could still be relevant in a world without transformative AI (although the prospect of transformative AI makes the Choice Transition more pressing)
  • We’re largely being descriptive, not trying to say what’s good for people (although this does inform our thoughts on that and we may write more about it in the future)
  • We are less focused specifically on (identifying and averting) bad outcomes, and more on overall trajectories
    • AI risk is fundamentally a transition risk rather than a state risk, so we kind of think the question of “what are we trying to do?” should have prominence over “what are we trying to avoid?”
  • We think it’s likely that AI will have transformative effects before we are at risk of having the future expropriated
    • We’re not 100% on this, and do see early scheming risk as deserving some attention, but it appears to us that most of the risks from AI come midway through an intelligence explosion
    • We therefore think that the prospect of reaching a human-led Choice Transition before having to face the hardest parts of alignment is a promising target
    • We think that some of the best outcomes might come via establishing a broad cooperative coalition that is able to effect a Choice Transition before any single actors could seize control
      • We think this probably isn’t feasible today, but may become feasible before the critical moment
    • All of the above makes us relatively more positive on the value of developing AI capabilities that help the epistemics of individual people or organizations, and capabilities that help facilitate coordination — i.e. categories 1 and 2 of the above discussion of AI and the Choice Transition

Bostrom's notion of a Singleton

This is closely related, but a society could have undergone a Choice Transition without solving all its internal coordination problems; conversely, a singleton need not have preferences about long-term outcomes (hence it’s more plausible that a singleton would slowly relinquish control).

Yudkowsky's notion of a Pivotal Act 

A pivotal act is not quite the same as an act which effects the Choice Transition, but any Choice Transition would presumably involve actions or processes which were, ex post, pivotal.

Finnveden, Riedel, and Shulman’s notion of lock-in

There are various kinds of lock-in that could happen without a Choice Transition; however, value lock-in essentially requires a Choice Transition. Conversely, an effective Choice Transition seems liable to lead at some point to value lock-in (although potentially this might be value lock-in following a long reflection).

MacAskill and Ord's notion of the Long Reflection

The Long Reflection is a natural thing to do shortly after the Choice Transition, and where the idea of the Choice Transition is value-neutral, the idea of a Long Reflection is normative, telling us that we should go through a Choice Transition and moreover starting to sketch some of the properties that would make for a good one.

Drexler’s notion of Paretotopia

This is a highly compatible concept; we think the Paretotopian nature of accessible futures could, if widely appreciated, make a cooperative Choice Transition more likely.

Carlsmith's notion of yang

In his essays on Otherness and control in the age of AGI, one of Carlsmith's central themes is "yang": projecting will out into the world. The Choice Transition corresponds to the empowerment of yang over yin on the grandest scale (determining which forces will shape the universe), and so the parts of those essays exploring how yang can go wrong are very relevant to the normative questions of what kinds of Choice Transition would be desirable.

Buterin’s notion of d/acc

Many of the strategies that we feel good about in aiming for good versions of the Choice Transition could fit under a “d/acc” label. But d/acc is fundamentally about strategies, whereas the notion of the Choice Transition is fundamentally about orienting to a large-scale feature of the world.

Alexander’s notion of Moloch

Alexander doesn’t give a precise definition of Moloch, but it appears to represent the emergent forces which arise from many people locally pursuing the things they want, without good coordination mechanisms. These are forces which, although they arise from human action, are chosen by no human. So the Choice Transition roughly corresponds to the terminal decline of Moloch.

  1. ^

    This piece will largely aim to describe the Choice Transition rather than make claims about how it ought to go or what we ought to do. This is largely because we don't want to muddy this initial analysis too much with value judgements or fine-grained empirical claims. Nonetheless, we encourage readers to consider these questions (and we ourselves hope to return to them in future work).

  2. ^

    Which things? We’re especially interested in explanations in the style of big history — that get at the complex or autopoietic patterns in the world which seem to be driving the creation of further complexity.

  3. ^

    Why was it overtaken? At least in this case, it seems like a big part of it is that cultural evolution could operate on faster timescales than biological evolution.

  4. ^

    Here some forces, like “market forces” or “science and technology”, feel like they’re reasonable analogues of the earlier forms of evolution. Other lines on the graph, like “ideologies” and “institutions”, are perhaps better thought of as aggregates of many smaller forces (one for each ideology or institution).

  5. ^

    e.g. alien invasion; false vacuum collapse; divine intervention; simulator shutdown.

  6. ^

    See e.g. C.S. Lewis’s essay The Abolition of Man, in which he expresses alarm that something like a Choice Transition will permit modernizers to eliminate a lot of what is important about humanity. Here is a quote:

    “Each generation exercises power over its successors: and each, in so far as it modifies the environment bequeathed to it and rebels against tradition, resists and limits the power of its predecessors. This modifies the picture which is sometimes painted of a progressive emancipation from tradition and a progressive control of natural processes resulting in a continual increase of human power. In reality, of course, if any one age really attains, by eugenics and scientific education, the power to make its descendants what it pleases, all men who live after it are the patients of that power. They are weaker, not stronger: for though we may have put wonderful machines in their hands we have pre-ordained how they are to use them.”

  7. ^

    In principle the steering entities could also choose to relinquish control altogether, in whole or in part. In practice this seems perhaps unlikely, for the same reason Omohundro’s basic AI drives are essentially about power-seeking. But if a lack of control were somehow important to their fundamental values (or revealed upon reflection to be so), it is certainly conceivable.

  8. ^

    Of course, such descriptions may already have some explanatory power in our world today. The point is not that this is an unprecedented new class of forces, but that this class could remain a source of new forces after the Choice Transition.

  9. ^

    These applications are sometimes studied under the label “Cooperative AI”.

  10. ^

    We owe this point to Rudolf Laine.

  11. ^

    We earlier listed "ideologies" as a different type of force than deliberate steering. Why then does it also appear on this list? Historically, ideologies have acted in a way that may encode preferences, but is not farsighted enough to deliberately steer. But if actors in general become more farsighted, and better at steering, then those acting on behalf of an ideology may be able to put that ideology firmly in the driving seat — and even though it has no cognition of its own, to ensure that only actors who will robustly follow its principles will be empowered to make crucial decisions.

4 comments:

There is a much older incarnation of this idea: "The Conditioners", as envisioned by C.S. Lewis in The Abolition of Man (1943):

The final stage will have come when “humanity” has obtained full control over itself. “Human nature will be the last part of Nature to surrender to Man.” The ruling minority will have become a caste of Conditioners, people “who really can cut out posterity in what shape they please.” From this moment onward, the human conscience will work the way humans want it to work – that is, the way wanted by the Conditioners.

The writing here was definitely influenced by Lewis (we quote TAoM in footnote 6), although I think the Choice Transition is broader and less categorically negative. 

For instance in Lewis's criticism of the potential abolition he writes things like:

The old dealt with its pupils as grown birds deal with young birds when they teach them to fly; the new deals with them more as the poultry-keeper deals with young birds— making them thus or thus for purposes of which the birds know nothing. In a word, the old was a kind of propagation—men transmitting manhood to men; the new is merely propaganda.

The Choice Transition as we're describing it is consistent with either of these approaches. There needn't be any ruling minority, and we don't assume humans can perfectly control future humans; we assume only that they (or any other dominant power) can appropriately steer emergent inter-human dynamics (if there are still humans).

How would you compare your ideas here to Asimov's fictional science of psychohistory? I ask because while reading this post I kept getting flashbacks to Foundation.

It's been a long time since I read those books, but if I'm remembering roughly right: Asimov seems to describe a world where choice is in a finely balanced equilibrium with other forces. I'm inclined to think this is implausible: if psychohistory could manage that level of control at great distances in time, one would expect it could exert more effective control over things at somewhat shorter distances.