I've been writing an ongoing series of Substack posts with a "Big History" view of cooperation - and I was encouraged to post at least some of the posts here. We left off the prior post with a claim that anarchy might be possible as a vision for utopia, even if it’s not achievable. That brings us to a more general version of the same question, following which I'll discuss cognitive inflexibility, historical path dependence, inevitability, inadequate equilibria, and paretotropism. I hope you enjoy!
Is Utopia Possible?
There are two different questions which could be meant when asking if Utopia is possible. The first question is whether there exists a state that the overwhelming supermajority of humans today would agree is tremendously better than the world today. The second question is whether such a state is achievable starting from the present while requiring neither infeasible levels of sacrifice by large groups, nor an interim state in which the world is significantly worse than today. For today’s post, I’ll try to explain why I think the answer to the first is obviously yes, then claim the answer to the second is less certain, but plausibly yes as well.
If we’re not restricting the type of improvement, isn’t the answer to “can things improve” obviously yes? We have some states of the world that are better than others, and moving in that direction should improve things. This fails for two reasons - nonlinearity and Pareto incomparability - but both are usually solvable.
Nonlinearity is a problem because even if there exists some state of the world that is better, we may be close to a local maximum, and getting to the better state involves making things worse. Imagine we’re in the space of possible games, and we are playing hearts, but want to switch to playing poker. Obviously, if we can switch games all at once, it’s easy - but there’s not obviously a single rule we can change that improves the game. And this, at scale, is the reason that the norm changes which Clayborne wanted in the last post aren’t tractable.
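To make the local-maximum problem concrete, here is a minimal toy sketch (my own illustration, with invented numbers - not anything from the sources discussed here): a process that only accepts single-step improvements stalls on a small nearby peak even though a much higher peak exists.

```python
# A minimal sketch (toy example): greedy one-step "improvement" stalls at a
# local maximum even though a far better state exists a few steps away.

def value(x: int) -> int:
    """Toy 'goodness of the world': a broad local peak at x=2, a tall narrow peak at x=8."""
    return max(5 - abs(x - 2), 20 - 5 * abs(x - 8))

def hill_climb(x: int, steps: int = 100) -> int:
    """Move to a neighboring state only if it is strictly better; stop otherwise."""
    for _ in range(steps):
        best = max([x - 1, x + 1], key=value)
        if value(best) <= value(x):  # no single small change makes things better
            return x
        x = best
    return x

print(hill_climb(0), value(hill_climb(0)))  # -> 2 5: stuck, never reaches x=8 (value 20)
```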
Pareto incomparability is a different type of problem, arising from the fact that many people have different preferences. As we explained in the past, the heterogeneity of preferences is actually a critical enabler for cooperation, but requiring strict Pareto improvements is too demanding a standard to allow many overall improvements. And certainly not all changes are improvements, but all improvements are necessarily changes - and in large systems, changes have lots of effects. So even if there are much better overall situations, there will almost inevitably be some people who lose out.
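As a toy illustration of why this bites (my own example, with made-up utility numbers): a change that makes almost everyone much better off still fails the strict Pareto test if a single person loses even slightly.

```python
# A small illustration (invented numbers): with heterogeneous preferences, a change
# that raises total welfare is usually not a strict Pareto improvement.

def is_pareto_improvement(before: list, after: list) -> bool:
    """True only if nobody is worse off and at least one person is better off."""
    return (all(a >= b for a, b in zip(after, before))
            and any(a > b for a, b in zip(after, before)))

status_quo = [10.0, 10.0, 10.0, 10.0]   # utilities of four hypothetical people
reform     = [18.0, 18.0, 18.0, 9.0]    # big gains for most, a small loss for one

print(sum(reform) - sum(status_quo))              # +23.0: a large overall improvement
print(is_pareto_improvement(status_quo, reform))  # False: blocked if we demand Pareto
```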
So there are two clear mathematical obstacles to finding changes - which may seem abstract. But looking at them together, we find a host of real-world issues. For example, most generally beneficial changes are opposed by small concentrated groups who would lose out. This is why we have a broken tax system in the US. Intuit, the maker of TurboTax, wants the tax code to stay complex so it can make money charging people to file their taxes. Any change in the tax code which greatly simplifies it would help the public, but it’s not a Pareto improvement.
Inflexibility
When I started my first job, there were copies of a book, “Who Moved My Cheese” - an extended analogy and morality tale about mice who were upset one day to find that the cheese in their maze had been moved. They faced the choice of complaining about the moved cheese, acting as if it had never moved, or adapting to reality. It was a source of mockery, since the point being made was so blindingly obvious - but over the couple of years I worked at the investment bank, I realized why senior management had bought them.
Even as several colleagues and managers made fun of the book, they failed to absorb the lesson. Legacy systems were maintained and run because people refused to use the replacements, processes that could be automated were done manually, and “the way we usually do things” was often used as a complete and sufficient explanation. The consequent inflexibility was maddening - no wonder management wanted to change things.
Of course, it’s possible that this is inevitable; no matter what system is in place, people end up adapting to it. And this isn’t even necessarily limited to people - we should expect that systems in general evolve towards stability in ways that prejudice them towards repeating rather than changing. Homeostasis is a central achievement of biology, and even of ecological systems. Repetition allows predictability and stability, which allows planning on an individual level, profit taking on a corporate level, and expansion and further changes on a species or system level. But that same stability entrenches the status quo - which underscores that while seeking homeostasis is understandable and useful, it’s not the best long-term strategy.
The challenges of inflexibility highlight how systems and individuals often resist change, even when it’s beneficial. But this resistance isn’t just a matter of personal habits or institutional inertia - it’s also about how past decisions shape the present. Given our framing - trying to understand why large-scale shifts toward utopia may be difficult - we need to examine the concept of path dependence, where historical choices lock us into trajectories that are hard to escape. This idea partly explains why change is resisted, and sets the stage for discussing how contingent events can have outsized impacts on our systems, before eventually returning to our topic.
Path Dependence
The example usually used to discuss path dependence is the Qwerty keyboard; the layout we all use is poorly suited to efficient typing. The essential reason it persists is a feedback cycle: whichever layout gained adoption at some point in the past built a userbase, people who already know how to type prefer the layout they know, and so new users end up learning the same layout.
One way to think about this is through the lens of chaos theory, and Polya’s urn provides a motivating example. You have an urn (a large container) with two types of balls in it, red and blue. You start with 1 red ball and 1 blue ball. You then draw a ball randomly, and if you draw a red ball, you put the red ball back in the urn, but you also add an extra red ball to the urn. Similarly, if you draw a blue ball, you put the blue ball back in, but you add an extra blue ball to the urn. You repeat this process over and over. Each time, you increase the proportion of the color of the ball you just drew.
At first, the system might seem balanced. You start with 50% red and 50% blue. But because the system is path-dependent (meaning the outcome depends on the sequence of draws), small differences at the beginning can snowball into much larger effects later on. If you happen to draw a red ball first, now the urn has 2 red balls and 1 blue ball. This increases the chance of drawing red again. If you draw red again - which is more likely than not - you now have 3 red balls and 1 blue ball. And this feedback loop can quickly result in a "lock-in" effect, where one color dominates. On the other hand, if you had randomly drawn blue at the start, the urn's proportions would favor blue, possibly leading to a completely different outcome.
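To make the lock-in concrete, here is a minimal simulation of the urn just described (my own sketch; the number of draws and runs is arbitrary). Every run starts from the same 50/50 state and follows the same rule, yet the long-run proportions differ wildly from run to run.

```python
# A quick simulation of the Polya urn described above: identical rules and
# starting point, but early luck gets locked in.
import random

def polya_urn(draws: int = 10_000) -> float:
    red, blue = 1, 1
    for _ in range(draws):
        if random.random() < red / (red + blue):
            red += 1      # drew red: return it and add another red
        else:
            blue += 1     # drew blue: return it and add another blue
    return red / (red + blue)

print([round(polya_urn(), 2) for _ in range(5)])
# e.g. [0.08, 0.71, 0.33, 0.95, 0.52] - very different "lock-ins" from the same start
```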
The analogy to the keyboard layout is clear - but historical path dependence argues more than this. In historical discussions, path dependence claims that contingent factors can dictate the path taken, so that where we are now is a function of the past. The point isn’t that, having arrived at a given endpoint, the path still matters; it’s that history is chaotic, and even if we’re stuck with the result, that result was not inevitable.
Utopian Path Dependence?
Altuğ Yalçintaş has a paper entitled “Historical Small Events and the Eclipse of Utopia: Perspectives on Path Dependence in Human Thought,” in which he quotes James Gleick and argues that history has a chaotic dependence on small events. To get there, he uses a variety of what are, in my view, unconvincing examples. “Path dependence is used in order to communicate a complaint about the historical condition of such institutions as VHS video systems or [Qwerty] keyboards.” The entire idea, he says, is a “textured product of imagination aiming at telling the reader the undesirability of the historical condition that leads social institutions to underachieve.”
But the main discussion of the paper leads up to a discussion of Thomas More’s Utopia. The point being made was that “the metaphor of Utopia had implied that it was possible to discover the ideal thinking systems… The book inspired changing the world with a faith in human perfection.” I will refrain from an extended diatribe, but should mention at the very least that any list of books urging humanity to build a perfect world should start with the Bible; More’s Utopia was a late addition to a long history of books urging such perfection.
Despite the overwhelmingly overdetermined fact of human idealization of the future, Yalçintaş claimed that More’s Utopia was exactly the class of historical aberration which causes underachievement. That is, the idea of utopia was a random event that had an outsized impact. And foreshadowing the anarchic heterotopia mentioned in the previous post which was discussed by Clayborne, he concluded that “the more we are free from this utopian ideal, the more likely we are to free ourselves.”
But to push back on the example: there is almost nothing more overdetermined than human striving for perfection and utopia, and chaotic systems are not sufficient to refute or change this. To see why, we should address a misunderstanding about what chaotic systems and path dependence prove. Specifically, while some systems are chaotic, it takes a misunderstanding to view chaos as evidence that the entirety of the system is contingent. Weather, for example, is the chaotic system par excellence - but it’s eminently predictable over the long term, as climate. Local rainfall on a given day is a product of chaotic factors, but the average rainfall in a region is remarkably steady. I can’t tell you the temperature on a given day 5 years from now, but I can safely predict that in December, New York will be cold and Australia will be warm.
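Because this weather-versus-climate distinction is doing real work in the argument, a toy sketch may help (my illustration, not from Yalçintaş or Gleick): the logistic map is a textbook chaotic system, so nearby starting points quickly become unpredictable in detail, yet the long-run average barely moves.

```python
# A sketch of "chaotic in detail, predictable in aggregate" (toy example, not a
# weather model): the logistic map at r=4 is fully chaotic, so tiny differences
# in the start make individual steps unpredictable, but the long-run average -
# the "climate" - is essentially fixed.

def trajectory(x0: float, n: int = 100_000) -> list:
    xs = [x0]
    for _ in range(n):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))  # x_{t+1} = 4 x_t (1 - x_t)
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)                 # an imperceptibly different start
print(abs(a[50] - b[50]))                  # already of order 1: the "weather" has diverged
print(sum(a) / len(a), sum(b) / len(b))    # both close to 0.5: the "climate" is stable
```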
An example from Yalçintaş’s essay riffs on Edmund Wilson’s “To the Finland Station.” Wilson portrayed Communism as an inevitable culmination of history. Yalçintaş, however, suggests that if Lenin’s brother had not been executed by the Tsar, and Lenin had not then started reading Marx by chance at university, perhaps he never would have arrived at the Finland Station early in the Russian revolution. But even with real chaos, there is often a larger-scale predictability about these complex systems - not in the details, but in the broad strokes. I don’t think even Yalçintaş would argue that the various communist revolutions couldn’t have occurred without Lenin, though their path could certainly have been different.
Historical happenstance has an impact - but I’d say that the overall trends driven by economic and social forces are likely to be unchanged. That is, some form of revolution in states with widening wealth gaps would have been likely, even if the details of the revolutions would have differed had Lenin never championed the Communist cause. Similarly, the revolutions might have led to less disastrous economic outcomes if Marx had not formulated his labor theory of value.
Equilibria Are Not Inevitable (Neither are Cars)
The claim above - that the overall trends driven by economic and social forces are likely to be unchanged - is a limited one. That is, as the different possible paths for communism suggest, even where economic forces are similar and technological capabilities are the same, very different outcomes are possible.
To take a less speculative example, we can compare modes of transport; different cities have very different balances of public and private transit. In many cities, especially in the United States, the notion of voluntarily being carless and relying entirely on public transit is laughable. But many New Yorkers feel the opposite - owning a car is a burden. And the same is true in many large, dense cities internationally. There are many cities where a bicycle is more convenient than a car, and where this happened, it was partly historical accident, but also partly a design choice.
Small changes can create immense shifts in the eventual outcome - the dismantling of electric trams during the Great Depression may not have been a grand conspiracy. In cities which did not enforce a right-of-way for trams, cars subjected trams to the same traffic, eliminating their advantage. The 1935 Wheeler-Rayburn Act, which prevented electric companies from owning streetcars - and which may have been promoted by car companies - also contributed. And the fact that judges undermined convictions of car companies for creating illegal monopolies didn’t help. But more centrally, the decline was also a simple product of the fact that cities charged operators a franchise fee based on gross revenue, while fares were left constant. Reflecting the earlier point about individual inflexibility, Peter Norton notes that “the idea of the 5-cent fare had become ingrained as something of a birthright among many members of the public.” As margins shrank, the streetcar deals became uneconomical, and the companies folded.
But once a city is dominated by cars rather than transit, unless some other geographic feature makes it impossible, suburbia and sprawl are a natural result. And once large portions of the population live in low-density areas, outside the easy reach of public transit, the chance to embrace transit-centric solutions for the city is lost. Suburbanites won’t visit the city center if it’s inaccessible by car, so repurposing streets for bicycles and public transit becomes politically impossible. On the other hand, if most people rely on transit, repurposing streets into bike lanes and tram lines can further cement the unfriendliness to cars.
Leaving Inadequate Equilibria
As noted earlier, inflexibility is a very deeply embedded human problem, and as we see from city design, it extends far past personal decisions. Eliezer Yudkowsky’s concept of inadequate equilibria makes this clear; it explains that many systems end up locked into situations worse than a simple minority veto of improvements would produce - situations that no single party can improve on their own. This happens when more value is accessible only if many groups coordinate on a change, or only if there is enough churn and instability in the system to shake it out of its previous stable equilibrium.
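As a minimal sketch of this kind of lock-in (my own stylized example with invented payoffs, not anything from Yudkowsky’s book): with three groups that all need to adopt a better system before it pays off, “everyone stays” and “everyone switches” are both stable, and no group can escape the worse equilibrium on its own.

```python
# A stylized coordination game (invented payoffs): the better system only pays
# off with full adoption, so the mediocre status quo is a stable equilibrium.
from itertools import product

N = 3  # three groups that would all need to coordinate

def payoff(choice: str, others_switching: int) -> int:
    if choice == "switch":
        return 10 if others_switching == N - 1 else -2  # switching without the others is costly
    return 3                                            # the status quo: mediocre but safe

def is_equilibrium(profile: tuple) -> bool:
    """No group can improve its own payoff by unilaterally changing its choice."""
    for i, choice in enumerate(profile):
        others = sum(c == "switch" for j, c in enumerate(profile) if j != i)
        alternative = "stay" if choice == "switch" else "switch"
        if payoff(alternative, others) > payoff(choice, others):
            return False
    return True

for p in product(["stay", "switch"], repeat=N):
    if is_equilibrium(p):
        print(p, [payoff(c, sum(x == "switch" for j, x in enumerate(p) if j != i))
                  for i, c in enumerate(p)])
# ('stay', 'stay', 'stay') [3, 3, 3]           <- the inadequate equilibrium
# ('switch', 'switch', 'switch') [10, 10, 10]  <- better for everyone, but needs coordination
```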
Eric Desjardins’ beautiful essay, “Reflections on Path Dependence and Irreversibility: Lessons from Evolutionary Biology,” explains that path dependence is not necessarily related to irreversibility - but many have argued, or at least assumed, the opposite. The implication of that incorrect reasoning is that once we’re stuck in an inadequate equilibrium, we can’t emerge from it. But despite the earlier examples of path dependence creating structural lock-in, these situations are in fact often altered - either intentionally or chaotically.
A decade ago, Sam Bhagwat made an argument on Ribbonfarm that there is a tradeoff between replaceability and economic equilibrium. That is, people have a desire “to make oneself irreplaceable” - a difficult task when systems seek equilibrium and commoditization and simplicity are usually locally optimal. If you could be fired and replaced by anyone, your work isn’t going to be personally meaningful. (And this seems like a strong reason to pursue meaning in interpersonal relationships - the strategy of most of humanity.) But, Bhagwat argues, “specific types of big systems are path-dependent and tend not to equilibrate” - leading to opportunity for individuals to fight commoditization.
This seems to provide another reason that things end up sub-par and path dependent; people (indirectly) want it that way. Stasis is comfortable, but adequate equilibria lead to commoditized meaninglessness, so we might be tempted to say that instability and inadequacy are our true goal. But despite the cleverness of this extension to Bhagwat’s claim, it is wrong. As an avowed mistake theorist, I will generally maintain that people do not conspire to have a dysfunctional world - it’s accidental. People just take advantage, riding local incentive gradients to create meaning. And this illustrates a possible problem: local incentives undermining global change.
Drive to Success, or Drive to Utopia?
One example of the drive to instability is Schumpeter's idea of creative destruction - the claim that innovators displacing established firms is a key driver of economic progress. Advocates argue that the destruction and supplanting of industries is necessary for innovation. That is, monopoly profits and stable systems create the conditions needed for innovators to undercut prices and displace the previous winners. But even here, I suspect that even the most strident advocates do not think that disruption qua disruption is positive. Instability is a necessary cost, not a goal.
In Silicon Valley, usefulness (i.e. profits) is created not from permanent inadequacy of systems, but from each generation of entrepreneurs supplanting the previous generation. But this isn’t the full picture - the mission of Silicon Valley startups is never just to displace incumbents - it’s to make the world better. Jim Collins’ “Built to Last” lionized visionary companies, and suggested each has a Big Hairy Audacious Goal - “Think of the NASA moon mission of the 1960s. The best BHAGs require both building for the long term AND exuding a relentless sense of urgency.” This allows the individuals Bhagwat was discussing to create meaning by setting goals - by finding the BHAG that answers his question.[1]
A generation of web entrepreneurs said the internet was supposed to usher in a new era of democracy, openness, and communication. Google wanted to organize the world’s information, without being evil. Uber plans to ignite opportunity by setting the world in motion. Meta wants to give people the power to build community and bring the world closer together. Amazon wants to be earth's most customer-centric company, best employer, and safest place to work - all at the same time. And the ideals only get more grandiose from there. LinkedIn wants to create economic opportunity for every member of the global workforce. Tesla wants to accelerate the world’s transition to sustainable energy. Patagonia says it is in business to save our home planet.
Each of these is trying to escape a huge inadequacy, fix the problems, and rebuild their industry, or the world. Are these pathways to Utopia? Perhaps, but the visions are for improvements far short of utopia - and not at all the type of heterotopian outcomes that Yalçintaş or Clayborne imagined. On the other hand, they are avoiding the problems of getting stuck with only Pareto improvements, or having no tractable path.
This approach to improvement seems deeply unsatisfying because it moves slowly, and because it locks in the oppressive hand of capitalist overoptimization, providing growth at the cost of human values. And to the extent these companies are successful, they reach new local equilibria without significant changes to the rest of society. That is, the drive to improvement without coordination and understanding of goals is inevitably going to lead to inadequate equilibria.
Non-marginal improvements?
Paretotropism, in the terminology of Eric Drexler and Mark Miller, refers to improvements which move towards the really big gains that are possible. In the below left diagram, Mark has collapsed the N-dimensional tradeoff space of all the actors to a 2-player version, and explained that there’s a large region where everyone wins. The below right diagram, on the other hand, shows that there’s an area where we need cooperation, where collective action is necessary.
The contention of Paretotropians is that there are enough physical resources and achievable technological and sociological changes within reach to allow some outcome far better than those which need collective action, either the top right of the diagram, or far past it in the same direction. But getting there requires jumping over a large gap of hard problems which were discussed - and such large changes induce significant instability, or at least uncertainty. The question that remains is how to get there - which requires returning to our actual topic, cooperation.
The theory is that the gains which negative-sum competition would otherwise destroy are so large that we should be able to keep everyone on the same page. And this logic has mostly been enough in areas where there are clear global benefits, and large enough downsides to defection.
For example, there are treaties banning military use of outer space. Obviously this is a very general ban, but in the relevant areas it’s critical. Satellite warfare is a drastically offense-dominant domain, since destroying satellites is far, far easier than defending them - if defense is practical at all, which is unclear to me. And the externalities of warfare in these orbits are horrific; if satellites are destroyed, the density of debris in low Earth orbit would likely reach a critical point, causing additional collisions to generate more debris, which in turn leads to further collisions. This cascade effect, known as Kessler Syndrome, would make space launches and operations in affected orbits increasingly difficult or even impossible - denying the benefits of satellites to everyone.
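A deliberately crude sketch of that runaway dynamic (my toy model with invented parameters, not a real orbital-debris simulation): if collisions scale with the square of debris density while natural decay is only linear, then below a threshold the debris field shrinks, and above it the cascade runs away.

```python
# Toy cascade model (invented parameters): collisions ~ density^2, decay ~ density.
def debris_after(start: float, years: int = 50,
                 growth: float = 2e-6, decay: float = 0.02) -> float:
    d = start
    for _ in range(years):
        d = d + growth * d * d - decay * d  # fragments created minus debris deorbiting
        d = min(d, 1e12)                    # cap so the toy model stays finite
    return d

# The threshold in this toy model is decay/growth = 10,000 pieces.
print(f"{debris_after(5_000):,.0f}")    # below threshold: the debris field shrinks
print(f"{debris_after(40_000):,.0f}")   # above threshold: runaway growth (hits the cap)
```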
A similar but not identical issue is the undersea cable network, which is critical infrastructure for global communication. Again, defending it is effectively impossible - but there is less lock-in if things escalate. These cables are frequently damaged by accident. The “International Tribunal for the Law of the Sea” is a body empowered to deal with this, but most of the problems are dealt with bilaterally. Unfortunately, recent actions indicate that some countries are sabotaging them - a dangerous precedent.
Nuclear power is another example; nuclear weapon proliferation poses a danger to geopolitical stability, with the ever-present threat of nuclear war. Systems were put in place to allow many countries to have reactors while monitoring them for misuse, enrichment, or diversion of weaponizable material. This regime has been less than perfectly successful, but was shockingly successful compared to expectations in the 1950s and 1960s, as I often point out.
In all three cases, there were local incentives to build the globally advantageous technology, and global incentives not to defect. Even given that, in all three cases, international consensus about what was not allowed, and the development of norms to reinforce them, was critical. To return to our overall theme, I will claim that global telecommunications, in the first two examples, and clean power, in the third, are at least examples of very large systemic changes enabled by such local following of gradients. This is an existence proof, of a sort - but as pointed out in the discussion of inadequate equilibria, I don’t claim that every way local gradients are followed leads to positive change. And at the very least, it took norms and shaping of the system to enable these successes.
A potentially key question, then, is whether pareto-topian outcomes have similar structures, or other relatively solvable ones - or whether large scale changes to achieve significantly better outcomes require revolutions. And that’s something we will certainly return to in later posts, discussing when and how local improvements can enable advances that are not just quantum - that is, abrupt and tiny, despite sounding monumental - but substantive shifts. And my answer is that this is complicated, and I’m still working it out - but I have further thoughts to share.
Conclusion
The examples of global telecommunications and clean power show us that big, positive changes can happen when local incentives align with global cooperation. But this isn’t always the case. Systems can get stuck, and following local gradients doesn’t guarantee great outcomes. As we’ve seen, the interplay of path dependence, inadequate equilibria, and the shaping of norms means that it often takes some mix of deliberate effort, foresight, strong norms, and luck to make things work at scale. The real question is what we should expect, and whether we can use these lessons to enable utopia. Can we escape inadequate equilibria and aim for real, transformative shifts instead of just tinkering around the edges? I’m not certain - it’s messy and complicated, and there’s a lot to unpack.
Before we get back to the big picture, though, the next post will take a brief digression into modern history. I want to briefly contrast historical contingency and inevitability. These reflections will set the stage for returning to the broader themes of cooperation, systemic change, and understanding the possible future. But to explore these ideas, I’ll finally address the technological elephant in the room: AI and large language models. Given that the plausible default pathway is catastrophe if not extinction, we need to figure out if and how historical contingency and technological development can go not just well, but even end up somewhere in the realm of achievable hetero/pareto-topia. Humanity muddling through the risks and avoiding disaster with continued “normalcy” is perhaps the easiest future to imagine, but given rapid changes, current trends in growth, and the structure of catastrophes, extremes seem far more likely.
Given that, the more we know about the conditions under which transformative shifts have occurred, and the dynamics involved, the better. So we want to understand, or at least describe, what types of change are more likely to become inevitable or systemic, and what role technology and norms play in shaping those trajectories. That means our discussion of the future of cooperation will continue with the past - for just one more post - which I won't be posting on the forum, so feel free to subscribe.