Richard Ngo recently wrote:

> Instead of analyzing whether AI takeoff will be “fast” or “slow”, I now prefer to think about the spectrum from concentrated takeoff (within one organization in one country) to distributed takeoff (involving many organizations and countries).

I agree that this element of AI takeoff is highly important. The degree of AI concentration, or conversely the extent of AI diffusion, is crucial to predicting how AI technologies will unfold and the specific risks they might entail. Understanding this variable therefore matters not only for forecasting AI’s trajectory but also for informing the policies that should govern its development.

However, I believe that discussions of AI concentration often lack the clarity needed to pin down what the speaker is actually referring to. When people talk about the concentration of AI, they frequently aren't specific about what they mean by "concentration," and this vagueness can lead to confusion and miscommunication. To address this, I want to propose three distinct dimensions of AI concentration, which I believe are often conflated but should be treated separately in any serious discussion of the topic.

These dimensions are:

  1. The concentration of AI development itself. Here, the question is: to what extent is the development of cutting-edge AI models dominated by a small number of actors? If a handful of companies or governments are responsible for the majority of significant AI innovations, or the majority of state-of-the-art models, then we would say AI development is highly concentrated. This can be quantitatively assessed by looking at the market share or share of technical contributions of the top AI developers, whether measured in terms of compute resources, revenue, or research breakthroughs (see the sketch after this list for one way to operationalize such measures). Conversely, if many organizations are actively contributing to AI advancements, then development is more diffuse.
  2. The concentration of AI service providers. Even if AI development is monopolized by a few key players, this would not necessarily mean that AI services are similarly concentrated. The developers of AI models might license their technologies to numerous companies, who in turn host these models on their servers and make them accessible to a broad range of users. In this scenario, while the models originate from a small number of firms, the provisioning of AI services is decentralized, with many independent providers offering access to the AIs that merely derive from a few concentrated sources.
  3. The concentration of control over AI services. This focuses on who has the power to direct AI systems and determine what tasks they perform. At one extreme, control could be highly centralized: for instance, all major AI systems could be under the command of a single government or individual, such as a scenario in which every relevant AI system obeys the direct orders of the U.S. president and sits within a single chain of command. At the other extreme, control could be highly decentralized, with billions of individual users able to dictate how AI systems are deployed, whether by renting AI capabilities for specific tasks or by interacting directly with service providers, e.g. through an API or a chat interface. This end of the spectrum could also be realized if there were billions of distinct AI agents autonomously pursuing their own separate, individual objectives.

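To make the first of these dimensions concrete, here is a minimal sketch of how concentration in AI development could be quantified, using two standard concentration measures applied to entirely hypothetical market shares. The lab names and numbers below are illustrative assumptions, not real estimates, and the same calculation works for whichever input you prefer (compute, revenue, or research output):

```python
# Minimal sketch: quantifying how concentrated AI development is,
# using purely hypothetical market-share figures (not real estimates).

# Hypothetical shares of frontier AI development, by whatever input you
# prefer (compute, revenue, or research output).
shares = {
    "Lab A": 0.35,
    "Lab B": 0.30,
    "Lab C": 0.20,
    "Others": 0.15,  # lumping the remainder together slightly overstates the index
}

# Herfindahl-Hirschman Index: sum of squared shares. With fractional
# shares it ranges from near 0 (highly diffuse) to 1.0 (one developer
# holds everything).
hhi = sum(s ** 2 for s in shares.values())

# Top-k concentration ratio: combined share of the k largest developers.
def top_k_share(share_map: dict, k: int) -> float:
    return sum(sorted(share_map.values(), reverse=True)[:k])

print(f"HHI: {hhi:.3f}")                             # 0.275 with these numbers
print(f"Top-2 share: {top_k_share(shares, 2):.2f}")  # 0.65 with these numbers
```

The same measures could be applied to the second dimension by swapping in the shares of AI service providers rather than developers.
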
These three axes—development, service provisioning, and control—are conceptually distinct, and each can vary independently of the others. For example, AI development might be concentrated in the hands of a few large organizations, but those organizations could still distribute their models widely, allowing for a more decentralized ecosystem of service providers. Alternatively, you could have a situation where AI services are concentrated under a small number of providers, but users retain considerable autonomy over how those services are used, and how AIs are fine-tuned to match individual preferences, leading to a decentralized form of control.

Based on the current state of AI, I believe we are witnessing clear indications that AI development is becoming concentrated, with a small number of actors leading the way in producing the most advanced models. There is also moderately strong evidence that the provisioning of AI services is consolidating, as larger players build out vast AI infrastructures. However, when it comes to control, the picture seems to be more diffuse, with a large number of users likely retaining substantial power over how AI systems are applied through their usage and demand for the technology.

These distinctions matter a great deal. Without clearly distinguishing between these different dimensions of AI concentration, we risk talking past one another. For instance, one person might argue that AI is highly concentrated (because a few firms dominate development), while another might claim that AI is highly decentralized (because billions of users ultimately have control over how the technology is used). Both could be correct, yet their conclusions might seem contradictory because they are referring to different axes of concentration.

This distinction between different forms of AI concentration is especially important when considering how it relates to the risks of AI misalignment. A common concern is that all powerful AIs in the world could effectively "merge" into a single, unified agent, acting in concert toward a singular goal. In this scenario, if the collective AI entity were misaligned with human values, it could have strong incentives to violently seize control of the world or orchestrate a coup, posing a dire existential threat to humans. This vision of risk implicitly assumes a high degree of concentration in the third sense, in which control over AI systems is centralized and tightly unified under one entity—the AI agent itself.

However, this outcome becomes less plausible if AI is concentrated only in the first or second sense—meaning that development or service provisioning is controlled by a small number of organizations, but control over what AIs actually do remains decentralized. If numerous actors, such as individual users, retain the ability to direct AI systems toward different tasks or goals, including through fine-tuning models, the risk of all AIs aligning under a single objective diminishes. In this more decentralized control structure, even if a few organizations dominate AI development or service infrastructure, the risk of a unified, misaligned super-agent that is more powerful than the entire rest of the world combined becomes significantly less pressing.

Therefore, in discussions about the future of AI, it is crucial to be precise about which dimension of concentration we are referring to. Terms like "concentrated AI takeoff" and "distributed AI takeoff" are too ambiguous on their own: they fail to pick out some highly important and policy-relevant features of the situation we find ourselves in. To help mitigate this issue, I suggest we adopt clearer language that differentiates between concentration in development, service provisioning, and control in order to have more meaningful conversations about the trajectory of AI.

Comment from Dana:

I do not really understand your framing of these three "dimensions". The way I see it, they form a dependency chain. If either of the first two is concentrated, those actors can easily cut off access during takeoff (and I would expect this). If both of the first two are diffuse, the third will necessarily also be diffuse.

How could one control AI without access to the hardware/software? What would stop one with access to the hardware/software from controlling AI?

The post author replied:

> How could one control AI without access to the hardware/software? What would stop one with access to the hardware/software from controlling AI?

One would gain control by renting access to the model, i.e., the same way you can control what an instance of ChatGPT currently does. Here, I am referring to practical control over the actual behavior of the AI: what tasks it performs, how it is fine-tuned, and what inputs are fed into the model.
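
As a purely illustrative sketch of what this kind of rented, practical control looks like, consider a request to a hypothetical hosted-model API. The endpoint, API key, and model name below are made-up placeholders rather than any real provider's interface; the point is only that the customer, not the host, chooses the instructions, the task, and the inputs:

```python
import requests

# Hypothetical hosted-model endpoint and credentials (placeholders, not a real service).
API_URL = "https://api.example-ai-host.com/v1/chat"
API_KEY = "sk-example-key"

# The renter decides what the model is asked to do: the system prompt,
# the task, and the inputs are all chosen by the customer, while the
# provider merely hosts the model and serves the request.
payload = {
    "model": "hosted-model-1",
    "messages": [
        {"role": "system", "content": "You are a contract-review assistant."},
        {"role": "user", "content": "Summarize the key obligations in this lease: ..."},
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
print(response.json())
```

The same asymmetry extends to fine-tuning: the provider hosts the training run, but the customer supplies the data and the objective, which is exactly what the third dimension of concentration is tracking.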

This is not too dissimilar from the high level of practical control one can exercise over, for example, a rented AWS server. While Amazon hosts these servers, and thereby has the final say over what happens to the hardware in the case of a conflict, the company is nonetheless dependent on customer revenue, meaning it cannot feasibly reserve all of its servers for its own internal purposes. As a consequence of this practical constraint, Amazon rents these servers out to the public and does not substantially limit user control over them, giving end users broad discretion over what software ultimately runs.

In the future, these controls could also be determined by contracts and law, analogously to how one has control over their own bank account, despite the bank providing the service and hosting one's account. Then, even in the case of a conflict, the entity that merely hosts an AI may not have practical control over what happens, as they may have legal obligations to their customers that they cannot breach without incurring enormous costs to themselves. The AIs themselves may resist such a breach as well.

In practice, I agree these distinctions may be hard to recognize. There may be cases in which we thought control over AI was decentralized, but in fact power over the AIs was more concentrated or unified than we believed, as a consequence of centralization in the development or provision of AI services. Indeed, perhaps real control was in the hands of the government all along, since it could always pass a law nationalizing AI and take control away from the companies.

Nonetheless, these cases seem adequately described as a mistake in our perception of who was "really in control" rather than an error in the framework I provided, which was mostly an attempt to offer careful distinctions, rather than to predict how the future will go.

If one actor—such as OpenAI—can feasibly get away with seizing practical control over all the AIs they host without incurring high costs to the continuity of their business through loss of customers, then this may indeed surprise someone who assumed that OpenAI was operating under different constraints. However, this scenario still fits within the framework I've provided: it merely describes a case in which one was mistaken about the true degree of concentration along one axis, not a case in which one of my concepts intrinsically fits reality poorly.