In Strategic implications of AIs' ability to coordinate at low cost, I talked about the possibility that different AGIs could coordinate with each other much more easily than humans can, by doing something like merging their utility functions together. It now occurs to me that another way AGIs could greatly reduce coordination costs in an economy is for each AGI (or copies of an AGI) to profitably take over much larger chunks of the economy than companies currently control. This doesn't even require AGIs with explicit utility functions; it could be done with, for example, copies of an AGI that are all corrigible/intent-aligned to a single person.
Today, many industries have large economies of scale, due to things like fixed costs, network effects, and reduced deadweight loss when monopolies in different industries merge (because they can internally charge each other prices equal to marginal cost). But coordination costs among humans increase super-linearly with the number of people involved (see Moral Mazes and Short Termism for a related recent discussion), which creates diseconomies of scale that counterbalance the economies of scale, so companies tend to grow to a certain size and then stop. An AGI-operated company, where for example all the workers are AGIs intent-aligned to the CEO, would eliminate almost all of the internal coordination costs that are caused by value differences, such as the dynamics described in Moral Mazes, "market for lemons" problems and other trades lost to asymmetric information, principal-agent problems, monitoring/auditing costs, costly signaling, and suboptimal Nash equilibria in general, allowing such companies to grow much bigger. In fact, from purely the perspective of maximizing the efficiency/output of an economy, I don't see why it wouldn't be best to have (copies of) one AGI control everything.
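To make the intuition concrete, here is a toy model (my own illustration, not from the post itself; the exponents and the cost coefficient are arbitrary assumptions chosen only to show the shape of the tradeoff): if gross output grows as n^a with the number of workers n while human coordination costs grow faster, as n^b with b > a, then net output peaks at a finite firm size, and dropping the coordination-cost term makes bigger always better.

```python
# Toy model (illustrative assumptions only): a firm with n workers produces
# gross output n**a (economies of scale), but human coordination costs grow
# super-linearly as c * n**b with b > a (diseconomies of scale).
import numpy as np

def net_output(n, a=1.2, b=1.5, c=0.1, coordination_costs=True):
    """Net output = gross output minus (optional) coordination costs."""
    gross = n ** a
    coord = c * n ** b if coordination_costs else 0.0
    return gross - coord

sizes = np.arange(1, 100_001, dtype=float)
human_firm = net_output(sizes)                          # coordination costs present
agi_firm = net_output(sizes, coordination_costs=False)  # costs (nearly) eliminated

# With coordination costs, net output peaks at a finite firm size (~1024 here);
# without them, net output keeps rising with n, so scale economies dominate.
print("optimal human-run firm size:", int(sizes[np.argmax(human_firm)]))
print("AGI-run firm output still rising at n = 100,000:",
      bool(agi_firm[-1] > agi_firm[-2]))
```

Nothing hinges on the particular functional forms; the point is only that removing the super-linear cost term removes the interior optimum that currently caps firm size.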
If I'm right about this, it seems quite plausible that some countries will foresee it too and, as soon as it can feasibly be done, nationalize all of their productive resources and place them under the control of one AGI (perhaps intent-aligned to a supreme leader or to a small, highly coordinated group of humans). This would allow them to out-compete any other countries that are not willing to do this (and don't have some other competitive advantage to compensate for the disadvantage). This seems to be an important consideration that is missing from many people's pictures of what will happen after (e.g., intent-aligned) AGI is developed in a slow-takeoff scenario.
Planned summary:
Economies of scale would normally mean that companies would keep growing larger and larger. With human employees, the coordination costs grow superlinearly, which ends up limiting the size to which a company can grow. However, with the advent of AGI, many of these coordination costs will be removed. If we can align AGIs to particular humans, then a corporation run by AGIs aligned to a single human would at least avoid principal-agent costs. As a result, the economies of scale would dominate, and companies would grow much larger, leading to more centralization.
Planned opinion:
This argument is quite compelling to me under the assumption of human-level AGI systems that can be intent-aligned. Note though that while the development of AGI systems removes principal-agent problems, it doesn't remove issues that arise due to information asymmetry.
It does seem like this doesn't hold with something like CAIS, where each AI service is optimized for a particular task, since there likely will be principal-agent problems between services.
It seems like the argument should mainly make us more worried about stable authoritarian regimes: the main effect this argument points to is a centralization of power in the hands of the AGI's overseers. This won't happen with companies, because we already have institutions that prevent companies from gaining too much power, and there doesn't seem to be a strong reason to expect that to stop. It could happen with governments, but if long-term governmental power still rests with the people via democracy, that seems okay. So the risky situation seems to be when a government gains power and the people no longer have effective control over it. (This would include scenarios with e.g. a government that has sufficiently good AI-fueled propaganda that it always wins elections, regardless of whether its governing is actually good.)
Oh, right, I forgot we were considering the setting where we already have AGI systems that can be intent-aligned. This seems like a plausible story, though it only implies that there is centralization within the corrupted nation.