This post seems to misunderstand what it is responding to, and to underplay a key point: that material needs will likely be met (and selfish non-positional preferences mostly satisfied) due to extreme abundance (if humans retain control).
It mentions this offhand:
Given sufficiently strong AI, this is not a risk about insufficient material comfort.
But this was a key thing people were claiming when arguing that money won't matter. They were claiming that personal savings will likely not be that important for guaranteeing a reasonable amount of material comfort (or that a tiny amount of personal savings will suffice).
It seems like there are two importantly different types of preferences:
- Selfish, non-positional (and mostly satiable) preferences, which extreme abundance will likely satisfy (if humans retain control).
- Scope sensitive preferences (e.g. caring about what happens to galaxies), for which resources and power continue to matter.
Indeed, for scope sensitive preferences (that you expect won't be shared with whoever otherwise ends up with power), you want to maximize your power, and insofar as money allows for more of this power (e.g. buying galaxies), money looks good.
However, note that if these preferences are altruistic and likely to be the kind of thing other people might be sympathetic to, personal savings are IMO likely to be not-that-important relative to other actions.
Further, I do actually think that the default outcome is that existing governments at least initially retain control over most resources such that capital isn't clearly that important (for long run scope sensitive preferences), but I won't argue for this here (and the post does directly argue against this).
This post seems to misunderstand what it is responding to
fwiw, I see this post less as "responding" to something, and more as laying out considerations on their own, with some contrasting takes as a foil.
(On Substack, the title is "Capital, AGI, and human ambition", which is perhaps better)
that material needs will likely be met (and selfish non-positional preferences mostly satisfied) due to extreme abundance (if humans retain control).
I agree with this, though I'd add to "if humans retain control": some sufficient combination of culture/economics/politics/incentives must also continue opposing arbitrary despotism.
I also think that even if all material needs are met, avoiding social stasis and lock-in matters.
Scope sensitive preferences
Scope sensitivity of preferences is a key concept that matters here, thanks for pointing that out.
Various other considerations about types of preferences / things you can care about (presented without endorsement):
However, note that if these preferences are altruistic and likely to be the kind of thing other people might be sympathetic to, personal savings are IMO likely to be not-that-important relative to other actions.
I agree with this on an individual level. (On an org level, I think philanthropic foundations might want to consider my arguments above for money buying more results soon, but this needs to be balanced against higher leverage on AI futures sooner rather than later.)
Further, I do actually think that the default outcome is that existing governments at least initially retain control over most resources such that capital isn't clearly that important, but I won't argue for this here (and the post does directly argue against this).
Where do I directly argue against that? A big chunk of this post is pointing out how the shifting relative importance of capital v labour changes the incentives of states. By default, I expect states to remain the most important and powerful institutions, but the frame here is very much human v non-human inputs to power and what that means for humans, without any particular stance on how the non-human inputs are organised. I don't think states v companies v whatever fundamentally changes the dynamic; with labour-replacing AI, power flows from data centres, other physical capital, and whoever has the financial capital to pay for it, and it sidesteps humans doing work. That is the shift I care about.
(However, I think which institutions do the bulk of decision-making re AI does matter for a lot of other reasons, and I'd be very curious to get your takes on that)
My guess is that the most fundamental disagreement here is about how much power tries to get away with when it can. My read of history leans towards: things are good for people when power is correlated with things being good for people, and otherwise not (though I think material abundance is very important too and always helps a lot). I am very skeptical of the stability of good worlds where incentives and selection pressures do not point towards human welfare.
For example, assuming a multipolar world where power flows from AI, the equilibrium is putting all your resources on AI competition and none on human welfare. I don't think it's anywhere near certain we actually reach that equilibrium, since sustained cooperation is possible (c.f. Ostrom's Governing the Commons), and since a fairly trivial fraction of the post-AGI economy's resources might suffice for human prosperity (and since maybe we in fact do get a singleton—but I'd have other issues with that). But this sort of concern still seems neglected and important to me.
Under log returns to money, personal savings still matter a lot for selfish preferences. Suppose the material comfort component of someone's utility is 0 utils at a consumption of $1/day. Then a moderately wealthy person consuming $1000/day today will be at 7 utils. The owner of a galaxy, at maybe $10^30 / day, will be at 69 utils, but doubling their resources will still add the same 0.69 utils it would for today's moderately wealthy person. So my guess is they will still try pretty hard at acquiring more resources, similarly to people in developed economies today who balk at their income being halved and see it as a pretty extreme sacrifice.
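Spelling out the arithmetic (under the log-utility assumption above, with a util defined as the natural log of daily consumption in dollars):

```latex
\begin{align*}
u(c) &= \ln\bigl(c / (\$1/\mathrm{day})\bigr) \\
u(\$1000/\mathrm{day}) &= \ln 10^{3} \approx 6.9 \\
u(\$10^{30}/\mathrm{day}) &= \ln 10^{30} \approx 69.1 \\
u(2c) - u(c) &= \ln 2 \approx 0.69 \quad \text{at any consumption level } c
\end{align*}
```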
When people such as myself say "money won't matter post-AGI" the claim is NOT that the economy post-AGI won't involve money (though that might be true) but rather that the strategy of saving money in order to spend it after AGI is a bad strategy. Here are some reasons:
My main default prediction here is that we will avoid both the absolute best-case and the absolute worst-case scenarios, because I predict intent alignment works well enough to avoid extinction-of-humanity scenarios, but I also don't believe we will see radical movements toward equality (indeed, the politics of our era is moving towards greater acceptance of inequality), so capitalism more or less survives the transition to AGI.
I do think dynamism will still exist, but it will be largely limited to the upper classes/very rich of society, and most people will not be a part of it; I'm including uploaded humans in this calculation.
To address this:
Rationalist thought on post-AGI futures is too solutionist. The strawman version: solve morality, solve AI, figure out the optimal structure to tile the universe with, do that, done. (The actual leading figures have far less strawman views; see e.g. Paul Christiano at 23:30 here—but the on-the-ground culture does lean in the strawman direction.)
To be somewhat more fair, the worry here is that in a regime where you don't need society anymore, because AIs can do all the work for your society, value conflicts become a bigger deal than today: there is less reason to tolerate other people's values if you can just found your own society based on your own values. If you believe in the vulnerable world hypothesis, as a lot of rationalists do, then conflict has existential stakes (and even if not, it can be quite bad), so one group controlling the future is better than inevitable conflict.
At a foundational level, whether or not our current tolerance for differing values is stable ultimately comes down to whether we can compensate for the effect of AGI allowing people to make their own society.
Comment is also on substack:
https://nosetgauge.substack.com/p/capital-agi-and-human-ambition/comment/83401326
To be somewhat more fair, the worry here is that in a regime where you don't need society anymore, because AIs can do all the work for your society, value conflicts become a bigger deal than today: there is less reason to tolerate other people's values if you can just found your own society based on your own values. If you believe in the vulnerable world hypothesis, as a lot of rationalists do, then conflict has existential stakes (and even if not, it can be quite bad), so one group controlling the future is better than inevitable conflict.
So to summarise: if we have a multipolar world, and the vulnerable world hypothesis is true, then conflict can be existentially bad, and this is a reason to avoid a multipolar world. Didn't consider this, interesting point!
At a foundational level, whether or not our current tolerance for differing values is stable ultimately comes down to whether we can compensate for the effect of AGI allowing people to make their own society.
Considerations:
- offense/defense balance (if offense wins very hard, it's harder to let everyone do their own thing)
- tunability-of-AGI-power / implementability of the harm principle (if you can give everyone AGI that can follow very well the rule "don't let these people harm other people", then you can give that AGI safely to everyone and they can build planets however they like but not death ray anyone else's planets)
The latter might be more of a "singleton that allows playgrounds" rather than an actual multipolar world though.
Some of my general worries with singleton worlds are:
- humanity has all its eggs in one basket—you better hope the governance structure is never corrupted, or never becomes sclerotic; real-life institutions so far have not given me many signs of hope on this count
- cultural evolution is a pretty big part of how human societies seem to have improved and relies on a population of cultures / polities
- vague instincts towards diversity being good and less fragile than homogeneity or centralisation
Comment is also on substack:
Thanks!
So to summarise: if we have a multipolar world, and the vulnerable world hypothesis is true, then conflict can be existentially bad, and this is a reason to avoid a multipolar world. Didn't consider this, interesting point!
(I also commented on substack)
This applies, though more weakly, even in a non-vulnerable world, because the incentives for peaceful cooperation between different values are much weaker in an AGI world.
Considerations:
- offense/defense balance (if offense wins very hard, it's harder to let everyone do their own thing)
- tunability-of-AGI-power / implementability of the harm principle (if you can give everyone AGI that can follow very well the rule "don't let these people harm other people", then you can give that AGI safely to everyone and they can build planets however they like but not death ray anyone else's planets)
I do think this requires severely restraining open-source, but conditional on that happening, I think the offense-defense balance/tunability will sort of work out.
Some of my general worries with singleton worlds are:
- humanity has all its eggs in one basket—you better hope the governance structure is never corrupted, or never becomes sclerotic; real-life institutions so far have not given me many signs of hope on this count
- cultural evolution is a pretty big part of how human societies seem to have improved and relies on a population of cultures / polities
- vague instincts towards diversity being good and less fragile than homogeneity or centralisation
Yeah, I'm not a fan of singleton worlds, and tend towards multipolar worlds. It's just that it might involve a loss of a lot of life in the power-struggles around AGI.
On Governing the Commons: I'd say Elinor Ostrom's observations are derivable from the folk theorems of game theory, which basically say that (subject to a few conditions that depend on the particular theorem) almost any outcome can be sustained as a Nash equilibrium if the game is repeated and players have to keep dealing with each other.
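For reference, here is a standard statement of the Nash folk theorem for infinitely repeated games (a sketch in generic notation; the exact conditions vary by version):

```latex
\textbf{Folk theorem (sketch).} Let $G$ be a finite stage game with feasible payoff set $V$
and minmax value $\underline{v}_i$ for each player $i$. For any payoff profile $v \in V$ with
$v_i > \underline{v}_i$ for all $i$, there exists $\bar{\delta} < 1$ such that for every
discount factor $\delta \in (\bar{\delta}, 1)$, the infinitely repeated game has a Nash
equilibrium whose average discounted payoff is $v$.
```

The "players have to keep dealing with each other" part corresponds to the discount factor being close enough to 1, i.e. to future interactions carrying enough weight.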
The problem is that AGI weakens the incentives for players to deal with each other, so Elinor Ostrom's solutions are much less effective.
More here:
(indeed the politics of our era is moving towards greater acceptance of inequality)
How certain are you of this, and how much do you think it comes down to something like "to what extent can disempowered groups unionise against the elite"?
To be clear, by default I think AI will make unionising against the more powerful harder, but it might depend on the governance structure. Maybe if we are really careful, we can get something closer to "Direct Democracy", where individual preferences actually matter more!
I am focused here on short-term politics in the US, which would ordinarily matter less if it weren't likely that world-changing AI will be built in the US; but given that it might be, it becomes far more important than normal.
Excellent post, thank you. I appreciate your novel perspective on how AI might affect society.
I feel like a lot of LessWrong-style posts follow the theme of "AGI is created and then everyone dies" which is an important possibility but might lead to other possibilities being neglected.
Whereas this post explores a range of scenarios and describes a mainline scenario that seems like a straightforward extrapolation of trends we've seen unfolding over the past several decades.
You say: I'll use "capital" to refer to both the stock of capital goods and to the money that can pay for them.
It seems to me that this aggregates quite different things, at least if looking at the situation in terms of personal finance. Consider four people who hold the following investments, which, let's suppose, are currently of equal value:
- one holds money (fiat currency)
- one owns shares in a company running a nuclear power plant
- one owns shares in a nuts and bolts company
- one owns shares in a recruitment company
These are all "capital", but will, I think, fare rather differently in an AI future.
As always, there's no guarantee that the money will retain its value - that depends as usual on central bank actions - and I think it's especially likely that it loses its value in an AI future (crypto currencies as well). Why would an AI want to transfer resources to someone just because they have some fiat currency? Surely they have some better way of coordinating exchanges.
The nuclear power plant, in contrast, is directly powering the AIs, and should be quite valuable, since the AIs are valuable. This assumes, of course, that the company retains ownership. It's possible that it instead ends up belonging to whatever AI has the best military robots.
The nuts and bolts company may retain and even gain some value when AI dominates, if it is nimble in adapting, since the value of AI in making its operations more efficient will typically (in a market economy) be split between the AI company and the nuts and bolts company. (I assume that even AIs need nuts and bolts.)
The recruitment company is toast.
Important other types of capital, as the term is used here, include:
Capital is not just money!
Why would an AI want to transfer resources to someone just because they have some fiat currency?
Because humans and other AIs will accept fiat currency as an input and give you valuable things as an output.
Surely they have some better way of coordinating exchanges.
All the infra for fiat currency exists; I don't see why the AIs would need to reinvent that, unless they're hiding from human government oversight or breaking some capacity constraint in the financial system, in which case they can just use crypto instead.
It's possible that it instead ends up belonging to whatever AI has the best military robots.
Military robots are yet another type of capital! Note that if it were human soldiers, there would be much more human leverage in the situation, because at least some humans would need to agree to do the soldiering, and presumably would get benefits for doing so, and would use the power and leverage they accrue from doing so to push broadly human goals.
The recruitment company is toast.
Or then the recruitment company pivots to using human labour to improve AI, as actually happened with the hottest recent recruiting company! If AI is the best investment, then humans and AIs alike will spend their efforts on AI, and the economy will gradually cater more and more to AI needs over human needs. See Andrew Critch's post here, for example. Or my story here.
All the infra for fiat currency exists; I don't see why the AIs would need to reinvent that
Because using an existing medium of exchange (that's not based on the value of a real commodity) involves transferring real wealth to the current currency holders. Instead, they might, for example, start up a new bitcoin blockchain, and use their new bitcoin, rather than transfer wealth to present bitcoin holders.
Maybe they'd use gold, although the current value of gold is mostly due to its conventional monetary value (rather than its practical usefulness, though that is non-zero).
Upvoted, and I disagree. Some kinds of capital will maintain (or increase!) their value. Other kinds will become cheaper relative to each other. The big question is whether and how property rights to various capital elements remain stable.
It's not so much "will capital stop mattering", but "will the enforcement and definition of capital usage rights change radically".
This post collects my views on, and primary opposition to, AI and presents them in a very clear way. Thank you very much on that front. I think that this particular topic is well known in many circles, although perhaps not spoken of, and is the primary driver of heavy investment in AI.
I will add that capital-dominated societies, e.g. resource extraction economies, typically suffer poor quality of life and few human rights. This is a well-known phenomenon (the "resource curse") and might offer a good jumping-off point for presenting this argument to others.
[sorry, have only skimmed the post, but I feel compelled to comment.]
I feel like unless we make a lot of progress on some sort of "Science of Generalisation of Preferences", then for more abstract preferences (non-biological needs mostly fall into this), even if certain individuals have, on paper, much more power than others, at the end of the day they will likely rely on vastly superintelligent AI advisors to realise those preferences, and at that point I think it is the AI advisor that is _really_ in control.
I'm not super certain of this, like, the Catholic Church definitely could decide to build a bunch of churches on some planets (though what counts as a church, in the limit?), but if they also want more complicated things like "people" "worshipping" "God" in those churches, it seems to be more and more up to the interpretation of the AI Assistants building those worship-maximising communes.
I've heard many people say something like "money won't matter post-AGI". This has always struck me as odd, and as most likely completely incorrect.
First: labour means human mental and physical effort that produces something of value. Capital goods are things like factories, data centres, and software—things humans have built that are used in the production of goods and services. I'll use "capital" to refer to both the stock of capital goods and to the money that can pay for them. I'll say "money" when I want to exclude capital goods.
The key economic effect of AI is that it makes capital a more and more general substitute for labour. There's less need to pay humans for their time to perform work, because you can replace that with capital (e.g. data centres running software replaces a human doing mental labour).
I will walk through consequences of this, and end up concluding that labour-replacing AI means:
- the ability of money to buy results in the real world will dramatically go up;
- the ability to get and wield power in the real world without money will dramatically go down;
- enforced equality of wealth is unlikely.
Overall, this points to a neglected downside of transformative AI: that society might become permanently static, and that current power imbalances might be amplified and then turned immutable.
Given sufficiently strong AI, this is not a risk about insufficient material comfort. Governments could institute UBI with the AI-derived wealth. Even if e.g. only the United States captures AI wealth and the US government does nothing for the world, if you're willing to assume arbitrarily extreme wealth generation from AI, the wealth of the small percentage of wealthy Americans who care about causes outside the US might be enough to end material poverty (if 1% of American billionaire wealth was spent on wealth transfers to foreigners, it would take 16 doublings of American billionaire wealth as expressed in purchasing-power-for-human-needs—a roughly 70,000x increase—before they could afford to give $500k-equivalent to every person on Earth; in a singularity scenario where the economy's doubling time is months, this would not take long). Of course, if the AI explosion is less singularity-like, or if the dynamics during AI take-off actively disempower much of the world's population (a real possibility), even material comfort could be an issue.
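To unpack the "16 doublings / roughly 70,000x" arithmetic (a rough check using round numbers I'm assuming for illustration: about 8 billion people and on the order of $6 trillion of current American billionaire wealth):

```latex
\begin{align*}
\text{cost of transfers} &\approx \$5\times10^{5} \times 8\times10^{9}\ \text{people} = \$4\times10^{15} \\
\text{billionaire wealth needed (at 1\%)} &\approx \$4\times10^{15} / 0.01 = \$4\times10^{17} \\
\text{required growth factor} &\approx \$4\times10^{17} / \$6\times10^{12} \approx 7\times10^{4} \approx 2^{16}
\end{align*}
```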
What most emotionally moves me about these scenarios is that a static society with a locked-in ruling caste does not seem dynamic or alive to me. We should not kill human ambition, if we can help it.
There are also ways in which such a state makes slow-rolling, gradual AI catastrophes more likely, because the incentive for power to care about humans is reduced.
The default solution
Let's assume human mental and physical labour across the vast majority of tasks that humans are currently paid wages for no longer has non-trivial market value, because the tasks can be done better/faster/cheaper by AIs. Call this labour-replacing AI.
There are two levels of the standard solution to the resulting unemployment problem:
- a basic level: governments institute UBI (or something like it), funded by AI-derived wealth, so that no one starves;
- a more ambitious level: sufficiently strong AI creates such extreme abundance that everyone's material needs (and most non-positional selfish preferences) are easily satisfied.
Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that however sharp the singularity, abundance will still be finite, and must therefore be allocated.
Money currently struggles to buy talent
Money can buy you many things: capital goods, for example, can usually be bought quite straightforwardly, and cannot be bought without a lot of money (or other liquid assets, or non-liquid assets that others are willing to write contracts against, or special government powers). But it is surprisingly hard to convert raw money into labour, in a way that is competitive with top labour.
Consider Blue Origin versus SpaceX. Blue Origin was started two years earlier (2000 v 2002), had much better funding for most of its history, and even today employs almost as many people as SpaceX (11,000 v 13,000). Yet SpaceX has crushingly dominated Blue Origin. In 2000, Jeff Bezos had $4.7B at hand. But it is hard to see what he could've done to not lose out to the comparatively money-poor SpaceX with its intense culture and outlier talent.
Consider, a century earlier, the Wright brothers with their bike shop resources beating Samuel Langley's well-funded operation.
Consider the stereotypical VC-and-founder interaction, or the acquirer-and-startup interaction. In both cases, holders of massive financial capital are willing to pay very high prices to bet on labour—and the bet is that the labour of the few people in the startup will beat extremely large amounts of capital.
If you want to convert money into results, the deepest problem you are likely to face is hiring the right talent. And that comes with several problems:
- it is hard to judge talent, especially if you don't have much of the relevant talent yourself;
- top talent is scarce, so it is expensive and fiercely competed over;
- much of the top talent has complicated preferences and cannot simply be bought.
(Of course, those with money keep building infrastructure that makes it easier to convert money into results. I have seen first-hand the largely-successful quest by quant finance companies to strangle all existing ambition out of top UK STEM grads and replace it with the eking out of tiny gains in financial markets. Mammon must be served!)
With labour-replacing AI, these problems go away.
First, you might not be able to judge AI talent. Even the AI evals ecosystem might find it hard to properly judge AI talent—evals are hard. Maybe even the informal word-of-mouth mechanisms that correctly sung praises of Claude-3.5-Sonnet far more decisively than any benchmark might find it harder and harder to judge which AIs really are best as AI capabilities keep rising. But the real difference is that the AIs can be cloned. Currently, huge pools of money chase after a single star researcher who's made a breakthrough, and thus had their talent made legible to those who control money (who can judge the clout of the social reception to a paper but usually can't judge talent itself directly). But the star researcher that is an AI can just be cloned. Everyone—or at least, everyone with enough money to burn on GPUs—gets the AI star researcher. No need to sort through the huge variety of unique humans with their unproven talents and annoying inability to be instantly cloned. This is the main reason why it will be easier for money to find top talent once we have labour-replacing AIs.
Also, of course, the price of talent will go down massively, because the AIs will be cheaper than the equivalent human labour, and because competition will be fiercer since the AIs can be cloned.
The final big bottleneck for converting money into talent is that lots of top talent has complicated human preferences that make them hard to buy out. The top artist has an artistic vision they're genuinely attached to. The top mathematician has a deep love of elegance and beauty. The top entrepreneur has deep conviction in what they're doing—and probably wouldn't function well as an employee anyway. Talent and performance in humans are surprisingly tied to a sacred bond to a discipline or mission (a fact that the world's cynics / careerists / Roman Empires like to downplay, only to then find their lunch eaten by the ambitious interns / SpaceXes / Christianities of the world). In contrast, AIs exist specifically so that they can be trivially bought out (at least within the bounds of their safety training). The genius AI mathematician, unlike the human one, will happily spend its limited time on Earth proving the correctness of schlep code.
Finally (and obviously), the AIs will eventually be much more capable than any human employees at their tasks.
This means that the ability of money to buy results in the real world will dramatically go up once we have labour-replacing AI.
Most people's power/leverage derives from their labour
Labour-replacing AI also deprives almost everyone of their main lever of power and leverage. Most obviously, if you're the average Joe, you have money because someone somewhere pays you to spend your mental and/or physical efforts solving their problems.
But wait! We assumed that there's UBI! Problem solved, right?
Why are states ever nice?
UBI is granted by states that care about human welfare. There are many reasons why states care, or might care, about human welfare.
Over the past few centuries, there's been a big shift towards states caring more about humans. Why is this? We can examine the reasons to see how durable they seem:
1. Moral and cultural change: people, including those who staff states, have come to care more about human welfare.
2. Wealth: states are now rich enough that providing for human welfare is comparatively cheap.
3. Incentives: states need their people to be productive, educated, and bought-in in order to be competitive.
AI will help a lot with the 2nd point. It will have some complicated effect on the 1st. But here I want to dig a bit more into the 3rd, because I think this point is unappreciated.
Since the industrial revolution, the interests of states and people have been unusually aligned. To be economically competitive, a strong state needs efficient markets, a good education system that creates skilled workers, and a prosperous middle class that creates demand. It benefits from using talent regardless of its class origin. It also benefits from allowing high levels of freedom to foster science, technology, and the arts & media that result in global soft-power and cultural influence. Competition between states largely pushes further in all these directions—consider the success of the US, or how even the CCP is pushing for efficient markets and educated rich citizens, and faces incentives to allow some freedoms for the sake of Chinese science and startups. Contrast this to the feudal system, where the winning strategy was building an extractive upper class to rule over a population of illiterate peasants and spend a big share of extracted rents on winning wars against nearby states. For more, see my review of Foragers, Farmers, and Fossil Fuels, or my post on the connection between moral values and economic growth.
With labour-replacing AI, the incentives of states—in the sense of what actions states should take to maximise their competitiveness against other states and/or their own power—will no longer be aligned with humans in this way. The incentives might be better than during feudalism. During feudalism, the incentive was to extract as much as possible from the peasants without them dying. After labour-replacing AI, humans will be less a resource to be mined and more just irrelevant. However, spending fewer resources on humans and more on the AIs that sustain the state's competitive advantage will still be incentivised.
Humans will also have much less leverage over states. Today, if some important sector goes on strike, or if some segment of the military threatens a coup, the state has to care, because its power depends on the buy-in of at least some segments of the population. People can also credibly tell the state things like "invest in us and the country will be stronger in 10 years". But once AI can do all the labour that keeps the economy going and the military powerful, the state has no more de facto reason to care about the demands of its humans.
Adam Smith could write that his dinner doesn't depend on the benevolence of the butcher or the brewer or the baker. The classical liberal today can credibly claim that the arc of history really does bend towards freedom and plenty for all, not out of the benevolence of the state, but because of the incentives of capitalism and geopolitics. But after labour-replacing AI, this will no longer be true. If the arc of history keeps bending towards freedom and plenty, it will do so only out of the benevolence of the state (or the AI plutocrats). If so, we better lock in that benevolence while we have leverage—and have a good reason why we expect it to stand the test of time.
The best thing going in our favour is democracy. It's a huge advantage that a deep part of many of the modern world's strongest institutions (i.e. Western democracies) is equal representation of every person. However, only about 13% of the world's population lives in a liberal democracy, which creates concerns about the fate of the remaining 87% of the world's people (especially the 27% in closed autocracies). It also creates potential for Molochian competition between humanist states and less scrupulous states that might drive down the resources spent on human flourishing to zero over a sufficiently long timespan of competition.
I focus on states above, because states are the strongest and most durable institutions today. However, similar logic applies if, say, companies or some entirely new type of organisation become the most important type of institution.
No more outlier outcomes?
Much change in the world is driven by people who start from outside money and power, achieve outlier success, and then end up with money and/or power. This makes sense, since those with money and/or power rarely have the fervour to push for big changes: they are exactly those who are best served by the status quo.
Whatever your opinions on income inequality or any particular group of outlier successes, I hope you agree with me that the possibility of someone achieving outlier success and changing the world is important for avoiding stasis and generally having a world that is interesting to live in.
Let's consider the effects of labour-replacing AI on various routes to outlier success through labour.
Entrepreneurship is increasingly what Matt Clifford calls the "technology of ambition" of choice for ambitious young people (at least those with technical talent and without a disposition for politics). Right now, entrepreneurship is becoming easier: AI tools can already make small teams much more effective without needing to hire new employees, and they reduce the entry barrier to new skills and fields. However, labour-replacing AI makes the tenability of entrepreneurship uncertain. There is some narrow world in which AIs remain mostly tool-like and entrepreneurs can succeed long after most human labour is automated, because they provide agency and direction. However, it also seems likely that sufficiently strong AI will by default obsolete human entrepreneurship. For example, VC funds might be able to directly convert money into hundreds of startup attempts all run by AIs, without having to go through the intermediate route of finding human entrepreneurs to manage the AIs for them.
The hard sciences. The era of human achievement in hard sciences will probably end within a few years because of the rate of AI progress in anything with crisp reward signals.
Intellectuals. Keynes, Friedman, and Hayek all did technical work in economics, but their outsize influence came from the worldviews they developed and sold (especially in Hayek's case), which made them more influential than people like Paul Samuelson who dominated mathematical economics. John Stuart Mill, John Rawls, and Henry George were also influential by creating frames, worldviews, and philosophies. The key thing that separates such people from the hard scientists is that the outputs of their work are not spotlighted by technical correctness alone, but require moral judgement as well. Even if AI is superhumanly persuasive and correct, there's some uncertainty about how AI work in this genre will fit into the way that human culture picks and spreads ideas. Probably it doesn't look good for human intellectuals. I suspect that a lot of why intellectuals' ideologies can have so much power is that they're products of genius in a world where genius is rare. A flood of AI-created ideologies might mean that no individual ideology, and certainly no human one, can shine so bright anymore. The world-historic intellectual might go extinct.
Politics might be one of the least-affected options, since I'd guess that most humans specifically want a human to do that job, and because politicians get to set the rules for what's allowed. The charisma of AI-generated avatars, and a general dislike towards politicians at least in the West, might throw a curveball here, though. It's also hard to say whether incumbents will be favoured. AI might bring down the cost of many parts of political campaigning, reducing the resource barrier to entry. However, if AI that is too expensive for small actors is meaningfully better than cheaper AI, this would favour actors with larger resources. I expect these direct effects to be smaller than the indirect effects from whatever changes AI has on the memetic landscape.
Also, the real play is not to go into actual politics, where a million other politically-talented people are competing to become president or prime minister. Instead, have political skill and go somewhere outside government where political skill is less common (c.f. Sam Altman). Next, wait for the arrival of hyper-competent AI employees that reduce the demands for human subject-matter competence while increasing the rewards for winning political games within that organisation.
Military success as a direct route to great power and disruption has—for the better—not really been a thing since Napoleon. Advancing technology increases the minimum industrial base for a state-of-the-art army, which benefits incumbents. AI looks set to be controlled by the most powerful countries. One exception is if coups of large countries become easier with AI. Control over the future AI armies will likely be both (a) more centralised than before (since a large number of people no longer have to go along for the military to take an action), and (b) more tightly controllable than before (since the permissions can be implemented in code rather than human social norms). These two factors point in different directions so it's uncertain what the net effect on coup ease will be. Another possible exception is if a combination of revolutionary tactics and cheap drones enables a Napoleon-of-the-drones to win against existing armies. Importantly, though, neither of these seems likely to promote the good kind of disruptive challenge to the status quo.
Religions. When it comes to rising rank in existing religions, the above takes on politics might be relevant. When it comes to starting new religions, the above takes on intellectuals might be relevant.
So sufficiently strong labour-replacing AI will be on net bad for the chances of every type of outlier human success, with perhaps the weakest effects in politics. This is despite the very real boost that current AI gives to entrepreneurship.
All this means that the ability to get and wield power in the real world without money will dramatically go down once we have labour-replacing AI.
Enforced equality is unlikely
The Great Leveler is a good book on the history of inequality that (at least per the author) has survived its critiques fairly well. Its conclusion is that past large reductions in inequality have all been driven by one of the "Four Horsemen of Leveling": total war, violent revolution, state collapse, and pandemics. Leveling income differences has historically been hard enough to basically never happen through conscious political choice.
Imagine that labour-replacing AI is here. UBI is passed, so no one is starving. There's a massive scramble between countries and companies to make the best use of AI. This is all capital-intensive, so everyone needs to woo holders of capital. The top AI companies wield power on the level of states. The redistribution of wealth is unlikely to end up on top of the political agenda.
An exception might be if some new political movement or ideology gets a lot of support quickly, and is somehow boosted by some unprecedented effect of AI (such as: no one has jobs anymore so they can spend all their time on politics, or there's some new AI-powered coordination mechanism).
Therefore, even if the future is a glorious transhumanist utopia, it is unlikely that people will be starting in it on an equal footing. Due to the previous arguments, it is also unlikely that they will be able to greatly change their relative footing later on.
Consider also equality between states. Some states stand set to benefit massively more than others from AI. Many equalising measures, like UBI, would be difficult for states to extend to non-citizens under anything like the current political system. This is true even of the United States, the most liberal and humanist great power in world history. By default, the world order might therefore look (even more than today) like a global caste system based on country of birth, with even fewer possibilities for immigration (because the main incentive to allow immigration is its massive economic benefits, which only exist when humans perform economically meaningful work).
The default outcome?
Let's grant the assumptions at the start of this post and the above analysis. Then, the post-labour-replacing-AI world involves:
- money can buy results in the real world better than ever;
- people's labour gives them little leverage;
- achieving outlier success through one's own labour is mostly no longer possible;
- there has been no radical levelling of capital, either within or between countries.
This means that those with significant capital when labour-replacing AI started have a permanent advantage. They will wield more power than the rich of today—not necessarily over people, to the extent that liberal institutions remain strong, but at least over physical and intellectual achievements. Upstarts will not defeat them, since capital now trivially converts into superhuman labour in any field.
Also, there will be no more incentive for whatever institutions wield power in this world to care about people in order to maintain or grow their power, because all real power will flow from AI. There might, however, be significant lock-in of liberal humanist values through political institutions. There might also be significant lock-in of people's purchasing power, if everyone has meaningful UBI (or similar), and the economy retains a human-oriented part.
In the best case, this is a world like a more unequal, unprecedentedly static, and much richer Norway: a massive pot of non-human-labour resources (oil :: AI) has benefits that flow through to everyone, and yes some are richer than others but everyone has a great standard of living (and ideally also lives forever). The only realistic forms of human ambition are playing local social and political games within your social network and class. If you don't have a lot of capital (and maybe not even then), you don't have a chance of affecting the broader world anymore. Remember: the AIs are better poets, artists, philosophers—everything; why would anyone care what some human does, unless that human is someone they personally know? Much like in feudal societies the answer to "why is this person powerful?" would usually involve some long family history, perhaps ending in a distant ancestor who had fought in an important battle ("my great-great-grandfather fought at Bosworth Field!"), anyone of importance in the future will be important because of something they or someone they were close with did in the pre-AGI era ("oh, my uncle was technical staff at OpenAI"). The children of the future will live their lives in the shadow of their parents, with social mobility extinct. I think you should definitely feel a non-zero amount of existential horror at this, even while acknowledging that it could've gone a lot worse.
In a worse case, AI trillionaires have near-unlimited and unchecked power, and there's a permanent aristocracy that was locked in based on how much capital they had at the time of labour-replacing AI. The power disparities between classes might make modern people shiver, much like modern people consider feudal status hierarchies grotesque. But don't worry—much like the feudal underclass mostly accepted their world order due to their culture even without superhumanly persuasive AIs around, the future underclass will too.
In the absolute worst case, humanity goes extinct, potentially because of a slow-rolling optimisation for AI power over human prosperity over a long period of time. Because that's what the power and money incentives will point towards.
What's the takeaway?
If you read this post and accept a job at a quant finance company as a result, I will be sad. If you were about to do something ambitious and impactful about AI, and read this post and accept a job at Anthropic to accumulate risk-free personal capital while counterfactually helping out a bit over the marginal hire, I can't fault you too much, but I will still be slightly sad.
It's of course true that the above increases the stakes of medium-term (~2-10 year) personal finance, and you should consider this. But it's also true that right now is a great time to do something ambitious. Robin Hanson calls the present "the dreamtime", following a concept in Aboriginal myths: the time when the future world order and its values are still liquid, not yet set in stone.
Previous upheavals—the various waves of industrialisation, the internet, etc.—were great for human ambition. With AI, we could have the last and greatest opportunity for human ambition—followed shortly by its extinction for all time. How can your reaction not be: "carpe diem"?
We should also try to preserve the world's dynamism.
Rationalist thought on post-AGI futures is too solutionist. The strawman version: solve morality, solve AI, figure out the optimal structure to tile the universe with, do that, done. (The actual leading figures have far less strawman views; see e.g. Paul Christiano at 23:30 here—but the on-the-ground culture does lean in the strawman direction.)
I think it's much healthier for society and its development to be a shifting, dynamic thing where the ability, as an individual, to add to it or change it remains in place. And that means keeping the potential for successful ambition—and the resulting disruption—alive.
How do we do this? I don't know. But I don't think you should see the approach of powerful AI as a blank inexorable wall of human obsolescence, consuming everything equally and utterly. There will be cracks in the wall, at least for a while, and they will look much bigger up close once we get there—or if you care to look for them hard enough from further out—than from a galactic perspective. As AIs get closer and closer to a Pareto improvement over all human performance, though, I expect we'll eventually need to augment ourselves to keep up.