Edited to add: The main takeaway of this post is meant to be: Labour-replacing AI will shift the relative importance of human v non-human factors of production, which reduces the incentives for society to care about humans while making existing powers more effective and entrenched. Many people are reading this post in a way where either (a) "capital" means just "money" (rather than also including physical capital like factories and data centres), or (b) the main concern is human-human inequality (rather than broader societal concerns about humanity's collective position, the potential for social change, and human agency).

I've heard many people say something like "money won't matter post-AGI". This has always struck me as odd, and as most likely completely incorrect.

First: labour means human mental and physical effort that produces something of value. Capital goods are things like factories, data centres, and software—things humans have built that are used in the production of goods and services. I'll use "capital" to refer to both the stock of capital goods and to the money that can pay for them. I'll say "money" when I want to exclude capital goods.

The key economic effect of AI is that it makes capital a more and more general substitute for labour. There's less need to pay humans for their time to perform work, because you can replace that with capital (e.g. data centres running software replaces a human doing mental labour).

I will walk through consequences of this, and end up concluding that labour-replacing AI means:

  1. The ability to buy results in the real world will dramatically go up
  2. Human ability to wield power in the real world will dramatically go down (at least without money), in particular because:
    1. there will be no more incentive for states, companies, or other institutions to care about humans
    2. it will be harder for humans to achieve outlier outcomes relative to their starting resources
  3. Radical equalising measures are unlikely

Overall, this points to a neglected downside of transformative AI: that society might become permanently static, and that current power imbalances might be amplified and then turned immutable.

Given sufficiently strong AI, this is not a risk about insufficient material comfort. Governments could institute UBI with the AI-derived wealth. Even if e.g. only the United States captures AI wealth and the US government does nothing for the world, if you're willing to assume arbitrarily extreme wealth generation from AI, the wealth of the small percentage of wealthy Americans who care about causes outside the US might be enough to end material poverty (if 1% of American billionaire wealth were spent on wealth transfers to foreigners, it would take about 16 doublings of American billionaire wealth, as measured in purchasing power for human needs—a roughly 70,000x increase—before that 1% could fund a $500k-equivalent gift to every person on Earth; in a singularity scenario where the economy's doubling time is months, this would not take long). Of course, if the AI explosion is less singularity-like, or if the dynamics during AI take-off actively disempower much of the world's population (a real possibility), even material comfort could be an issue.
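
To make the arithmetic in that parenthetical concrete, here is a minimal back-of-the-envelope sketch in Python. The ~$5.5 trillion figure for current American billionaire wealth and the 8 billion world population are rough assumptions of mine, not figures from the post; the point is just that the "16 doublings / roughly 70,000x" numbers fall out of simple arithmetic.

```python
# Back-of-the-envelope check of the "16 doublings" claim.
# Assumptions (not from the post): ~8 billion people, ~$5.5 trillion of
# current American billionaire wealth, 1% of it given to foreigners.
import math

world_population = 8e9
target_per_person = 500_000          # $500k-equivalent for every person on Earth
current_billionaire_wealth = 5.5e12  # assumed, in purchasing-power-for-human-needs terms
donated_fraction = 0.01

total_needed = world_population * target_per_person        # ~$4e15
wealth_needed = total_needed / donated_fraction            # ~$4e17
growth_factor = wealth_needed / current_billionaire_wealth
doublings = math.log2(growth_factor)

print(f"growth factor: ~{growth_factor:,.0f}x")   # ~72,700x, i.e. roughly 70,000x
print(f"doublings:     ~{doublings:.1f}")         # ~16
```

At a doubling time measured in months, ~16 doublings is only a few years of growth, which is the point being made above.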

What most emotionally moves me about these scenarios is that a static society with a locked-in ruling caste does not seem dynamic or alive to me. We should not kill human ambition, if we can help it.

There are also ways in which such a state makes slow-rolling, gradual AI catastrophes more likely, because the incentive for power to care about humans is reduced.

The default solution

Let's assume human mental and physical labour across the vast majority of tasks that humans are currently paid wages for no longer has non-trivial market value, because the tasks can be done better/faster/cheaper by AIs. Call this labour-replacing AI.

There are two levels of the standard solution to the resulting unemployment problem:

  1. Governments will adopt something like universal basic income (UBI).
  2. We will quickly hit superintelligence, and, assuming the superintelligence is aligned, live in a post-scarcity technological wonderland where everything is possible.

Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that however sharp the singularity, abundance will still be finite, and must therefore be allocated.

Money currently struggles to buy talent

Money can buy you many things: capital goods, for example, can usually be bought quite straightforwardly, and cannot be bought without a lot of money (or other liquid assets, or non-liquid assets that others are willing to write contracts against, or special government powers). But it is surprisingly hard to convert raw money into labour, in a way that is competitive with top labour.

Consider Blue Origin versus SpaceX. Blue Origin was started two years earlier (2000 v 2002), had much better funding for most of its history, and even today employs almost as many people as SpaceX (11,000 v 13,000). Yet SpaceX has crushingly dominated Blue Origin. In 2000, Jeff Bezos had $4.7B at hand. But it is hard to see what he could've done to not lose out to the comparatively money-poor SpaceX with its intense culture and outlier talent.

Consider, a century earlier, the Wright brothers with their bike shop resources beating Samuel Langley's well-funded operation.

Consider the stereotypical VC-and-founder interaction, or the acquirer-and-startup interaction. In both cases, holders of massive financial capital are willing to pay very high prices to bet on labour—and the bet is that the labour of the few people in the startup will beat extremely large amounts of capital.

If you want to convert money into results, the deepest problem you are likely to face is hiring the right talent. And that comes with several problems:

  1. It's often hard to judge talent, unless you yourself have considerable talent in the same domain. Therefore, if you try to find talent, you will often miss.
  2. Talent is rare (and credentialed talent even more so—and many actors can't afford to rely on any other kind, because of point 1), so there's just not very much of it going around.
  3. Even if you can locate the top talent, the top talent tends to be less amenable to being bought out by money than others.

(Of course, those with money keep building infrastructure that makes it easier to convert money into results. I have seen first-hand the largely-successful quest by quant finance companies to strangle all existing ambition out of top UK STEM grads and replace it with the eking of tiny gains in financial markets. Mammon must be served!)

With labour-replacing AI, these problems go away.

First, you might not be able to judge AI talent. Even the AI evals ecosystem might find it hard to properly judge AI talent—evals are hard. Maybe even the informal word-of-mouth mechanisms that correctly sang the praises of Claude-3.5-Sonnet far more decisively than any benchmark might find it harder and harder to judge which AIs really are best as AI capabilities keep rising. But the real difference is that the AIs can be cloned. Currently, huge pools of money chase after a single star researcher who's made a breakthrough, and thus had their talent made legible to those who control money (who can judge the clout of the social reception to a paper but usually can't judge talent itself directly). But the star researcher that is an AI can just be cloned. Everyone—or at least, everyone with enough money to burn on GPUs—gets the AI star researcher. No need to sort through the huge variety of unique humans with their unproven talents and annoying inability to be instantly cloned. This is the main reason why it will be easier for money to find top talent once we have labour-replacing AIs.

Also, of course, the price of talent will go down massively, because the AIs will be cheaper than the equivalent human labour, and because competition will be fiercer, since the AIs can be cloned.

The final big bottleneck for converting money into talent is that lots of top talent has complicated human preferences that make them hard to buy out. The top artist has an artistic vision they're genuinely attached to. The top mathematician has a deep love of elegance and beauty. The top entrepreneur has deep conviction in what they're doing—and probably wouldn't function well as an employee anyway. Talent and performance in humans are surprisingly tied to a sacred bond to a discipline or mission (a fact that the world's cynics / careerists / Roman Empires like to downplay, only to then find their lunch eaten by the ambitious interns / SpaceXes / Christianities of the world). In contrast, AIs exist specifically so that they can be trivially bought out (at least within the bounds of their safety training). The genius AI mathematician, unlike the human one, will happily spend its limited time on Earth proving the correctness of schlep code.

Finally (and obviously), the AIs will eventually be much more capable than any human employees at their tasks.

This means that the ability of money to buy results in the real world will dramatically go up once we have labour-replacing AI.

Most people's power/leverage derives from their labour

Labour-replacing AI also deprives almost everyone of their main lever of power and leverage. Most obviously, if you're the average Joe, you have money because someone somewhere pays you to spend your mental and/or physical efforts solving their problems.

But wait! We assumed that there's UBI! Problem solved, right?

Why are states ever nice?

UBI is granted by states that care about human welfare. There are many reasons why states currently care, and might continue to care, about human welfare.

Over the past few centuries, there's been a big shift towards states caring more about humans. Why is this? We can examine the reasons to see how durable they seem:

  1. Moral changes downstream of the Enlightenment, in particular an increased centering of liberalism and individualism.
  2. Affluence & technology. Pre-industrial societies were mostly so poor that significant efforts to help the poor would've bankrupted them. Many types of help (such as effective medical care) are also only possible because of new technology.
  3. Incentives for states to care about freedom, prosperity, and education.

AI will help a lot with the 2nd point. It will have some complicated effect on the 1st. But here I want to dig a bit more into the 3rd, because I think this point is unappreciated.

Since the industrial revolution, the interests of states and people have been unusually aligned. To be economically competitive, a strong state needs efficient markets, a good education system that creates skilled workers, and a prosperous middle class that creates demand. It benefits from using talent regardless of its class origin. It also benefits from allowing high levels of freedom to foster science, technology, and the arts & media that result in global soft-power and cultural influence. Competition between states largely pushes further in all these directions—consider the success of the US, or how even the CCP is pushing for efficient markets and educated rich citizens, and faces incentives to allow some freedoms for the sake of Chinese science and startups. Contrast this to the feudal system, where the winning strategy was building an extractive upper class to rule over a population of illiterate peasants and spend a big share of extracted rents on winning wars against nearby states. For more, see my review of Foragers, Farmers, and Fossil Fuels, or my post on the connection between moral values and economic growth.

With labour-replacing AI, the incentives of states—in the sense of what actions states should take to maximise their competitiveness against other states and/or their own power—will no longer be aligned with humans in this way. The incentives might be better than during feudalism. During feudalism, the incentive was to extract as much as possible from the peasants without them dying. After labour-replacing AI, humans will be less a resource to be mined and more just irrelevant. However, spending fewer resources on humans and more on the AIs that sustain the state's competitive advantage will still be incentivised.

Humans will also have much less leverage over states. Today, if some important sector goes on strike, or if some segment of the military threatens a coup, the state has to care, because its power depends on the buy-in of at least some segments of the population. People can also credibly tell the state things like "invest in us and the country will be stronger in 10 years". But once AI can do all the labour that keeps the economy going and the military powerful, the state has no more de facto reason to care about the demands of its humans.

Adam Smith could write that his dinner doesn't depend on the benevolence of the butcher or the brewer or the baker. The classical liberal today can credibly claim that the arc of history really does bend towards freedom and plenty for all, not out of the benevolence of the state, but because of the incentives of capitalism and geopolitics. But after labour-replacing AI, this will no longer be true. If the arc of history keeps bending towards freedom and plenty, it will do so only out of the benevolence of the state (or the AI plutocrats). If so, we better lock in that benevolence while we have leverage—and have a good reason why we expect it to stand the test of time.

The best thing going in our favour is democracy. It's a huge advantage that a deep part of many of the modern world's strongest institutions (i.e. Western democracies) is equal representation of every person. However, only about 13% of the world's population lives in a liberal democracy, which creates concerns about the fate of the remaining 87% of the world's people (especially the 27% in closed autocracies). It also creates the potential for Molochian competition between humanist states and less scrupulous states that might drive the resources spent on human flourishing down to zero over a sufficiently long timespan of competition.

I focus on states above, because states are the strongest and most durable institutions today. However, similar logic applies if, say, companies or some entirely new type of organisation become the most important type of institution.

No more outlier outcomes?

Much change in the world is driven by people who start from outside money and power, achieve outlier success, and then end up with money and/or power. This makes sense: those with money and/or power rarely have the fervour to push for big changes, because they are exactly the ones best served by the status quo.

Whatever your opinions on income inequality or any particular group of outlier successes, I hope you agree with me that the possibility of someone achieving outlier success and changing the world is important for avoiding stasis and generally having a world that is interesting to live in.

Let's consider the effects of labour-replacing AI on various routes to outlier success through labour.

Entrepreneurship is increasingly what Matt Clifford calls the "technology of ambition" of choice for ambitious young people (at least those with technical talent and without a disposition for politics). Right now, entrepreneurship has become easier. AI tools can already make small teams much more effective without needing to hire new employees. They also reduce the entry barrier to new skills and fields. However, labour-replacing AI makes the tenability of entrepreneurship uncertain. There is some narrow world in which AIs remain mostly tool-like and entrepreneurs can succeed long after most human labour is automated because they provide agency and direction. However, it also seems likely that sufficiently strong AI will by default obsolete human entrepreneurship. For example, VC funds might be able to directly convert money into hundreds of startup attempts all run by AIs, without having to go through the intermediate route of finding human entrepreneurs to manage the AIs for them.

The hard sciences. The era of human achievement in hard sciences will probably end within a few years because of the rate of AI progress in anything with crisp reward signals.

Intellectuals. Keynes, Friedman, and Hayek all did technical work in economics, but their outsize influence came from the worldviews they developed and sold (especially in Hayek's case), which made them more influential than people like Paul Samuelson who dominated mathematical economics. John Stuart Mill, John Rawls, and Henry George were also influential by creating frames, worldviews, and philosophies. The key thing that separates such people from the hard scientists is that the outputs of their work are not spotlighted by technical correctness alone, but require moral judgement as well. Even if AI is superhumanly persuasive and correct, there's some uncertainty about how AI work in this genre will fit into the way that human culture picks and spreads ideas. Probably it doesn't look good for human intellectuals. I suspect that a lot of why intellectuals' ideologies can have so much power is that they're products of genius in a world where genius is rare. A flood of AI-created ideologies might mean that no individual ideology, and certainly no human one, can shine so bright anymore. The world-historic intellectual might go extinct.

Politics might be one of the least-affected options, since I'd guess that most humans specifically want a human to do that job, and because politicians get to set the rules for what's allowed. The charisma of AI-generated avatars, and a general dislike towards politicians, at least in the West, might throw a curveball here, though. It's also hard to say whether incumbents will be favoured. AI might bring down the cost of many parts of political campaigning, reducing the resource barrier to entry. However, if AI that is too expensive for small actors is meaningfully better than cheaper AI, this would favour actors with larger resources. I expect these direct effects to be smaller than the indirect effects from whatever changes AI has on the memetic landscape.

Also, the real play is not to go into actual politics, where a million other politically-talented people are competing to become president or prime minister. Instead, have political skill and go somewhere outside government where political skill is less common (c.f. Sam Altman). Next, wait for the arrival of hyper-competent AI employees that reduce the demands for human subject-matter competence while increasing the rewards for winning political games within that organisation.

Military success as a direct route to great power and disruption has—for the better—not really been a thing since Napoleon. Advancing technology increases the minimum industrial base for a state-of-the-art army, which benefits incumbents. AI looks set to be controlled by the most powerful countries. One exception is if coups of large countries become easier with AI. Control over the future AI armies will likely be both (a) more centralised than before (since a large number of people no longer have to go along for the military to take an action), and (b) more tightly controllable than before (since the permissions can be implemented in code rather than human social norms). These two factors point in different directions so it's uncertain what the net effect on coup ease will be. Another possible exception is if a combination of revolutionary tactics and cheap drones enables a Napoleon-of-the-drones to win against existing armies. Importantly, though, neither of these seems likely to promote the good kind of disruptive challenge to the status quo.

Religions. When it comes to rising rank in existing religions, the above takes on politics might be relevant. When it comes to starting new religions, the above takes on intellectuals might be relevant.

So sufficiently strong labour-replacing AI will be on-net bad for the chances of every type of outlier human success, with perhaps the weakest effects in politics. This is despite the very real boost that current AI gives to entrepreneurship.

All this means that the ability to get and wield power in the real world without money will dramatically go down once we have labour-replacing AI.

Enforced equality is unlikely

The Great Leveler is a good book on the history of inequality that (at least per the author) has survived its critiques fairly well. Its conclusion is that past large reductions in inequality have all been driven by one of the "Four Horsemen of Leveling": total war, violent revolution, state collapse, and pandemics. Leveling income differences has historically been hard enough to basically never happen through conscious political choice.

Imagine that labour-replacing AI is here. UBI is passed, so no one is starving. There's a massive scramble between countries and companies to make the best use of AI. This is all capital-intensive, so everyone needs to woo holders of capital. The top AI companies wield power on the level of states. The redistribution of wealth is unlikely to end up on top of the political agenda.

An exception might be if some new political movement or ideology gets a lot of support quickly, and is somehow boosted by some unprecedented effect of AI (such as: no one has jobs anymore so they can spend all their time on politics, or there's some new AI-powered coordination mechanism).

Therefore, even if the future is a glorious transhumanist utopia, it is unlikely that people will start in it on an equal footing. Due to the previous arguments, it is also unlikely that they will be able to greatly change their relative footing later on.

Consider also equality between states. Some states stand set to benefit massively more than others from AI. Many equalising measures, like UBI, would be difficult for states to extend to non-citizens under anything like the current political system. This is true even of the United States, the most liberal and humanist great power in world history. By default, the world order might therefore look (even more than today) like a global caste system based on country of birth, with even fewer possibilities for immigration (because the main incentive to allow immigration is its massive economic benefits, which only exist when humans perform economically meaningful work).

The default outcome?

Let's grant the assumptions at the start of this post and the above analysis. Then, the post-labour-replacing-AI world involves:

  • Money will be able to buy results in the real world better than ever.
  • People's labour gives them less leverage than ever before.
  • Achieving outlier success through your labour in most or all areas is now impossible.
  • There was no transformative leveling of capital, either within or between countries.

This means that those with significant capital when labour-replacing AI started have a permanent advantage. They will wield more power than the rich of today—not necessarily over people, to the extent that liberal institutions remain strong, but at least over physical and intellectual achievements. Upstarts will not defeat them, since capital now trivially converts into superhuman labour in any field.

Also, there will be no more incentive for whatever institutions wield power in this world to care about people in order to maintain or grow their power, because all real power will flow from AI. There might, however, be significant lock-in of liberal humanist values through political institutions. There might also be significant lock-in of people's purchasing power, if everyone has meaningful UBI (or similar), and the economy retains a human-oriented part.

In the best case, this is a world like a more unequal, unprecedentedly static, and much richer Norway: a massive pot of non-human-labour resources (oil :: AI) has benefits that flow through to everyone, and yes some are richer than others but everyone has a great standard of living (and ideally also lives forever). The only realistic forms of human ambition are playing local social and political games within your social network and class. If you don't have a lot of capital (and maybe not even then), you don't have a chance of affecting the broader world anymore. Remember: the AIs are better poets, artists, philosophers—everything; why would anyone care what some human does, unless that human is someone they personally know? Much like in feudal societies the answer to "why is this person powerful?" would usually involve some long family history, perhaps ending in a distant ancestor who had fought in an important battle ("my great-great-grandfather fought at Bosworth Field!"), anyone of importance in the future will be important because of something they or someone they were close with did in the pre-AGI era ("oh, my uncle was technical staff at OpenAI"). The children of the future will live their lives in the shadow of their parents, with social mobility extinct. I think you should definitely feel a non-zero amount of existential horror at this, even while acknowledging that it could've gone a lot worse.

In a worse case, AI trillionaires have near-unlimited and unchecked power, and there's a permanent aristocracy that was locked in based on how much capital they had at the time of labour-replacing AI. The power disparities between classes might make modern people shiver, much like modern people consider feudal status hierarchies grotesque. But don't worry—much like the feudal underclass mostly accepted their world order due to their culture even without superhumanly persuasive AIs around, the future underclass will too.

In the absolute worst case, humanity goes extinct, potentially because of a slow-rolling optimisation for AI power over human prosperity over a long period of time. Because that's what the power and money incentives will point towards.

What's the takeaway?

If you read this post and accept a job at a quant finance company as a result, I will be sad. If you were about to do something ambitious and impactful about AI, and read this post and accept a job at Anthropic to accumulate risk-free personal capital while counterfactually helping out a bit over the marginal hire, I can't fault you too much, but I will still be slightly sad.

It's of course true that the above increases the stakes of medium-term (~2-10 year) personal finance, and you should consider this. But it's also true that right now is a great time to do something ambitious. Robin Hanson calls the present "the dreamtime", following a concept in Aboriginal myths: the time when the future world order and its values are still liquid, not yet set in stone.

Previous upheavals—the various waves of industrialisation, the internet, etc.—were great for human ambition. With AI, we could have the last and greatest opportunity for human ambition—followed shortly by its extinction for all time. How can your reaction not be: "carpe diem"?

We should also try to preserve the world's dynamism.

Rationalist thought on post-AGI futures is too solutionist. The strawman version: solve morality, solve AI, figure out the optimal structure to tile the universe with, do that, done. (The actual leading figures have far less strawman views; see e.g. Paul Christiano at 23:30 here—but the on-the-ground culture does lean in the strawman direction.)

I think it's much healthier for society and its development to be a shifting, dynamic thing where the ability, as an individual, to add to it or change it remains in place. And that means keeping the potential for successful ambition—and the resulting disruption—alive.

How do we do this? I don't know. But I don't think you should see the approach of powerful AI as a blank inexorable wall of human obsolescence, consuming everything equally and utterly. There will be cracks in the wall, at least for a while, and they will look much bigger up close once we get there—or if you care to look for them hard enough from further out—than from a galactic perspective. As AIs get closer and closer to a Pareto improvement over all human performance, though, I expect we'll eventually need to augment ourselves to keep up.

Comments

When people such as myself say "money won't matter post-AGI" the claim is NOT that the economy post-AGI won't involve money (though that might be true) but rather that the strategy of saving money in order to spend it after AGI is a bad strategy. Here are some reasons:

  1. The post-AGI economy might not involve money, it might be more of a command economy.
  2. Even if it involves money, the relationship between how much money someone has before and how much money they have after might not be anywhere close to 1:1. For example:
    1. Maybe the humans will lose control of the AGIs
    2. Maybe the humans who control the AGIs will put values into the AGIs, such that the resulting world redistributes the money, so to speak. E.g. maybe they'll tax and redistribute to create a more equal society -- OR (and you talk about this, but don't go far enough!) maybe they'll make a less equal society, one in which 'how much money you saved' doesn't translate into how much money you have in the new world, and instead e.g. being in the good graces of the leadership of the AGI project, as judged by their omnipresent AGI servants that infuse the economy and talk to everyone, is what matters.
    3. Maybe there'll be a war or somethi
... (read more)

I agree (1) and (2) are possibilities. However, from a personal planning pov, you should focus on preparing for scenarios (i) that might last a long time and (ii) where you can affect what happens, since that's where the stakes are.

Scenarios where we all die soon can mostly be ignored, unless you think they make up most of the probability. (Edit: to be clear, it does reduce the value of saving vs. spending; I just don't think it's a big effect unless the probabilities are high.)

I think (3) is the key way to push back. 

I feel unsure that all my preferences are either (i) local and easily satisfied or (ii) impartial & altruistic. You only need one type of preference with, say, log returns to money that can be better satisfied post-AGI to make capital post-AGI valuable to you (emulations, maybe).

But let's focus on the altruistic case – I'm very interested in the question of how valuable capital will be altruistically post-AGI.

I think your argument about relative neglectedness makes sense, but is maybe too strong.

There's about $500 trillion of world wealth, so if you have $1m now, that's 2e-9 of world wealth. Through good investing during the transition, it seems like you can increase your s... (read more)

wassname
I would say: unless you can change the probability. These can still be significant in your decision making, if you can invest time or money or effort to decrease the probability.

I think I agree with all of this.

(Except maybe I'd emphasise the command economy possibility slightly less. And compared to what I understand of your ranking, I'd rank competition between different AGIs/AGI-using factions as a relatively more important factor in determining what happens, and values put into AGIs as a relatively less important factor. I think these are both downstream of you expecting slightly-to-somewhat more singleton-like scenarios than I do?)

EDIT: see here for more detail on my take on Daniel's takes.

Overall, I'd emphasize as the main point in my post: AI-caused shifts in the incentives/leverage of human v non-human factors of production, and this mattering because the interests of power will become less aligned with humans while simultaneously power becomes more entrenched and effective. I'm not really interested in whether someone should save or not for AGI. I think starting off with "money won't matter post-AGI" was probably a confusing and misleading move on my part.

Daniel Kokotajlo
OK, cool, thanks for clarifying. Seems we were talking past each other then, if you weren't trying to defend the strategy of saving money to spend after AGI. Cheers!
Jacob Pfau
I see the command economy point as downstream of a broader trend: as technology accelerates, negative public externalities will increasingly scale and present irreversible threats (x-risks, but also more mundane pollution, errant bio-engineering plague risks etc.). If we condition on our continued existence, there must've been some solution to this which would look like either greater government intervention (command economy) or a radical upgrade to the coordination mechanisms in our capitalist system. Relevant to your power entrenchment claim: both of these outcomes involve the curtailment of power exerted by private individuals with large piles of capital. (Note there are certainly other possible reasons to expect a command economy, and I do not know which reasons were particularly compelling to Daniel)
L Rudolf L
This seems very reasonable and likely correct (though not obvious) to me. I especially like your point about there being lots of competition in the "save it" strategy because it happens by default. Also note that my post explicitly encourages individuals to do ambitious things pre-AGI, rather than focus on safe capital accumulation.
lc
#1 and #2 are serious concerns, but there's not really much I can do about them anyways. #3 doesn't make any sense to me. Right, and that seems like OP's point? Because I can do this, I shouldn't spend money on consumption goods today and in fact should gather as much money as I can now? Certainly massive stellar objects post-AGI will be more useful to me than a house is pre-AGI? As to this: I guess I just don't really believe I have much control over that at all. Further, I can specifically invest in things likely to be important parts of the AGI production function, like semiconductors, etc.
Daniel Kokotajlo
On the contrary, massive stellar objects post-AGI will be less useful to you than a house is today, as far as your selfish personal preferences are concerned. Consider the difference in your quality of life living in a nice house vs. skimping and saving 50% and living in a cheap apartment so you can save money. Next, consider the difference in your quality of life owning your own planet (replete with superintelligent servants) vs. owning merely half a planet. What can you do with a whole planet that you can't do with half a planet? Not that much. Re: 1 and 2: Whether you can do something about them matters but doesn't undermine my argument. You should still discount the value of your savings by their probability. However little control you have over influencing AGI development, you'll have orders of magnitude less control over influencing the cosmos / society / etc. after AGI.
lc

On the contrary, massive stellar objects post-AGI will be less useful to you than a house is today, as far as your selfish personal preferences are concerned. Consider the difference in your quality of life living in a nice house vs. skimping and saving 50% and living in a cheap apartment so you can save money. Next, consider the difference in your quality of life owning your own planet (replete with superintelligent servants) vs. owning merely half a planet. What can you do with a whole planet that you can't do with half a planet? Not that much.

It matters if it means I can live twice as long, because I can purchase more negentropy with which to maintain whatever lifestyle I have.

Daniel Kokotajlo
Good point. If your utility is linear or close to linear in lifespan even at very large scales, and lifespan is based on how much money you have rather than e.g. a right guaranteed by the government, then a planetworth could be almost twice as valuable as half a planetworth.
Daniel Kokotajlo
(My selfish utility is not close to linear in lifespan at very large scales, I think.)
quila
I am confused by the existence of this discourse. Do its participants not believe strong superintelligence is possible? (edit: I misinterpreted Daniel's comment, I thought this quote indicated they thought it was non-trivially likely, instead of just being reasoning through an 'even if' scenario / scenario relevant in OP's model)
Daniel Kokotajlo
Can you elaborate, I'm not sure what you are asking. I believe strong superintelligence is possible.
quila
Why would strong superintelligence coexist with an economy? Wouldn't an aligned (or unaligned) superintelligence antiquate it all?
L Rudolf L
Though yes, I agree that a superintelligent singleton controlling a command economy means this breaks down. However it seems far from clear we will end up exactly there. The finiteness of the future lightcone and the resulting necessity of allocating "scarce" resources, the usefulness of a single medium of exchange (which you can see as motivated by coherence theorems if you want), and trade between different entities all seem like very general concepts. So even in futures that are otherwise very alien, but just not in the exact "singleton-run command economy" direction, I expect a high chance that those concepts matter.
quila
I am still confused. Maybe the crux is that you are not expecting superintelligence?[1] This quote seems to indicate that: "However it seems far from clear we will end up exactly there". Also, your post writes about "labor-replacing AGI" but writes as if the world it might cause near-term lasts eternally ("anyone of importance in the future will be important because of something they or someone they were close with did in the pre-AGI era ('oh, my uncle was technical staff at OpenAI'). The children of the future will live their lives in the shadow of their parents").

If not, my response: I don't see why strongly-superintelligent optimization would benefit from an economy of any kind. Given superintelligence, I don't see how there would still be different entities doing actual (as opposed to just-for-fun / fantasy-like) dynamic (as opposed to acausal) trade with each other, because the first superintelligent agent would have control over the whole lightcone. If trade currently captures information (including about the preferences of those engaged in it), it is regardless unlikely to be the best way to gain this information, if you are a superintelligence.[2]

[1] (Regardless of whether the first superintelligence is an agent, a superintelligent agent is probably created soon after)

[2] I could list better ways of gaining this information given superintelligence, if this claim is not obvious.
L Rudolf L
If takeoff is more continuous than hard, why is it so obvious that there exists exactly one superintelligence rather than multiple? Or are you assuming hard takeoff? If things go well, human individuals continue existing (and humans continue making new humans, whether digitally or not). Also, it seems more likely than not that fairly strong property rights continue (if property rights aren't strong, and humans aren't augmented to be competitive with the superintelligences, then prospects for human survival seem weak since humans' main advantage is that they start out owning a lot of the stuff—and yes, that they can shape the values of the AGI, but I tentatively think CEV-type solutions are neither plausible nor necessarily desirable). The simplest scenario is that there is continuity between current and post-singularity property ownership (especially if takeoff is slow and there isn't a clear "reset" point). The AI stuff might get crazy and the world might change a lot as a result, but these guesses, if correct, seem to pin down a lot of what the human situation looks like.
quila
I don't think so, but I'm not sure exactly what this means. This post says slow takeoff means 'smooth/gradual' and my view is compatible with that - smooth/gradual, but at some point the singularity point is reached (a superintelligent optimization process starts).

Because it would require an odd set of events that cause two superintelligent agents to be created, if not at the same time, then within the time it would take one to start affecting matter on the other side of the planet relative to where it is[1]. Even if that happened, I don't think it would change the outcome (e.g. lead to an economy). And it's still far from a world with a lot of superintelligences. And even in a world where a lot of superintelligences are created at the same time, I'd expect them to do something like a value handshake, after which the outcome looks the same again. (I thought this was a commonly accepted view here)

Reading your next paragraph, I still think we must have fundamentally different ideas about what superintelligence (or "the most capable possible agent, modulo unbounded quantitative aspects like memory size") would be. (You seem to expect it to be not capable of finding routes to its goals which do not require (negotiating with) humans) (note: even in a world where {learning / task-annealing / selecting a bag of heuristics} is the best (in a sense only) method of problem solving, which might be an implicit premise of expectations of this kind, there will still eventually be some Theory of Learning which enables the creation of ideal learning-based agents, which then take the role of superintelligence in the above story)

[1] which is still pretty short, thanks to computer communication. (and that's only if being created slightly earlier doesn't afford some decisive physical advantage over the other, which depends on physics)
Nathan Helm-Burger
I think your expectations are closer to mine in some ways, quila. But I do doubt that the transition will be as fast and smooth as you predict. The AIs we're seeing now have very spiky capability profiles, and I expect early AGI to be similar. It seems likely to me that there will be a period which is perhaps short in wall-clock-time but still significant in downstream causal effects, where there are multiple versions of AGIs interacting with humans in shaping the ASI(s) that later emerge. I think a single super-powerful ASI is one way things could go, but I also think that there's reason to expect a more multi-polar community of AIs, perhaps blending into each other around the edges of their collaboration, merges made of distilled down versions of their larger selves. I think the cohesion of a typical human mind is more due to the limitations of biology and the shaping forces of biological evolution than to an inherent attractor-state in mindspace.
quila
Do you want to look for cruxes? I can't tell what your cruxy underlying beliefs are from your comment. I don't think whether there is an attractor[1] towards cohesiveness is a crux for me (although I'd be interested in reading your thoughts on that anyways), at least because it looks like humans will try to create an optimal agent, so it doesn't need to have a common attractor or be found through one[2], it just needs to be possible at all.

Note: I wrote that my view is compatible with 'smooth takeoff', when asked if I was 'assuming hard takeoff'. I don't know what 'takeoff' looks like, especially prior to recursive AI research.

Sure (if 'shaping' is merely 'having a causal effect on', not necessarily in the hoped-for direction). Sure, that could happen before superintelligence, but why do you then frame it as an alternative to superintelligence?[3]

Feel free to ask me probing questions as well, and no pressure to engage.

[1] (adding a note just in case it's relevant: attractors are not in mindspace/programspace itself, but in the conjunction with the specific process selecting the mind/program)

[2] as opposed to through understanding agency/problem-solving(-learning) more fundamentally/mathematically

[3] (Edit to add: I saw this other comment by you. I agree that maybe there could be good governance made of humans + AIs and if that happened, then that could prevent anyone from creating a super-agent, although it would still end with (in this case aligned) superintelligence in my view. I can also imagine, but doubt it's what you mean, runaway processes which are composed of 'many AIs' but which do not converge to superintelligence, because that sounds intuitively-mathematically possible (i.e., where none of the AIs are exactly subject to instrumental convergence, nor have the impulse to do things which create superintelligence, but the process nonetheless spreads and consumes and creates more ~'myopically' powerfu
Nathan Helm-Burger
I think there are a lot of places where we agree. In this comment I was trying to say that I feel doubtful about the idea of a superintelligence arising once, and then no other superintelligences arise because the first one had time to fully seize control of the world. I think it's also possible that there is time for more than one super-human intelligence to arise and then compete with each other. I think the offense-dominant nature of our current technological milieu means that humanity is almost certainly toast under the multipolar superintelligence scenario unless the controllers (likely the ASIs themselves) are in a stable violence-preventing governance framework (which could be simply a pact between two powerful ASIs).

Responses: Yes, that's what I meant. Control seems like not-at-all a default scenario to me. More like the accelerating self-improving AI process is a boulder tumbling down a hill, and humanity is a stone in its path that may alter its trajectory (while likely being destroyed in the process).

"a more multi-polar community of AIs"

More that I am trying to suggest that such a multi-polar community of sub-super-intelligent AIs makes a multipolar ASI scenario seem more likely to me. Not as an alternative to superintelligence. I'm pretty sure we're on a fast-track to either superintelligence-within-ten-years or civilizational collapse (e.g. large scale nuclear war). I doubt very much that any governance effort will manage to delay superintelligence for more than 10 years from now. I think our best hope is to go all-in on alignment and governance efforts designed to shape the near-term future of AI progress, not on attempts to pause/delay. I think that algorithmic advance is the most dangerous piece of the puzzle, and wouldn't be much hindered by restrictions on large training runs (which is what people often mean when talking of delay). But, if we're skillful and lucky, we might manage to get to controlled-AGI, and have some sort of AGI-pow
quila
My response, before having read the linked post: Okay. I am not seeing why you are doubtful. (I agree 2+ arising near enough in time is merely possible, but it seems like you think it's more than merely possible, e.g. 5%+ likely? That's what I'm reading into "doubtful") Why would the pact protect beings other than the two ASIs? (If one wouldn't have an incentive to protect, why would two?) (Edit: Or, based on the term "governance framework", do you believe the human+AGI government could actually control ASIs?) Thanks for clarifying. It's not intuitive to me why that would make it more likely, and I can't find anything else in this comment about that. I see. That does help me understand the motive for 'control' research more.
Daniel Kokotajlo
To a first approximation, yes, I believe it would antiquate it all. 
quila
Okay, thanks for clarifying. I may have misunderstood your comment. I'm still confused by the existence of the original post with this many upvotes.
niknoble
No one will be buying planets for the novelty or as an exotic vacation destination. The reason you buy a planet is to convert it into computing power, which you then attach to your own mind. If people aren't explicitly prevented from using planets for that purpose, then planets are going to be in very high demand, and very useful for people on a personal level.
Daniel Kokotajlo
Is your selfish utility linear in computing power? Is the difference between how your life goes with a planet's worth of compute that much bigger than how it goes with half a planet's worth of compute? I doubt it.  Also, there are eight billion people now, and many orders of magnitude more planets, not to mention all the stars etc. "You'll probably be able to buy planets post-AGI for the price of houses today" was probably a massive understatement.

This post seems to misunderstand what it is responding to and underplay a very key point: that material needs will likely be met (and selfish non-positional preferences mostly satisfied) due to extreme abundance (if humans retain control).

It mentions this offhand:

Given sufficiently strong AI, this is not a risk about insufficient material comfort.

But, this was a key thing people were claiming when arguing that money won't matter. They were claiming that personal savings will likely not be that important for guaranteeing a reasonable amount of material comfort (or that a tiny amount of personal savings will suffice).

It seems like there are two importantly different types of preferences:

  • Material needs and roughly log returns (non-positional) selfish preferences
  • Scope sensitive preferences

Indeed, for scope sensitive preferences (that you expect won't be shared with whoever otherwise ends up with power), you want to maximize your power, and insofar as money allows for more of this power (e.g. buying galaxies), money looks good.

However, note that if these preferences are altruistic and likely to be the kind of thing other people might be sympathetic to, personal savings are ... (read more)

This post seems to misunderstand what it is responding to

fwiw, I see this post less as "responding" to something, and more laying out considerations on their own with some contrasting takes as a foil.

(On Substack, the title is "Capital, AGI, and human ambition", which is perhaps better)

that material needs will likely be met (and selfish non-positional preferences mostly satisfied) due to extreme abundance (if humans retain control).

I agree with this, though I'd add: "if humans retain control" and some sufficient combination of culture/economics/politics/incentives continues opposing arbitrary despotism.

I also think that even if all material needs are met, avoiding social stasis and lock-in matters.

Scope sensitive preferences

Scope sensitivity of preferences is a key concept that matters here, thanks for pointing that out.

Various other considerations about types of preferences / things you can care about (presented without endorsement):

  • instrumental preference to avoid stasis because of a belief it leads to other bad things (e.g. stagnant intellectual / moral / political / cultural progress, increasing autocracy)
    • altruistic preferences combined with a fear that less altruism will resul
... (read more)
Thomas Kwa
Under log returns to money, personal savings still matter a lot for selfish preferences. Suppose the material comfort component of someone's utility is 0 utils at a consumption of $1/day. Then a moderately wealthy person consuming $1000/day today will be at 7 utils. The owner of a galaxy, at maybe $10^30 / day, will be at 69 utils, but doubling their resources will still add the same 0.69 utils it would for today's moderately wealthy person. So my guess is they will still try pretty hard at acquiring more resources, similarly to people in developed economies today who balk at their income being halved and see it as a pretty extreme sacrifice.
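
A minimal sketch of where these figures come from, assuming the implicit utility function here is natural-log consumption normalised to zero at $1/day (my reading of the setup, not something stated beyond the numbers):

```python
# Log-utility figures from the comment above: u(c) = ln(c / 1), with c in $/day.
import math

for consumption in (1, 1_000, 1e30):   # baseline, "moderately wealthy", galaxy owner
    print(f"${consumption:g}/day -> {math.log(consumption):.2f} utils")

# Doubling consumption adds the same ln(2) ≈ 0.69 utils at any wealth level.
print(f"gain from doubling: {math.log(2):.2f} utils")
```
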
Benjamin_Todd
True, though I think many people have the intuition that returns diminish faster than log (at least given current tech). For example, most people think increasing their income from $10k to $20k would do more for their material wellbeing than increasing it from $1bn to $2bn. I think the key issue is whether new tech makes it easier to buy huge amounts of utility, or that people want to satisfy other preferences beyond material wellbeing (which may have log or even close to linear returns).
Guive
There are always diminishing returns to money spent on consumption, but technological progress creates new products that expand what money can buy. For example, no amount of money in 1990 was enough to buy an iPhone. More abstractly, there are two effects from AGI-driven growth: moving to a further point on the utility curve such that the derivative is lower, and new products increasing the derivative at every point on the curve (relative to what it was on the old curve). So even if in the future the lifestyles of people with no savings and no labor income will be way better than the lifestyles of anyone alive today, they still might be far worse than the lifestyles of people in the future who own a lot of capital. If you feel this post misunderstands what it is responding to, can you link to a good presentation of the other view on these issues?

One dynamic initially preventing stasis in influence post-AGI is that different ppl have different discount rates, so those with higher discounts will slowly gain influence over time

Excellent post, thank you. I appreciate your novel perspective on how AI might affect society.

I feel like a lot of LessWrong-style posts follow the theme of "AGI is created and then everyone dies" which is an important possibility but might lead to other possibilities being neglected.

Whereas this post explores a range of scenarios and describes a mainline scenario that seems like a straightforward extrapolation of trends we've seen unfolding over the past several decades.

This post collects my views on, and primary opposition to, AI and presents them in a very clear way. Thank you very much on that front. I think that this particular topic is well known in many circles, although perhaps not spoken of, and is the primary driver of heavy investment in AI.

I will add that capital-dominated societies, e.g. resource-extraction economies, typically suffer a poor quality of life and few human rights. This is a well-known phenomenon (the "resource curse") and might offer a good jumping-off point for presenting this argument to others.

Isaac Liu
I considered "opposing" AI on similar grounds, but I don't think it's a helpful and fruitful approach. Instead, consider and advocate for social and economic alternatives viable in a democracy. My current best ideas are either a new frontier era (exploring space, art, science as focal points of human attention) or fully automated luxury communism.

While I very much would love a new frontier era (I work at a rocket launch startup), and would absolutely be on board with Culture utopia, I see no practical means to ensure that any of these worlds come about without:

  • Developing properly aligned AGI and making a pivotal turn, i.e. creating a Good™ culture mind that takes over the world (fat chance!)
  • Preventing the development of AI entirely

I do not see a world where AGI exists and follows human orders that does not result in a boot, stomping on a face, forever -- societal change in dystopian or totalitarian environments is largely produced via revolution, which becomes nonviable when means of coordination can be effectively controlled and suppressed at scale.

First world countries only enjoy the standard of living they do because, to some degree, the ways to make tons of money are aligned with the well-being of society (large diversified investment funds optimize for overall economic well-being). Break this connection and things will slide quickly.

Isaac Liu
Yes, AI will probably create some permanent autocracies. But I think democratic order and responsiveness to societal preferences can survive where it already exists, if a significantly large selectorate of representatives or citizens creates and updates the values for alignment. Fighting AI development is not only swimming against the tide of capitalist competition between companies, but also against competition between democratic and autocratic nations. Difficult, if not impossible.

In the best case, this is a world like a more unequal, unprecedentedly static, and much richer Norway: a massive pot of non-human-labour resources (oil :: AI) has benefits that flow through to everyone, and yes some are richer than others but everyone has a great standard of living (and ideally also lives forever). The only realistic forms of human ambition are playing local social and political games within your social network and class. [...] The children of the future will live their lives in the shadow of their parents, with social mobility extinct. I think you should definitely feel a non-zero amount of existential horror at this, even while acknowledging that it could've gone a lot worse.

I think the picture you've painted here leans slightly too heavily on the idea that humans themselves cannot change their fundamental nature to adapt to the conditions of a changing world. You mention that humans will be richer, and will live longer, in such a future, but you neglected to point out (at least in this part of the post) that humans can also upgrade their cognition by uploading their minds to computers and then expanding their mental capacities. This would put us on a similar playing ... (read more)

5ryan_greenblatt
The key context here (from my understanding) is that Matthew doesn't think scalable alignment is possible (or doesn't think it is practically feasible), such that humans have a low chance of ending up remaining fully in control via corrigible AIs.

(I assume he is also skeptical of CEV-style alignment.)

(I'm a bit confused how this view is consistent with self-augmentation. E.g., I'd be happy if emulated minds retained control without having to self-augment in ways they thought might substantially compromise their values.)

(His language also seems to imply that we don't have the option of making AIs which are both corrigibly aligned and for which this doesn't pose AI welfare issues. In particular, if AIs are either non-sentient or just have corrigible preferences (e.g. via myopia), I think it would be misleading to describe the AIs as a "vast underclass".)

I assume he agrees that most humans wouldn't want to hand over a large share of resources to AI systems if this is avoidable and substantially zero-sum. (E.g., suppose getting a scalable solution to alignment would require delaying vastly transformative AI by 2 years; I think most people would want to wait the two years, potentially even if they accept Matthew's other view that AIs very quickly acquiring large fractions of resources and power is quite unlikely to be highly violent (though they probably won't accept this view).)

(If scalable alignment isn't possible (including via self-augmentation), then the situation looks much less zero-sum: humans inevitably end up with a tiny fraction of resources due to principal-agent problems.)
7Matthew Barnett
I wouldn’t describe the key context in those terms. While I agree that achieving near-perfect alignment—where an AI completely mirrors our exact utility function—is probably infeasible, the concept of alignment often refers to something far less ambitious. In many discussions, alignment is about ensuring that AIs behave in ways that are broadly beneficial to humans, such as following basic moral norms, demonstrating care for human well-being, and refraining from causing harm or attempting something catastrophic, like starting a violent revolution.

However, even if it were practically feasible to achieve perfect alignment, I believe there would still be scenarios where at least some AIs integrate into society as full participants, rather than being permanently relegated to a subordinate role as mere tools or servants.

One reason for this is that some humans are likely to intentionally create AIs with independent goals and autonomous decision-making abilities. Some people have meta-preferences to create beings that don't share their exact desires, akin to how parents want their children to grow into autonomous beings with their own aspirations, rather than existing solely to obey their parents' wishes. This motivation is not a flaw in alignment; it reflects a core part of certain human preferences and how some people would like AI to evolve.

Another reason why AIs might not remain permanently subservient is that some of them will be aligned to individuals or entities who are no longer alive. Other AIs might be aligned to people as they were at a specific point in time, before those individuals later changed their values or priorities. In such cases, these AIs would continue to pursue the original goals of those individuals, acting autonomously in their absence. This kind of independence might require AIs to be treated as legal agents or integrated into societal systems, rather than being regarded merely as property. Addressing these complexities will likely necessit
7ryan_greenblatt
Hmm, I think I agree with this. However, I think there is (from my perspective) a huge difference between:

  • Some humans (or EMs) decide to create (non-myopic and likely at least partially incorrigible) AIs with their resources/power and want these AIs to have legal rights.
  • The vast majority of power and resources transition to being controlled by AIs, where the relevant people with resources/power who created these AIs would have preferred an outcome in which these AIs didn't end up with this power and they instead had it.

If we have really powerful and human-controlled AIs (i.e. ASI), there are many directions things can go in depending on people's preferences. I think my general perspective is that the ASI at that point will be well positioned to do a bunch of the relevant intellectual labor (or, more minimally, if thinking about it myself is important because it is entangled with my preferences, a very fast simulated version of myself would be fine).

I'd count it as "humans being fully in control" if the vast majority of power controlled by independent AIs is held by AIs that were intentionally appointed by humans even though making an AI fully under their control was technically feasible with no tax. And if it was an option for humans to retain their power (as a fraction of overall human power) without having to take (from their perspective) aggressive and potentially preference-altering actions (e.g. without needing to become EMs or appoint a potentially imperfectly aligned AI successor).

In other words, I'm like "sure, there might be a bunch of complex and interesting stuff around what happens with independent AIs after we transition through having very powerful and controlled AIs (and ideally not before then), but we can figure this out then; the main question is who ends up in control of resources/power".
7ryan_greenblatt
I remain interested in what a detailed scenario forecast from you looks like. A big disagreement I think we have is over how society will react to various choices, and I think laying this out could make this clearer. (As far as what a scenario forecast from my perspective looks like, I think @Daniel Kokotajlo is working on one which is pretty close to my perspective and generally has the SOTA stuff here.)
7Matthew Barnett
I’m not entirely opposed to doing a scenario forecasting exercise, but I’m also unsure if it’s the most effective approach for clarifying our disagreements. In fact, to some extent, I see this kind of exercise—where we create detailed scenarios to illustrate potential futures—as being tied to a specific perspective on futurism that I consciously try to distance myself from. When I think about the future, I don’t see it as a series of clear, predictable paths. Instead, I envision it as a cloud of uncertainty—a wide array of possibilities that becomes increasingly difficult to map or define the further into the future I try to look.  This is fundamentally different from the idea that the future is a singular, fixed trajectory that we can anticipate with confidence. Because of this, I find scenario forecasting less meaningful and even misleading as it extends further into the future. It risks creating the false impression that I am confident in a specific model of what is likely to happen, when in reality, I see the future as inherently uncertain and difficult to pin down.

The point of a scenario forecast (IMO) is less that you expect clear, predictable paths and more that:

  • Humans often do better at understanding and thinking about something if there is a specific story to discuss, and thus the tradeoffs can be worth it.
  • Sometimes scenario forecasting indicates a case where your previous views were missing a clearly very important consideration or were assuming something implausible.

(See also Daniel's sibling comment.)

My biggest disagreements with you are probably a mix of:

  • We have disagreements about how society will react to AI (and how AI will react to society) given a realistic development arc (especially in short timelines), which imply that your vision of the future seems implausible to me. And perhaps the easiest way to get through all of these disagreements is for you to concretely describe what you expect might happen. As an example, I have a view like "it will be hard for power to very quickly transition from humans to AIs without some sort of hard takeover, especially given dynamics about alignment and training AIs on imitation (and sandbagging)", but I think this is tied up with "when I think about the story for how a non-hard-takeover quick transiti
... (read more)
1Dakara
By "software only singularity" do you mean a scenario where all humans are killed before singularity, a scenario where all humans merge with software (uploading) or something else entirely?
8ryan_greenblatt
A software-only singularity is a singularity driven by just AI R&D on a basically fixed hardware base. As in: can you get a singularity using only a fixed datacenter (with no additional compute over time), just by improving algorithms? See also here. This isn't directly a claim about the outcomes that follow from it. You can also get a singularity via hardware+software, where the AIs are also accelerating the hardware supply chain, such that you can use more FLOP to train AIs and you can run more copies. (Analogously to the hyperexponential progress throughout human history seemingly driven by higher population sizes; see here.)
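A minimal toy sketch of the dynamic being defined (this is not from the comment above; the functional form, the 0.1 rate constant, and the returns exponent r are illustrative assumptions): with compute held fixed, whether algorithmic progress keeps accelerating depends on the returns to software R&D done by the improved software itself.

```python
# Hypothetical illustration: fixed hardware, capability A improves itself.
# r > 1: each improvement speeds up the next round enough for explosive,
#        singularity-like growth on a fixed datacenter.
# r = 1: plain exponential growth.
# r < 1: progress continues but decelerates (no software-only singularity).

def run(r: float, steps: int = 30) -> float:
    """Iterate A <- A + 0.1 * A**r, i.e. research output scales as A**r."""
    A = 1.0
    for _ in range(steps):
        A += 0.1 * A ** r
    return A

for r in (0.5, 1.0, 1.5):
    print(f"returns exponent r={r}: capability after 30 steps ~ {run(r):.3g}")
```

Under these toy assumptions, the whole question reduces to whether the effective returns exponent stays above or below one once compute stops growing.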
8Daniel Kokotajlo
I don't think that's a crux between us -- I love scenario forecasting, but I don't think of the future as a series of clear, predictable paths; I envision it as a wide array of uncertain possibilities that becomes increasingly difficult to map or define the further into the future I look. I definitely don't think we can anticipate the future with confidence.

Upvoted, and I disagree. Some kinds of capital maintain (or increase!) their value. Other kinds become cheaper relative to each other. The big question is whether and how property rights to various capital elements remain stable.

It's not so much "will capital stop mattering", but "will the enforcement and definition of capital usage rights change radically".

My main default prediction here is that we will avoid both the absolute best-case and the absolute worst-case scenarios, because I predict intent alignment works well enough to avoid extinction-of-humanity-type scenarios, but I also don't believe we will see radical movements toward equality (indeed, the politics of our era is moving towards greater acceptance of inequality), so capitalism more or less survives the transition to AGI.

I do think dynamism will still exist, but it will be largely limited to the upper classes/the very rich, and most people will not be a part of it; I'm including uploaded humans in this calculation.

To address this:

Rationalist thought on post-AGI futures is too solutionist. The strawman version: solve morality, solve AI, figure out the optimal structure to tile the universe with, do that, done. (The actual leading figures have far less strawman views; see e.g. Paul Christiano at 23:30 here—but the on-the-ground culture does lean in the strawman direction.)

To be somewhat more fair, the worry here is that in a regime where you don't need society anymore because AIs can do all the work for your society, value conflicts become a bigger deal than t... (read more)

4gugu
How certain are you of this, and how much do you think it comes down to something like "to what extent can disempowered groups unionise against the elite?" To be clear, by default I think AI will make unionising against the more powerful harder, but it might depend on the governance structure. Maybe if we are really careful, we can get something closer to "Direct Democracy", where individual preferences actually matter more!
4Noosphere89
I am focused here on short-term politics in the US, which would ordinarily matter less, except that world-changing AI is likely to be built in the US; given that, it becomes far more important than normal.
4L Rudolf L
So to summarise: if we have a multipolar world, and the vulnerable world hypothesis is true, then conflict can be existentially bad, and this is a reason to avoid a multipolar world. Didn't consider this, interesting point! Considerations:

  • offense/defense balance (if offense wins very hard, it's harder to let everyone do their own thing)
  • tunability-of-AGI-power / implementability of the harm principle (if you can give everyone AGI that can follow very well the rule "don't let these people harm other people", then you can give that AGI safely to everyone and they can build planets however they like but not death-ray anyone else's planets)

The latter might be more of a "singleton that allows playgrounds" rather than an actual multipolar world, though. Some of my general worries with singleton worlds are:

  • humanity has all its eggs in one basket: you better hope the governance structure is never corrupted, or never becomes sclerotic; real-life institutions so far have not given me many signs of hope on this count
  • cultural evolution is a pretty big part of how human societies seem to have improved, and relies on a population of cultures / polities
  • vague instincts towards diversity being good and less fragile than homogeneity or centralisation

Thanks!
5Noosphere89
(I also commented on Substack.)

This applies, but more weakly, even in a non-vulnerable world, because the incentives for peaceful cooperation of values are way weaker in an AGI world. I do think this requires severely restraining open-source, but conditional on that happening, I think the offense-defense balance/tunability will sort of work out.

Yeah, I'm not a fan of singleton worlds, and tend towards multipolar worlds. It's just that it might involve a loss of a lot of life in the power struggles around AGI.

On governing the commons: I'd say Elinor Ostrom's observations are derivable from the folk theorems of game theory, which basically say that almost any outcome (subject to a few conditions that depend on the theorem) can be a Nash equilibrium if the game is repeated and players have to deal with each other. The problem is that AGI weakens the incentives for players to deal with each other, so Elinor Ostrom's solutions are much less effective. More here: https://en.wikipedia.org/wiki/Folk_theorem_(game_theory)
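To make the repeated-game point concrete, here is a minimal sketch (not from the comment above; the prisoner's-dilemma payoffs and the grim-trigger strategy are standard textbook choices used purely for illustration) of how cooperation depends on players expecting to keep dealing with each other:

```python
# Toy repeated prisoner's dilemma: cooperation sustained by grim trigger.
# Payoffs (T > R > P > S): temptation, reward, punishment, sucker.
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def cooperation_sustainable(delta: float) -> bool:
    """Grim trigger sustains mutual cooperation iff cooperating forever,
    R / (1 - delta), beats defecting once and being punished forever,
    T + delta * P / (1 - delta); this rearranges to
    delta >= (T - R) / (T - P)."""
    return delta >= (T - R) / (T - P)

# delta ~ how much players expect to keep dealing with each other.
for delta in (0.9, 0.6, 0.2):
    print(f"continuation weight {delta}: cooperation sustainable? "
          f"{cooperation_sustainable(delta)}")
```

With these payoffs the threshold is delta >= 0.5, so cooperation holds at 0.9 and 0.6 but collapses at 0.2; the comment's worry is precisely that AGI pushes the effective delta down by removing the need to keep dealing with other players.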
2Nathan Helm-Burger
I believe that the near future (next 10 years) involves a fragile world and heavily offense-dominant tech, such that a cohesive governing body (not necessarily a single mind, it could be a coalition of multiple AIs and humans) will be necessary to enforce safety. Particularly, preventing the creation/deployment of self-replicating harms (rogue amoral AI, bioweapons, etc.). On the other hand, I don't think we can be sure what the more distant future (>50 years?) will look like. It may be that d/acc succeeds in advancing defense-dominant technology enough to make society more robust to violent defection. In such a world, it would be safe to have more multi-polar governance. I am quite uncertain about how the world might transition to uni-polar governance, whether this will involve a singleton AI or a world government or a coalition of powerful AIs or what. Just that the 'suicide switch' for all of humanity and its AIs will for a time be quite cheap and accessible, and require quite a bit of surveillance and enforcement to ensure no defector can choose it.

You say: I'll use "capital" to refer to both the stock of capital goods and to the money that can pay for them.

It seems to me that this aggregates quite different things, at least if looking at the situation in terms of personal finance. Consider four people who have the following investments, which, let's suppose, are currently of equal value:

  1. Money in a savings account at a bank.
  2. Shares in a company that owns a nuclear power plant.
  3. Shares in a company that manufactures nuts and bolts.
  4. Shares in a company that helps employers recruit new employees.

These are all ... (read more)

7L Rudolf L
Important other types of capital, as the term is used here, include:

  • the physical nuclear power plants
  • the physical nuts and bolts
  • data centres
  • military robots

Capital is not just money!

Because humans and other AIs will accept fiat currency as an input and give you valuable things as an output. All the infra for fiat currency exists; I don't see why the AIs would need to reinvent that, unless they're hiding from human government oversight or breaking some capacity constraint in the financial system, in which case they can just use crypto instead.

Military robots are yet another type of capital! Note that if it were human soldiers, there would be much more human leverage in the situation, because at least some humans would need to agree to do the soldiering, and presumably would get benefits for doing so, and would use the power and leverage they accrue from doing so to push broadly human goals.

Or then the recruitment company pivots to using human labour to improve AI, as actually happened with the hottest recent recruiting company! If AI is the best investment, then humans and AIs alike will spend their efforts on AI, and the economy will gradually cater more and more to AI needs over human needs. See Andrew Critch's post here, for example. Or my story here.
6Radford Neal
All the infra for fiat currency exists; I don't see why the AIs would need to reinvent that

Because using an existing medium of exchange (that's not based on the value of a real commodity) involves transferring real wealth to the current currency holders. Instead, they might, for example, start up a new bitcoin blockchain and use their new bitcoin, rather than transfer wealth to present bitcoin holders. Maybe they'd use gold, although the current value of gold is mostly due to its conventional monetary value (rather than its practical usefulness, though that is non-zero).

Upstarts will not defeat them, since capital now trivially converts into superhuman labour in any field.


It is false today that big companies with 10x the galaxy brains and 100x the capital reliably outperform upstarts.[1] 

Why would this change? I don't think you make the case. 

  1. ^

My favorite example, though it might still be falsified. Google invented transformers, owns DeepMind, runs its own data centres, builds its own accelerators and has huge numbers of them, and has tons of hard-to-get data (all those books they scanned before that became no

... (read more)
4lc
They have "galaxy brains", but applying those galaxy brains strategically well towards your goals is also an aspect of intelligence. Additionally, those "galaxy brains" may be ineffective because of issues with alignment towards the company, whereas in a startup often you can get 10x or 100x more out of fewer employees because they have equity and understand that failure is existential for them. Demis may be smart, but he made a major strategic error if his goal was to lead in the AGI race, and despite the fact that the did he is still running DeepMind, which suggests an alignment/incentive issue with regards to Google's short term objectives.
4Alexander Gietelink Oldenziel
OpenAI is worth about 150 billion dollars and has the backing of Microsoft. Google's Gemini is apparently competitive now with Claude and GPT-4. Yes, Google was sleeping on LLMs two years ago and OpenAI is a little ahead, but this moat is tiny.
4L Rudolf L
For example:

  • Currently big companies struggle to hire and correctly promote talent for the reasons discussed in my post, whereas AI talent will be easier to find/hire/replicate given only capital & legible info
  • To the extent that AI ability scales with resources (potentially boosted by inference-time compute, and if SOTA models are no longer available to the public), then better-resourced actors have better galaxy brains
  • Superhuman intelligence and organisational ability in AIs will mean less bureaucratic rot and communication bandwidth problems in large orgs, compared to orgs made out of human-brain-sized chunks, reducing the costs of scale

Imagine for example the world where software engineering is incredibly cheap. You can start a software company very easily, yes, but Google can monitor the web for any company that makes revenue off of software, instantly clone the functionality (because software engineering is just a turn-the-crank-on-the-LLM thing now) and combine it with their platform advantage and existing products and distribution channels. Whereas right now, it would cost Google a lot of precious human time and focus to try to even monitor all the developing startups, let alone launch a competing product for each one. Of course, it might be that Google itself is too bureaucratic and slow to ever do this, but someone else will then take this strategy. C.f. the oft-quoted thing about how the startup challenge is getting to distribution before the incumbents get to innovation. But if the innovation is engineering, and the engineering is trivial, how do you get time to get distribution right?

(Interestingly, as I'm describing it above, the most key thing is not so much capital intensivity, and more just that innovation/engineering is no longer a source of differential advantage because everyone can do it with their AIs really well)

There's definitely a chance that there's some "crack" in this, either from the economics or the nature of AI perf

Blue Origin was started two years earlier (2000 v 2002), had much better funding for most of its history,

This claim is untrue. SpaceX has never had less money than Blue Origin. It may be true that Blue Origin had fewer obligations attached to its money, since it was coming exclusively from Bezos rather than, as for SpaceX, a mix of investment, development contracts, and income, but the baseline claim that SpaceX was “money-poor” is false.

To have a very stable society amid exponentially advancing technology would be very strange: throughout history, seemingly permanent power structures have consistently been disrupted by technological change—and that was before tech started advancing exponentially. Roman emperors, medieval lords, and Gilded Age industrialists all thought they'd created unchangeable systems. They were all wrong.

This is one of the most important parts of the entire read for me (easily top 5). Thank you.

I've heard many people say something like "money won't matter post-AGI". This has always struck me as odd, and as most likely completely incorrect.


Given our exchange in the comments, perhaps you should clarify that you aren't trying to argue that saving money to spend after AGI is a good strategy; you agree it's a bad strategy. Sometimes when people say "money won't matter post-AGI" they mean "saving money to spend after AGI is a bad strategy", whereas you are taking it to mean "we'll all be living in an egalitarian utopia after AGI" or something like that.

6L Rudolf L
I already added this to the start of the post. However, I think my take is a bit more nuanced:

  • in my post, I explicitly disagree with focusing purely on getting money now, and especially oppose abandoning more neglected ways of impacting AI development in favour of ways that also optimise for personal capital accumulation (see the start of the takeaways section)
  • the reason is that I think now is a uniquely "liquid" / high-leverage time to shape the world through hard work, especially because the world might soon get much more locked-in and because current AI makes it easier to do things
  • (also, I think modern culture is way too risk-averse in general, and worry many people will do motivated reasoning and end up thinking they should accept the quant finance / top lab pay package for fancy AI reasons, when their actual reason is that they just want that security and status for prosaic reasons, and the world would benefit most from them actually daring to work on some neglected impactful thing)
  • however, it's also true that money is a very fungible resource, and we're heading into very uncertain times where the value of labour (most people's current biggest investment) looks likely to plummet
  • if I had to give advice to people who aren't working on influencing AI for the better, I'd focus on generically "moving upwind" in terms of fungible resources: connections, money, skills, etc. If I had to pick one to advise a bystander to optimise for, I'd put social connections above money: robust in more scenarios (e.g. in very politicky worlds where money alone doesn't help), has deep value in any world where humans survive, in post-labour futures even more likely to be a future nexus of status competition, and more life-affirming and happiness-boosting in the meantime

This is despite agreeing with the takes in your earlier comment. My exact views in more detail (comments/summaries in square brackets): Regarding: I think there's a decent chance we'll l
7Daniel Kokotajlo
Thanks for the clarification!  I am not sure you are less optimistic than me about things going well for most humans even given massive abundance and tech. We might not disagree. In particular I think I'm more worried about coups/power-grabs than you are; you say both considerations point in different directions whereas I think they point in the same (bad) direction. I think that if things go well for most humans, it'll either be because we manage to end this crazy death race to AGI and get some serious regulation etc., or because the power-hungry CEO or President in charge is also benevolent and humble and decides to devolve power rather than effectively tell the army of AGIs "go forth and do what's best according to me." (And also in that scenario because alignment turned out to be easy / we got lucky and things worked well despite YOLOing it and investing relatively little in alignment + control)
3wassname
We don't have to make individual guesses. It seems reasonable to get a base rate from human history. Although we may all disagree about how much this will generalise to AGI, evidence still seems better than guessing. My impression from history is that coups/power-grabs and revolutions are common when the current system breaks down, or when there is a big capabilities advance (guns, radio, printing press, bombs, etc) between new actors and old. War between old actors also seems likely in these situations because an asymmetric capabilities advance makes winner-takes-all approaches profitable. Winning a war, empire, or colony can historically pay off, but only if you have the advantage to win.

"if you're willing to assume arbitrarily extreme wealth generation from AI"

Let me know if I'm missing something, but I don't think this is a fair assumption. GDP increases when consumer spending increases. Consumer spending increases when wages increase. Wages are headed to 0 due to AGI. 

Note: the current GDP per capita of the U.S. is $80,000. 

[This comment is no longer endorsed by its author]

Some comments.

 

[...] We will quickly hit superintelligence, and, assuming the superintelligence is aligned, live in a post-scarcity technological wonderland where everything is possible.

Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that howev

... (read more)

I have thought this way for a long time, and I'm glad someone was able to express my position and predictions more clearly than I ever could.

This said, I think the new solution (rooted in history) is the establishment of new frontiers. Who will care about relative status if they get to be the first human to set foot on some distant planet, or guide AI to some novel scientific or artistic discovery? Look to the human endeavors where success is unbounded and preferences are required to determine what is worthwhile.

Re: the post's main claim, I think local entrepreneurship would actually thrive.

Setting aside network effects: would you rather use a taxi app created by a faceless VC or one created by your neighbour?

(actually it's not even a fake example, see https://techcrunch.com/2024/07/15/google-backs-indian-open-source-uber-rival-namma-yatri/)

It's also already happening in the indie hacker space – people would prefer to buy something that's #buildinpublic versus the exact same product made by Google.

3wahala
People will use the cheaper one; the faceless VC has the capital to subsidize costs until every competitor is flushed out.

Thank you for your post. I've been looking for posts like this all over the internet that get my mind racing about the possibilities of the near future.

I think the AI discussion suffers from definitional problems. I think when most people talk about money not mattering when AGI arrives (myself included), we tend to define AGI as something closer to this:

"one single AI system doing all economic planning."

While your world model makes a lot of sense, I don't think the dystopian scenario you envision would include me in the "capital class". I don't have the we... (read more)

I agree with the ideas of AI being labor-replacing, and I also agree that the future is likely to be more unequal than the present.

Even so, I strongly predict that the post-AGI future will not be static. Capital will not matter more than ever after AGI: instead I claim it will be a useless category.

The crux of my claim is that when AI replaces labor and buying results is easy, the value will shift to the next biggest bottlenecks in production. Therefore future inequality will be defined by the relationship to these bottlenecks, and the new distinctions wil... (read more)

2L Rudolf L
Chip fabs and electricity generation are capital! Yep, AI buying power winning over human buying power in setting the direction of the economy is an important dynamic that I'm thinking about. Yep, this is an important point, and a big positive effect of AI! I write about this here. We shouldn't lose track of all the positive effects.

I share your existential dread completely; however, I find some things even more pessimistic than what you outlined.

  1. It is entirely possible that intellectual labor is automated first, so most good jobs are gone but humans are not. Creating fascist-like ideologies and religions and then sending a bunch of now-otherwise-useless humans to conquer more land could become a winning strategy, especially given that some countries arguably employ it right now (e.g. Russia).
  2. It is unlikely that 10x global economic growth happens as a result of labor-replacing AGI - th
... (read more)

This post and many of the comments are ignoring one of the main reasons that money becomes so much more critical post-AGI. It's because of the revolution in self-modification that ensues shortly afterwards.

Pre-AGI, a person can use their intelligence to increase their money, but not the other way around; post-AGI it's the opposite. The same applies if you swap intelligence for knowledge, health, willpower, energy, happiness set-point, or percentage of time spent awake.

This post makes half of that observation: that it becomes impossible to increase your mon... (read more)

6Noosphere89
I would go further and say that augmented humans are probably riskier than AIs, because you can't do to a human a lot of the experimentation that is legal to do to an AI; and, importantly, it's way riskier, from both a legal perspective and a difficulty perspective, to align a human to you, because that is essentially brainwashing, and it's easier to control an AI's data source than a human's. This is a big reason why I never really liked the human-augmentation path to solving AI alignment that people like Tsvi Benson-Tilsen want, because you now possibly have 2 alignment problems, not just 1 (link below): https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods

[sorry, have only skimmed the post, but I feel compelled to comment.]


I feel like unless we make a lot of progress on some sort of "Science of Generalisation of Preferences" for more abstract preferences (non-biological needs mostly fall into this), then even if certain individuals have, on paper, much more power than others, at the end of the day they will likely rely on vastly superintelligent AI advisors to realise those preferences, and at that point I think it is the AI advisor that is _really_ in control.
I'm not super certain of this, like, the Catholic Church defin... (read more)

  • Money will be able to buy results in the real world better than ever.
  • People's labour gives them less leverage than ever before.
  • Achieving outlier success through your labour in most or all areas is now impossible.
  • There was no transformative leveling of capital, either within or between countries.

If this is the "default" outcome there WILL be blood. The rational thing to do in this case it to get a proper prepper bunker and see whats left when the dust have settled. 

I've also commented on Substack, but wanted to comment in a different direction here (which I hope is closely aligned to LessWrong values). This article feels like the first part of the equation for describing possible AI futures. It starts from the premise that because labour will be fully substitutable by AI, its value goes to zero (which seems correct to me). What about following up with the consequences of that loss of value? The things that people spend their earnings on will go to zero too. What's in that group? The service industry, homes.

Metrics we can t... (read more)

Interesting post. Some comments from an economist. 

You note,

The key economic effect of AI is that it makes capital a more and more general substitute for labour. There's less need to pay humans for their time to perform work, because you can replace that with capital (e.g. data centres running software replaces a human doing mental labour).

And also, 

Let's assume human mental and physical labour across the vast majority of tasks that humans are currently paid wages for no longer has non-trivial market value, because the tasks can be done better/fa

... (read more)
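As an aside on the substitution claim quoted above (the economist's comment is cut off, and this formalisation is not taken from it; it is the standard textbook CES production function, offered as a hedged illustration of what "a more general substitute" means):

```latex
% Standard CES production function; sigma is the elasticity of substitution
% between capital K and labour L.
Y = \left( \alpha K^{\rho} + (1-\alpha) L^{\rho} \right)^{1/\rho},
\qquad \sigma = \frac{1}{1-\rho}.
% The wage is the marginal product of labour:
w = \frac{\partial Y}{\partial L} = (1-\alpha)\, L^{\rho-1} Y^{1-\rho}.
```

In the limit ρ → 1 (σ → ∞), capital and labour become perfect substitutes, Y = αK + (1 - α)L, and the wage is pinned at the constant (1 - α): accumulating more capital then raises output without raising wages, which is one way to read the post's claim that AI makes capital an increasingly general substitute for labour.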

Humans seem way more energy- and resource-efficient in general; paying for top talent is the exception, not the rule. Usually it's not worth paying for top talent.

We're likely to see many areas where it's economically better to save on compute/energy by having a human do some of the work.

Split information workers vs physical workers too; I expect them to have very different distributions of what the most useful configuration is.

This post also ignores likely scientific advances in bioengineering and cyborg surgeries; I expect humans to be way more efficient for tons of jobs once the standard is 180 IQ with a massive working memory.

A great post.

Our main struggle ahead as a species is to ensure UBI occurs, and in a generous rather than meager way. This direction is not at all certain, and we should be warned by your example of feudalism as an alternate path that is perhaps looming as more likely. Nevertheless, I agree we will see some degree of UBI, because the alternative is too painful.

One path you should add, for those without capital to still rise post-AGI, is celebrity status in sports, entertainment, and the arts. Consider that today humans still enjoy Broadway and still play chess, ev... (read more)

I agree that this is a likely scenario. Very nice writeup. Post-AGI, it's a completely resource- and capital-based economy, albeit I'm uncertain whether humans will be allowed to keep their fiat currency, stocks, land, factories, etc.

Interesting angle: given space travel, we'll have civilizations on other planets that can't communicate fast enough with the mainland. Presumably, social hierarchies would be vastly different, and much more fluid, there versus here on Earth.

Labour-replacing AI will shift the relative importance of human v non-human factors of production, which reduces the incentives for society to care about humans while making existing powers more effective and entrenched.

 

And yet, ... society appears to be caring more about humans.       

And yet, ... existing powers (specifically the state) seem even less effective and entrenched. Open Borders policies are clearly an act of desperation ... while these policies appear to have been broadly rejected by the electorate. The state only op... (read more)

My main issue with this post is that it seems substantially concerned with losing the ability for humans to achieve significant levels of wealth and power relative to each other, which I agree is important for avoiding a calcified ruling class (which tends to go poorly for a society, historically). But it should be viewed as a transitional concern as we look towards building a society where radical wealth disparities (critically, here defined as the power of the wealthy to effectively incentivize the less wealthy to endure unwanted experiences or circumsta... (read more)

But why be so nihilistic about this? We can strive to conquer the solar system, the galaxy, and the universe. Strive to understand why the universe exists. Those seem like pretty important things to work on.

Great post!

What are your thoughts on guild effects in the sense that some of the changes you have described might be prevented through social contracts? Actors and screenwriters have successfully gone on strike to preserve their leverage, and many other professions are regulated. 

I think this might be a weak counter-argument, but nonetheless, it might distort the effects of AGI and slow down the concentration of capital.