“Show me the incentive, and I’ll show you the outcome.” – Charlie Munger

Economists are used to modeling AI as an important tool, so they miss how it could make people irrelevant. Past technological revolutions expanded human potential: the agrarian revolution birthed civilizations; the industrial revolution let us scale them.

But AGI looks a lot more like coal or oil than the plow, steam engine, or computer. Like those resources:

  • It will require immensely wealthy actors to discover and harness.
  • Control will be concentrated in the hands of a few players, mainly the labs that produce it and the states where they reside.
  • The states and companies that earn rents mostly or entirely from it won’t need to rely on people for revenue.
  • It will displace the previous fuel of civilization. For coal, it was wood. For AGI, it’s us.

On December 28, Rudolf published Capital, AGI, and human ambition. He summarized his argument as:

Labour-replacing AI will shift the relative importance of human v non-human factors of production, which reduces the incentives for society to care about humans while making existing powers more effective and entrenched.

My goal is to give this phenomenon a name and build the evidentiary case for it. Potential solutions will be in a future post.

This problem looks a lot like the plague that afflicts rentier states – states that rely predominantly on rents from a resource for their wealth rather than on taxes from their citizens. These states suffer from the resource curse – despite having a natural source of income, they do worse than their economically diverse peers at improving their ordinary citizens’ living standards.

Powerful actors that adopt labor-replacing AI systems will face rentier state-like incentives with far higher stakes. Because their revenues will come from intelligence on tap instead of from people, they won’t receive returns on the investments we consider prerequisites to sustenance: education to prepare people for employment, jobs and salaries, or a welfare state for the unemployed. So they won’t invest – and their people will be unable to sustain themselves. Humans need not apply, and so humans will not get paid.

This is the intelligence curse – when powerful actors create and implement general intelligence, they will lose their incentives to invest in people.

Before we begin, my assumptions are:

I believe that artificial general intelligence (AGI) – specifically, “a highly autonomous system that outperforms humans at most economically valuable work” – is technologically achievable and >90% likely to exist in the next 1-20 years (and honestly, 10 years feels way too long). You should too.[1]

Once AI systems that are better, cheaper, faster, and more reliable than humans at most economic activity are widely available, the intelligence curse should begin to take effect. We should expect to be locked into the outcome 1-5 years after this moment.

Why powerful actors care about you

By powerful actors, I mean large organizations such as states, corporations, and bureaucracies that shape the world we live in and how we interact with it.

Rudolf offers an explanation for why states care about their people:

Since the industrial revolution, the interests of states and people have been unusually aligned. To be economically competitive, a strong state needs efficient markets, a good education system that creates skilled workers, and a prosperous middle class that creates demand. It benefits from using talent regardless of its class origin. It also benefits from allowing high levels of freedom to foster science, technology, and the arts & media that result in global soft-power and cultural influence. Competition between states largely pushes further in all these directions—consider the success of the US, or how even the CCP is pushing for efficient markets and educated rich citizens, and faces incentives to allow some freedoms for the sake of Chinese science and startups. Contrast this to the feudal system, where the winning strategy was building an extractive upper class to rule over a population of illiterate peasants and spend a big share of extracted rents on winning wars against nearby states

Powerful actors don’t care about you out of the goodness of their heart. They care about you for two reasons:

  1. You offer a return on investment, usually through taxes or profits.
  2. You impact their ability to retain power, either through democratic means like voting or through credible threats to a regime.

Most states in the modern world are diversified economies: value comes from many different sectors and human activities rather than a single source or a handful of them. They rely on taxing people and corporations to generate revenue. The best way for them to increase that revenue is to increase their citizens’ productivity. They could instead raise tax rates, but you can only tax what is being generated, so that approach hits an upper limit. Instead, the state is incentivized to produce engineers, entrepreneurs, innovators, and other economically productive workers, and to create an environment in which those investments pay off. To do so, states tend to:

  • Establish good schools, research institutions, and universities
  • Build infrastructure like roads and public transportation
  • Set up reliable governing systems and courts to protect property rights
  • Protect speech and the flow of information
  • Support small business formation
  • Foster competitive markets
  • Create social safety nets to support risk-taking

These investments increase the productivity of citizens and increase the surface area of luck for innovation to occur. Equally importantly, they are the kinds of things that lift people out of abject poverty, raise living standards, and foster political and economic freedoms. With good schools, infrastructure, and competitive markets, a citizen can train for and find a high-paying job that more than covers their basic needs. And with reliable governing systems, fair courts, and free speech, a citizen can petition their government without fear of becoming a political prisoner. Citizens gain bargaining power through their votes and their economic output, so they can force changes that raise their standard of living. As a result, states sometimes capitulate to citizens' demands even when it costs them.

A similar phenomenon affects corporations. Take, for example, the exorbitant salaries of Silicon Valley. Tech workers (until recently) have had a skill set companies desperately need to make more money. Those workers are a hot commodity, and competition to attract them is fierce. To win them over, companies pay large salaries, offer stock options, buy pool tables, serve 24/7 free meals from a Michelin-starred chef, and do their laundry. No one seriously argues that the company laundry service is 10x’ing revenue, but it might win over a potential employee or keep an otherwise unsatisfied one from leaving for a competitor. The employees have bargaining power, so they can demand lavish perks that improve their quality of life.

This creates a feedback loop – as regular people make powerful actors more money, they are more likely to cater to them. Will education 10x your population’s (and thus the state’s) lifetime earnings? Build the damn schools. Will offering paid family leave get better employees for your company? Change the damn policy.

The resource curse

We already have societies that divorce their nation’s economic output from their human capital. They’re called rentier states. These states – including Venezuela, Saudi Arabia, Norway, and Oman – derive most of their earnings from resources (usually oil) rather than from the productive output of their citizens.

You would expect the people in states with free money in the ground to be wealthy. Just dig it up and sell it to willing buyers. Why worry about building a diverse economy? You’re literally walking on money.

The Democratic Republic of Congo has over $24 trillion worth of untapped minerals in its ground. How have its citizens fared? According to the World Bank:

Most people in DRC have not benefited from this wealth. A long history of conflict, political upheaval and instability, and authoritarian rule have led to a grave, ongoing humanitarian crisis. In addition, there has been forced displacement of populations. These features have not changed significantly since the end of the Congo Wars in 2003.

DRC is among the five poorest nations in the world.  An estimated 73.5% of Congolese people lived on less than $2.15 a day in 2024.  About one out of six people living in extreme poverty in SSA lives in DRC.

What’s going on here? How can it be that trillions in total available resources have resulted in abject poverty?

Economists and political scientists call this the resource curse. Countries with abundant natural resources tend to experience poorer economic growth and higher rates of poverty than their economically diverse peers.

There are many factors behind the resource curse, but I’m going to focus on a core one: the incentive it creates for rulers to stop caring about their people’s economic well-being.

Because they earn money from resources, rentier states have no incentive to pay regular people today or invest in them for tomorrow. Building better schools doesn’t earn them more money. They invest only as much as it takes to move the oil out of the ground, onto trucks, and out to the ports.[2] It’s not that their citizens couldn’t do anything worth taxing; it’s that there’s no reason to develop them into a taxable population. Why ask your people for money when you can get it from the ground?

Without money, regular people struggle to make demands. In autocracies, there’s no incentive to care about them unless they credibly threaten your power. Those who control the rents can extract wealth without worrying about everyone else.

So what do the lives of their citizens look like? Dr. Ferdinand Eibl and Dr. Steffen Hertog offer two competing visions:

There are few issues on which comparative politics theories offer more sharply contrasting predictions than on the link between resource rents and government welfare provision. Some authors, especially those in the tradition of “rentier state theory,” expect oil-rich rulers to engage in mass co-optation, politically pacifying their population with expansive welfare policies (Beblawi and Luciani 1987; Karl 1997). Others, especially those proposing formal models of politics in oil-rich states, expect rentier rulers to neglect their population. As rents are siphoned off by a small ruling elite that does not need a domestic economic basis for their self-enrichment, welfare provision is minimal and misery spreads (Acemoglu, Robinson and Verdier 2004; Mesquita and Smith 2009).

There are empirical examples for both trajectories. Oman and Equatorial Guinea have broadly comparable levels of natural resource rents per capita—slightly above 8,000 USD per capita in the 1995 to 2014 period (Ross 2013). Both have been ruled by the same autocrats since the 1970s, when both countries were desperately poor. Under Sultan Qaboos, Omani public services have expanded at a rapid pace, leading to one of the world’s fastest declines in child mortality, from 159 per one thousand live births in 1971 to 9 by 2010, far below the Middle East average of 32. In Teodoro Obiang’s Equatorial Guinea, the state outside of the security services remains embryonic, the vast majority of the population continues to live in abject poverty, and infant mortality has declined painfully slowly: from 263 in 1971 to 109 in 2010, remaining above the (high) sub-Saharan average of 89. Access to rentier wealth is monopolized by the president’s small entourage (Wood 2004).

Occasionally, rentier states build large social safety nets.[3] But in most cases, they produce abject poverty for all but the few who control the streams of rent.[4] Why? Eibl and Hertog provide an answer:

We concur with formal models of politics in resource-rich countries that ruling elites seek to ensure survival in power. Public policies are subject to this overarching goal and reflect elites’ assessment of threats to their rule. Within these constraints, elites will seek to maximize their personal rents from resource revenues.

We also agree with existing literature that the relative economic pay-off of welfare provision is lower in resource-based regimes, while its potential modernization effects are politically undesired (Acemoglu and Robinson 2006; Mesquita and Smith 2009). All else being equal, we therefore expect oil-rich regimes to establish narrow kleptocratic coalitions with limited welfare provision and rampant elite self-enrichment.

This effect doesn’t map onto widespread technologies, because they rely on regular people using them in their workflows to increase productivity. What about AGI?

AGI looks more like a resource than a technology

Imagine for a moment that you are the CEO of a large company. Employing people is an investment you make. You pay them salaries, which make up a large chunk of your total budget. In return, they do work that helps you generate revenue. Every year, you hire thousands of entry-level analysts to do the grunt work of your company, like collecting data, writing reports, or making pretty PowerPoint slides. You’ll also train them and promote them as other employees move up the corporate ladder. Their work output makes you money today. In 20 years, many of these analysts will be senior employees, and one might even replace you!

Hiring analysts serves two purposes:

  • Create a labor force to do the grunt work today
  • Build the bench that will replace existing hires as they age out

In the 2010s, laptops became widely available. Instead of being chained to clunky desktop computers, your analysts could now work from anywhere. They could take detailed notes in meetings and collaborate in the breakout room. But the laptops couldn’t replace the analysts: you couldn’t hand a laptop a task in plain English and expect it to get done. You needed the analysts to use the laptops to unlock their benefits.

So you bought all your analysts laptops. It made nearly all of them more productive, which increased your company’s profits. The laptops were a tool used by the analysts, but they didn’t 1) enable one analyst to do the job of 10 or 2) automate the analysts entirely.

Fast forward to 2030. BigLab just released an AI agent powered by GPT-8. It completes any task 20% faster and 10% better than any of your analysts. Oh, and running it to do the work of one analyst costs $10,000 per year – that’s at least an 80% cost reduction. It might let your best analyst do the job of 10, or you could use it to clone the best one and automate the analyst class entirely.
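
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch. The $50,000 analyst cost is an assumption implied by the "at least an 80% cost reduction" line above, not a figure from elsewhere in the post; all numbers are illustrative.

```python
# Back-of-the-envelope comparison of one human analyst vs. the hypothetical
# GPT-8 agent described above. The analyst cost is an assumed figure implied
# by "at least an 80% cost reduction"; real fully loaded costs vary widely.

ANALYST_COST = 50_000  # USD/year, assumed fully loaded cost of one analyst
AGENT_COST = 10_000    # USD/year, the post's hypothetical agent price
SPEEDUP = 1.20         # the agent completes tasks 20% faster

cost_reduction = 1 - AGENT_COST / ANALYST_COST
# Cost of buying one analyst's worth of yearly output from the agent:
agent_cost_per_analyst_output = AGENT_COST / SPEEDUP

print(f"Cost reduction per seat: {cost_reduction:.0%}")  # 80%
print(f"Agent cost per analyst-equivalent output: ${agent_cost_per_analyst_output:,.0f}")  # $8,333
```

The exact numbers don’t matter; the point is that once the agent’s cost per unit of output falls below the human line, the hiring decision flips.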

And it’s not just better – it’s more predictable. AI will remove the talent bottleneck by erasing the difficulty of finding, accurately judging, and hiring talent in any field. Turning to Rudolf:

If you want to convert money into results, the deepest problem you are likely to face is hiring the right talent. And that comes with several problems:

  1. It's often hard to judge talent, unless you yourself have considerable talent in the same domain. Therefore, if you try to find talent, you will often miss.
  2. Talent is rare (and credentialed talent even more so—and many actors can't afford to rely on any other kind, because of point 1), so there's just not very much of it going around.
  3. Even if you can locate the top talent, the top talent tends to be less amenable to being bought out by money than others.

AGI will not just be better than your analyst. It will be reliably better. You will know exactly how it performs, either before integrating it or shortly thereafter. You will be able to predict how much better it gets with each successive iteration. And within a few months or years of surpassing your analysts, it will surpass you at making strategic decisions for the company.

Maybe you really like your existing analysts and are skeptical of this new system. You integrate it as a trial, and within a year it’s outperforming all of them. In fact, keeping humans in the loop slows the system down and produces merely human results. Are you going to hire more analysts? No. Your future analyst classes are going to shrink wildly. And if your company hits hard times, you’ll remember that you can fire most of your staff and get better results.

With all this in mind, why the hell wouldn’t you fire your analysts? They are more expensive, worse at the job, and less reliable. Sure, Mike interviews well and is very nice to be around, but companies fire people their leadership personally likes all the time. And if your company doesn’t fire them, it will be crushed by the competition that does.

Do you know what else performs like this? Natural resources. I know what oil does, how much of it I need for a task that requires energy, and which kind of oil is best suited for my purpose. When I need gas for my car, I don’t have to interview or reference-check 10 gas stations and gamble on which one is most likely to get my car from point A to point B. All I need to do is pull in, confirm the type my car takes, and fill up the tank.

What oil did for energy, AGI will do for anything that requires intelligence. It will slot in easily, do the job reliably, and do it better than any of its predecessors (including you) ever could. Every actor – every company, every bureaucracy, every government – will be under competitive pressure to get humans out and their AI successors in. AGI will be domain-agnostic – the goal is not superhuman ability in one field, but in all of them. It will come for the programmer and the writer and the analyst and the CEO.

This is not hypothetical. We are starting to see pre-AGI systems shrink analyst classes, change personnel strategies, and trigger layoffs. Remember that today is the worst these systems will ever be. You should expect that they will become more capable as time goes on. As they get better, their impact on the labor market will grow rapidly. As Aschenbrenner says, “that doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.”

We are heading towards the default outcome, charted by the default incentives. What are those incentives, and what world will they create?

Defining the Intelligence Curse

The intelligence curse describes the incentives in a post-AGI economy that will drive powerful actors to invest in artificial intelligence instead of in humans. If AI can do your job cheaper and faster, there isn’t a reason to hire you. More importantly, there isn’t an economic reason to invest in your lifelong productivity, take care of you, or keep you around. We could produce unparalleled value with fully automated everything, but if the spoils are distributed the way they are in the worst rentier states, the result will not be prosperity for the masses.

A common rebuttal I’ve heard is that some jobs can never be automated because we will demand humans do them. I hear this a lot about teachers. I think most parents would strongly prefer a real, human teacher to watch their kids throughout the day. But this argument totally misses the bigger picture: it’s not that there won’t be a demand for teachers, it’s that there won’t be an incentive to fund schools. I can repeat this ad nauseam for anything that invests in regular people’s productive capacity, any luxury that relies on their surplus income, or any good that keeps them afloat.[5] By default, powerful actors won’t build things that employ humans or provide them resources, because they won’t have to.

Taxes will still be a relevant source of income for governments, but only taxes on corporations. Likewise, corporations will make money from their AI systems, not from the work people produce. The investments the developed world associates with a high quality of life – salaries, education, infrastructure, stable governance – will no longer provide a return. People won’t make powerful actors any money.

Where might the powerful actors get their money from instead?

States will earn money from corporate taxes. Companies that produce advanced AI systems, and companies that use them, will generate large revenues. As they grow, states will tax them more. In 2022, corporate taxes made up 11.5% of government revenue in the average OECD state – a sample of high-performing, diverse economies. In the US, it was only 6.5%. Like Norway, Saudi Arabia, and the Democratic Republic of the Congo, states will come to rely less on income taxes and more on taxes from AI companies and the other companies that enable powerful actors to accomplish goals. When state revenue breakdowns look more like those countries’ than the OECD average, you’ll know the intelligence curse has taken hold.
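
As a rough illustration of what watching that revenue breakdown could mean in practice, here is a hypothetical indicator sketch; the function name, inputs, and stylized numbers are my assumptions, not data from the post.

```python
# A hypothetical "rentier shift" indicator sketching the claim above: track
# how much of a state's revenue comes from a concentrated source (resource
# rents today, AI-sector corporate taxes tomorrow) versus broad-based taxes
# on people. The function and all numbers below are illustrative assumptions,
# not real fiscal data.

def rentier_shift(people_taxes: float, concentrated_taxes: float) -> float:
    """Positive values mean revenue flows mostly from the concentrated source."""
    total = people_taxes + concentrated_taxes
    return (concentrated_taxes - people_taxes) / total

print(rentier_shift(people_taxes=60, concentrated_taxes=12))  # about -0.67: diversified economy
print(rentier_shift(people_taxes=10, concentrated_taxes=70))  # 0.75: rentier-style state
```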

AI labs will make money by becoming the new rentiers. The stated goal of the AI labs is to build AGI. One of the labs is changing its corporate structure to ensure it can capitalize on it. Once they have a system that can do it all, do you think they’ll just give it away? They’ll become a horizontal layer of the economy, extracting rents from all economic activity by selling it to powerful actors who use it to replace their workers. Initially, some wrappers might make money by scaffolding agents to work better in specific verticals (this is already happening). Don’t expect this to last – remember, the goal is to do everything. This will make the labs a significant percentage of total global GDP, enabling them to wield economic power that was previously exclusive to states.

Companies will trade amongst themselves and other powerful actors. Land, energy, compute, manufacturing hubs, data centers, and many more things that exist in the physical world and enable actors to accomplish goals will have value. The cafe chain and the marketing firm will be irrelevant, but the landlord and energy company will be able to make more money than ever before. Powerful actors, likely human-controlled (at least for a while), will extract the vast majority of value from these sources.

One place where the intelligence curse differs from the resource curse is the long-term incentive to diversify. As I’ve already mentioned, the climate effects of oil and the rise of renewables that let any state produce energy have forced petrostates to search for new, diverse income streams, empowering their citizens in the process. This effect won’t map onto AI – each subsequent model will be more capable than the last and will likely be controlled by the same few actors. You also can’t “run out” of AI like you can of oil. You could exhaust compute capacity or existing energy supplies, but compute gets cheaper over time and energy is getting greener by the day. We won’t need to transition away from advanced AI the way petrostates will from oil – once we have it, it’s here to stay.

So what will happen to most regular people, assuming powerful actors follow the default trajectory? Show me the incentives, and I’ll show you the outcome:

  • Companies will be incentivized to fire them, and never hire new ones. Regular people won’t produce anything companies value. For a short time, companies might rely on them as consumers, but most consumer-facing companies will fizzle out as their demand base loses economic power.
  • States will be incentivized to decimate public funding. Remember, their revenue base will shift toward other powerful actors. States will derive no value from citizens’ labor, so they are incentivized against building the things that turn citizens into productive workers. ROI – capital, power, and resilience – will come from ensuring the AI labs can build better models and the companies using them can do things in the world. Moreover, the taxes to fund human investment would come in large part from AGI labs, and competition between states means that any state that funds a UBI with such a tax risks falling behind the others on AGI.
  • Regular people won’t have the resources to support themselves or each other. The vast majority of people will not have the economic power necessary to make demands. They won’t be able to incentivize resource-controlling actors to invest in them. That means, at best, they’ll struggle to fulfill their basic needs or depend on the benevolent charity of powerful actors.

For a while, regular people might still generate some value. Rentier states require some humans to move things in the physical world – someone has to get the oil out of the ground. Humans might be paid for manual labor while agents remain limited to virtual forms. But as robotics improves[6], the need for human labor will shrink. People won’t be able to participate in the economy because they won’t be able to do anything better, faster, cheaper, or more reliably than their artificial replacements.

In rentier states and colonial states,[7] value derives primarily from raw materials or physical goods, which are sold to foreign buyers – usually other states or businesses. A few humans are involved in the raw production or management of this, but most don’t benefit. You should expect a similar scenario here. This leads to an obvious question: who will powerful actors be producing anything for?

Powerful actors have goals, and production will serve them. States want control over territory; companies want to enrich their owners. Individuals who have accrued significant capital will have goals too. Maybe they’ll want to use their newfound power to colonize Mars or excavate the oceans. It could be less historic – plenty of ultra-wealthy people are content to live their lives maximizing their own pleasure. All of them will want to ensure their newfound place in society is secure, and that could require vast amounts of power and resources. Without regular people in the value loop, there is no incentive for the spoils to go to them.

Even if humans at the very top of the pyramid remain relevant, the ability for new actors to enter the equation will be frozen. An actor will have power because they had it before the intelligence curse took hold or were well-positioned to capitalize on it as it began.

This sounds a lot like feudal economies. Rudolf makes the comparison aptly:

In a worse case, AI trillionaires have near-unlimited and unchecked power, and there's a permanent aristocracy that was locked in based on how much capital they had at the time of labour-replacing AI. The power disparities between classes might make modern people shiver, much like modern people consider feudal status hierarchies grotesque. But don't worry—much like the feudal underclass mostly accepted their world order due to their culture even without superhumanly persuasive AIs around, the future underclass will too.

To recap: the intelligence curse will create rentier state-style incentives at scale and without their typical restraints. When people are not economically relevant, powerful actors will by default not invest in them. Without intervention, the default outcome looks like the worst rentier states – a few extraordinarily wealthy players, mass poverty for everyone else, held in a stable equilibrium. A small number of post-AGI elites will control all powerful actors, while everyone else struggles to meet their basic needs.

So people are working on this…right? Right?

The world is waiting on you

Most people are not taking this seriously. When a few friends and I got some of the world’s top experts to agree on the best ways to govern AI by 2030, our economic section asked governments to “consider bold, innovative policy ideas if we arrive at economic conditions that necessitate a more dramatic response.” That’s policy-speak for “we have no idea what to do and need some smart people to think about it.”

We are going to have to break the culture of mass denial fueled by indefinite optimism.[8] Wishful thinking dominates the conversation. Some of it is motivated by a sense of self-importance: many people believe their own job is so special that it will be automation-proof forever, so why should they care?

Two conversations stick out to me:

First, over a year ago I had a conversation with a senior person in AI policy. When I brought up the idea that automation might make people worse off, they dismissed the possibility of technological replacement as totally impossible. Why?

“We’ll have new jobs – maybe everyone will work in AI policy!”

I thought they were kidding. Further discussion proved they weren’t. Everyone thinks their job is safe – even the AI policy people.

Second, in a more recent conversation, I raised the concept of the intelligence curse. I hadn’t fully fleshed it out yet, but the response convinced me I needed to. This person, well-connected in the AI space, agreed that technological displacement was the most likely outcome of AGI, but believed it would default to utopia.

“We won’t need jobs – we’ll be free to self-actualize. We’ll pursue meaningful goals and write poetry.”

You do not get to utopian poetry writing by having faith that someone else will figure it out. You are not praying to God, you are praying to men more ignorant than you.

The AI safety community thinks it is immune from this because it has identified a deeply relevant problem – intent alignment – and is spending all of its energy trying to solve it. I agree with you! Intent alignment must be solved. There’s no way around it. But the safety community often sounds like the person predicting poetry parties. Aligned AGI and superintelligence do not equal utopia.[9] You are merely ensuring that the most powerful technology in human history is reliably controllable by the actors most afflicted by the intelligence curse. You can’t just plan for AGI – you have to plan for the day after.

For the few who see the intelligence curse for what it is, mass denial has been supplanted by indefinite pessimism.

A day after o3 dropped, I got a text from a software engineer who had refused to use Cursor because they didn’t believe it could possibly be better than them:

“Thoughts on o3? This is the first time I am starting to feel a little cooked”

Indefinite pessimism has made us think we’re “cooked” with no way out. “What is your p-doom?” is more common than “what is your solution?” 

If your reaction to the last year of progress has been paralyzed hopelessness, dust yourself off. The world is waiting on you – one of the few who sees what is coming – to do something about it. Hope is a prerequisite.

In my next post, I’ll build on this analysis to identify some ways we could break the intelligence curse, partly by looking at states that avoided the resource curse. I’m still working on the specifics, but I expect solutions to fall into two categories:

  1. Governance solutions. In healthy democracies, the ballot box could beat the intelligence curse. People could vote their way out. But our governments aren’t ready.
  2. Innovative solutions. Tech that increases human agency, fosters human ownership of AI systems or clusters of agents, or otherwise allows humans to remain economically relevant could incentivize powerful actors to continue investing in human capital.

This isn’t just a problem for a blog post. Governments should be forecasting AI capabilities and thinking through solutions to the intelligence curse right now. Think tanks need to start turning out policies designed to get us ready for a post-employment world. AI labs need to be critically examining their own incentives and building better internal governance structures to overcome them. Ambitious young people should start companies trying to design tech that will keep humans economically relevant and spread abundance, and VCs should start funding them. If you are well-positioned to contribute to solving this problem, what are you waiting for?

There are some problems that are impossible to solve – but there are no big problems that aren’t worth giving it everything we’ve got. I am more optimistic than I have ever been because naming the problem gives us something to solve.

Change the incentives, and you can change the outcome. The work starts today.

 

Thank you to Rudolf Laine, Josh Priest, Lysander Mawby, Jacob Pfau, Luca Gandrud, Bilal Chughtai, Nicholas Osaka, Stefan Arama, Joe Pollard, and Caleb Peppiatt for reviewing drafts of this post.

 

  1. ^

     If you disagree, I’d strongly encourage you to read this, this, this, this, and this (and watch this). You should also consider that it is the stated goal of OpenAI, Meta, and Google DeepMind, and it looks like that’s what Anthropic is aiming at. You should also know that the top recommendation from the Congressional US-China Commission in 2024 was for Congress to “establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability.”

  2. ^

     For more on this, see Chapter 7 of this book.

  3. ^

     Why a few rentier states like Oman and Norway become expansive welfare states (and what this means for the intelligence curse) will be the subject of a future post. Spoiler alert: Oman’s model won’t be a solution to the intelligence curse, but Norway’s might be.  

  4. ^

     For other evidence, see here, here, and here.

  5. ^

     If the next thing that pops into your head is “but what about comparative advantage?”, know that this section originally had a 1500 word takedown of that argument which was cut for length. That post is coming soon.

  6. ^

This is nine months old and was running on a much worse model than today’s state-of-the-art ones. Again, believe in straight lines.

  7. ^

One day I’ll write a post about how colonial states function a lot like rentier states. In both, extractive institutions generate wealth for a power that isn’t incentivized to care much about the people within its borders. Post-colonial states still suffer because, instead of extracting value for a foreign power, the same institutions are turned into value-extraction tools for the domestic political elite.

  8. ^

     Indefinite/Definite Optimism/Pessimism was first defined by Peter Thiel in Zero to One. For a summary of this concept, click here.

  9. ^

     An assumption underpinning this is that we either a) solve intent alignment before making sure that systems are aligned with human values, or b) abandon aligning systems with human values entirely, because powerful actors would rather not have machines that tell them no based on a moral compass the actor doesn’t agree with.

Comments

AGI looks more like a resource than a technology

That's a choice, though. AGI could, for example, look like a powerful actor in its own right, with its own completely nonhuman drives and priorities, and a total disinterest in being directed in the sort of way you'd normally associate with a "resource".

I agree with you! Intent alignment must be solved.

If by "intent alignment" you mean AGIs or ASIs taking orders from humans, and presumably specifically the humans who "own" them, or are in charge of the "powerful actors", or form some human social elite, then it seems as though your concerns very much argue that that's not the right kind of alignment to be going for.

The killer app for ASI is, and always has been, to have it take over the world and stop humans from screwing things up. That's incompatible with keeping humans in charge, which is what I think you mean by "intent alignment". But it's not necessarily incompatible with behavior that's good for humans. If you're going to take on the (very possibly insoluble) problem of "aligning" AI with something, maybe you should choose "value alignment" or "friendliness" or whatever. Pick a goal where your success doesn't directly cause obvious problems.

That's a choice, though. AGI could, for example, look like a powerful actor in its own right, with its own completely nonhuman drives and priorities, and a total disinterest in being directed in the sort of way you'd normally associate with a "resource".

My claim is that the incentives AGI creates are quite similar to the resource curse, not that it would literally behave like a resource. But:

If by "intent alignment" you mean AGIs or ASIs taking orders from humans, and presumably specifically the humans who "own" them, or are in charge of the "powerful actors", or form some human social elite, then it seems as though your concerns very much argue that that's not the right kind of alignment to be going for.

My default is that powerful actors will do their best to build systems that do what they ask them to do (ie they will not pursue aligning systems with human values).

The field points towards this: alignment efforts are primarily focused on controlling systems. I don't think this is inherently a bad thing, but it results in the incentives I'm concerned about. I've not seen great work on defining human values, creating a value set a system could follow, and forcing systems to follow it in a way that couldn't be overridden by their creators. Anthropic's Constitutional AI may be a counter-example.

The incentives point towards this as well. A system that is aligned to refuse efforts that could lead to resource/power/capital concentration would be difficult to sell to the corporations likely to pursue this.

These (here, here, and here) definitions are roughly what I am describing as intent alignment.

But why would the people who are currently in charge of AI labs want to do that, when they could stay in charge and become god-kings instead?

Well, yeah. But there are reasons why they could. Suppose you're them...

  1. Maybe you see a "FOOM" coming soon. You're not God-King yet, so you can't stop it. If you try to slow it down, others, unaligned with you, will just FOOM first. The present state of research gives you two choices for your FOOM: (a) try for friendly AI, or (b) get paperclipped. You assign very low utility to being paperclipped. So you go for friendly AI. Ceteris paribus, your having this choice becomes more likely if research in general is going toward friendliness and less likely if research in general is going toward intent alignment.

  2. Maybe you're afraid of what being God-King would turn you into, or you fear making some embarrassingly stupid decision that switches you to the "paperclip" track, or you think having to be God-King would be a drag, or you're morally opposed, or all of the above. Most people will go wrong eventually if given unlimited power, but that doesn't mean they can't stay non-wrong long enough to voluntarily give up that power for whatever reason. I personally would see myself on this track. Unfortunately I suspect that the barriers to being in charge of a "lab" select against it, though. And I think it's also less likely if the prospective "God-King" is actually a group rather than an individual.

  3. Maybe you're forced, or not "in charge" any more, because there's a torches-and-pitchforks-wielding mob or an enlightened democratic government or whatever. It could happen.

I've held this view for years and am even more pessimistic than you :-/

In healthy democracies, the ballot box could beat the intelligence curse. People could vote their way out.

Unfortunately, democracy itself depends on the economic and military relevance of masses of people. If that goes away, the iceberg will flip and the equilibrium system of government won't be democracy.

Tech that increases human agency, fosters human ownership of AI systems or clusters of agents, or otherwise allows humans to remain economically relevant

It seems really hard to think of any examples of such tech.

Unfortunately, democracy itself depends on the economic and military relevance of masses of people. If that goes away, the iceberg will flip and the equilibrium system of government won't be democracy.

Agreed. The rich and powerful could pick off more and more economically irrelevant classes while promising the remaining ones the same won't happen to them, until eventually they can get everything they need from AI and live in enclaves protected by vast drone armies. Pretty bleak, but seems like the default scenario given the current incentives.

It seems really hard to think of any examples of such tech.

I think you would effectively have to build extensions to people's neocortexes in such a way that those extensions cannot ever function on their own. Building AI agents is clearly not that.

Excellent post. This puts into words really well some thoughts that I have had.

I would also like to make an additional point: it seems to me that a lot of people (perhaps less so on LessWrong) hold the view that humanity has somehow “escaped” the process of evolution by natural selection, since we can choose to do a variety of things that our genes do not “want”, such as having non-reproductive sex. This is wrong. Evolution by natural selection is inescapable. When resources are relatively abundant, which is currently true for many Western nations, it can seem that it’s escapable because the selection pressures are relatively low and we can thus afford to spend resources somewhat frivolously. Since resources are not infinitely abundant, over time those selection pressures will increase. Those selection pressures will select out unproductive elements.

This means that even if we managed to get alignment right and form a utopia where everybody gets everything they need or more, people will eventually still be discarded because they cannot produce anything of economic value. In your post, capitalist incentives effectively play the role of natural selection, but even if we converted to a communist utopia, the result would ultimately be the same once selection pressures increase sufficiently, and they will.

Evolution by natural selection is inescapable.

Entities that reproduce with mutation will evolve under selection. I'm not so sure about the "natural" part. If AI takes over and starts breeding humans for long floppy ears, is that selection natural?

Bear in mind that in that scenario the AIs may not choose to let the humans breed to anywhere near the limits of the available resources no matter how good their ears are. If there's resource competition, it may be among the AIs themselves (assuming there's more than one AI running to begin with).

But there won't necessarily be more than one AI, at least not in the sense of multiple entities that may be pursuing different goals or reproducing independently. And even if there are, they won't necessarily reproduce by copying with mutation, or at least not with mutation that's not totally under their control with all the implications understood in advance. They may very well be able to prevent evolution from taking hold among themselves. Evolution is optional for them. So you can't be sure that they'll expand to the limits of the available resources.

Entities that reproduce with mutation will evolve under selection. I'm not so sure about the "natural" part. If AI takes over and starts breeding humans for long floppy ears, is that selection natural?

In some sense, all selection is natural, since everything is part of nature, but an AI that breeds humans for some trait can reasonably be called artificial selection (and mesa-optimization). If such a breeding program happened to allow the system to survive, nature selects for it. If not, it tautologically doesn’t. In any case, natural selection still applies.

But there won't necessarily be more than one AI, at least not in the sense of multiple entities that may be pursuing different goals or reproducing independently. And even if there are, they won't necessarily reproduce by copying with mutation, or at least not with mutation that's not totally under their control with all the implications understood in advance. They may very well be able to prevent evolution from taking hold among themselves. Evolution is optional for them. So you can't be sure that they'll expand to the limits of the available resources.

In a chaotic and unpredictable universe such as ours, survival is virtually impossible without differential adaptation and not guaranteed even with it. (See my reply to lukedrago below.)

Glad you enjoyed it! 

Could you elaborate on your last paragraph? Presuming a state overrides its economic incentives (ie establishes a robust post-AGI welfare system), I'd like to see how you think the selection pressures would take hold.

For what it's worth, I don't think "utopian communism" and/or a world without human agency are good. I concur with Rudolf entirely here -- those outcomes miss agency, which has so far been a core part of the human experience. I want dynamism to exist, though I'm still working out if/how I think we could achieve that. I'll save that for a future post.

I don't know how selection pressures would take hold exactly, but it seems to me that in order to prevent selection pressures, there would have to be complete and indefinite control over the environment. This is not possible because the universe is largely computationally irreducible and chaotic. Eventually, something surprising will occur which an existing system will not survive. Diverse ecosystems are robust to this to some extent, but that requires competition, which in turn creates selection pressures.

I encourage you to change the title of the post to "The Intelligence Resource Curse" so that, in the very name, it echoes the well known concept of "The Resource Curse".

Lots of people might only learn about "the resource curse" from being exposed to "the AI-as-capital-investment version of it" as the AI-version-of-it becomes politically salient due to AI overturning almost literally everything that everyone has been relying on in the economy and ecology of Earth over the next 10 years.

Many of those people will bounce off the concept the first time they hear it if they only hear "The Intelligence Curse", because it will pattern-match to something they think they already understand: the way that smart people (past a certain amount of smartness) seem to be cursed to unhappiness and failure because they are surrounded by morons they can barely get along with.

The two issues that "The Intelligence Curse" could naively be a name for are distinguished from each other if you tack on the two extra syllables and regularly say "The Intelligence Resource Curse" instead :-)

I appreciate this concern, but I disagree. An incognito google search of "intelligence curse" didn't yield anything using this phrase on the front page except for this LessWrong post. Adding quotes around it or searching for the full phrase ("the intelligence curse") showed this post as the first result. 

A quick Twitter search shows the phrase "the intelligence curse" appearing before this post:

  • In 24 tweets in total
  • With the most recent tweet on Dec 21, 2024
  • Before that, in a tweet from August 30, 2023
  • In 10 tweets since 2020
  • And all other mentions pre-2015

In short, I don't think this is a common phrase and expect that this would be the most understood usage. 

I agree that this could become a popular phrase because of future political salience. But I expect the idea that being intelligent is a curse would not be confused with this any more than the idea that having resources is a curse (wealthy people being unhappy) gets confused with the resource curse.

I think "the intelligence resource curse" would be hard for people to remember. I'm open to considering different names that are catchy or easy to remember.

I agree that the intelligence curse isn't a common phrase. But I think the intelligence resource curse is more memorable because it encapsulates the whole idea.

Maybe I'm missing something important, but I think AGI won't be much like a resource, and I also don't think we'll see rentier entities. I'm not saying it will be better, though.

The key thing about oil or coal is that it's already there, you roughly know how much it's worth and this value won't change much whatever you do (or don't do). With AI this is different, because all the time you'll have many competitors trying to create a new AI that is either stronger or cheaper than yours. It's not that the deeper you dig the more & better oil you get.

So you can't really become a rentier – you must spend all your resources on charging forward, because if you don't, you'll be left behind forever. If we assume there's no single entity that takes the lead and restricts competition forever, this might lead to some version of an ascended economy. That's probably even worse for the average human: an AGI rentier won't care about you, but it also won't care much about your small field full of potatoes that will hopefully let you survive one more winter.

Once again, a post here has put into well-researched, organized words what I have tried to express verbally with friends and family. You have my most sincere gratitude for that.

I've been sending this around to the aforementioned friends and family, and I am again surprised by how receptive people are. I really do think that our best course of action is to talk to & prime people for the roller coaster to come – massive public sentiment against AI (e.g. a Butlerian Jihad) is our best bet.

In my post, A Path to Human Autonomy I argue that the only stable equilibrium in the long term (decades) is for at least some humans to undergo intelligence augmentation. Furthermore, the augmentation trajectory leads inevitably towards fully substrate-independent digital people.

There are many paths we might take to get there, but I'm pretty sure it is that or civilizational collapse we are facing before the end of the century.

Superpowerful AI singleton guarding humanity could save us from destroying ourselves, but if we end human progress there, then this is also ending human autonomy.

I think if you really spend some time imagining a world where the AGI is smarter and more physically powerful than any human, and gets smarter and more powerful every year... You realize that "better democratic control of intent-aligned AGI" is a temporary solution at best.

I claim that anything that's undergone that much intelligence augmentation can't reasonably be called "human".

Perhaps "human autonomy" isn't the right goal?

[Deleted a previous comment that misunderstood this as a reply to mine above]

Ok. I'd agree to "transhuman". I will say that that seems meaningfully different to me than a very alien AI, with very different values, going rogue and colonizing the lightcone.

Edit: I think it would make a lot of sense if Earth were considered sort of a nature preserve or like, tribal reservation, for the "vanilla humans", and space was the domain of transhumans, digital people, and AI.

Thank you for this post. You very elegantly laid out a scenario that has been swirling in my head, and I think I updated the probabilities of this scenario as more likely after reading your post.

Your post also strengthened my desire to pivot from finance to policy. I need to figure out how to do this. I find it paramount that more people (both in power and in the general populace) understand the possibility of massive economic disruption.

I look forward to the follow-ups of this post.

I think your point has some merit in the world where AI is useful and intelligent enough to overcome the sticky social pressure to employ humans but hasn't killed us all yet. That said, I think AI will most likely kill us all in that 1-5 year window after becoming cheaper, faster, and more reliable than humans at most economic activity, and you'd have to convince me that I'm wrong about that before I start worrying about humans not hiring me because AI is smarter than I am. However, I want to complain about this particular point you made, because I don't think it's literally true:

Powerful actors don’t care about you out of the goodness of their heart.

One of the reasons why AI alignment is harder than people think is that they say stuff like this and assume AI won't care about people in the same way that powerful actors don't care about people. This is generally not true. You cannot in general pay a legislator $400 to kill a person who pays no taxes and doesn't vote. That is impressive when you think about it. You can argue that they fear reputational damage or going to prison, but I truly think that if you took away the consequences, $400 would not be enough money to make most legislators overcome their distaste for killing another human being with their bare hands. Some of them really truly want to make society better, even if they aren't very effective at it. Call it noblesse oblige if you want, but it's in their utility function to do things which aren't just giving the state more money or gaining more personal power. The people who steer large organizations have goodness in their hearts, however little, and thus the organizations they steer do too, even if only a little. Moloch hasn't won yet. America the state is willing to let a lot of elderly people rot, but America wasn't in fact willing to let Covid rip, even though that might have stopped the collapse of many tax-generating businesses, and most people who generate taxes would have survived. I don't think that's because the elderly people who overwhelmingly would have been killed by that are an important voting constituency for the party which pushed hardest for lockdown.

AI which knows it won't get caught, and which literally only cares about tax revenue and power, will absolutely kill anyone who isn't useful to it for $400. That's $399 worth of power it didn't have before if killing someone costs $1 of attention. I don't particularly want to live in a world where 1% of people are very wealthy and everyone else is dying of poverty because they've been replaced by AI, but that's a better world than the one I expect, where literally every human is killed because, for example, those so-called "reliable" AIs doing all of the work humans used to do as of yesterday like paperclips more than we thought and start making them today.

You cannot in general pay a legislator $400 to kill a person who pays no taxes and doesn't vote.

Indeed not directly, but as the inferential distance increases it quickly becomes more palatable. For example, most people would rather buy a $5 T-shirt made by a child for starvation wages on the other side of the world than a $100 T-shirt made locally by someone who can afford to buy a house on their salary. And many of those same T-shirt buyers would bury their heads in the sand when made aware of such a fact.

If I can tell an AI to increase profits, incidentally causing the AI to ultimately kill a bunch of people, I can at least claim a clean conscience by saying that wasn't what I intended, even though it happened just the same.

In practice, legislators do this sort of thing routinely. They pass legislation that causes harm—sometimes a lot of harm—and sleep soundly.

I agree. To add an example: the US government's 2021 expanded child tax credit lifted 3.7 million children out of poverty, a near 50% reduction. Moreover, according to the NBER's initial assessment: "First, payments strongly reduced food insufficiency: the initial payments led to a 7.5 percentage point (25 percent) decline in food insufficiency among low-income households with children. Second, the effects on food insufficiency are concentrated among families with 2019 pre-tax incomes below $35,000". 

Despite this, Congress failed to renew the program. Predictably, child poverty spiked the following year. I don't have an estimate for how many lives this cost, but it's greater than zero.

I have been thinking a lot about a similar threat model. It seems like we really ought to spend more resources thinking about economic consequences of advanced AI.

Governance solutions. In healthy democracies, the ballot box could beat the intelligence curse. People could vote their way out. But our governments aren’t ready.

Innovative solutions. Tech that increases human agency, fosters human ownership of AI systems or clusters of agents, or otherwise allows humans to remain economically relevant could incentivize powerful actors to continue investing in human capital.

Really looking forward to reading your proposed solutions to the intelligence curse.

energy is getting greener by the day.

source?