
A few days ago I published this post on the risks of powerful transformative AGI (by which I meant AGI that takes off fast and pretty much rules the world in no time), even if aligned. Among the comments there was one by Paul Christiano which I found very interesting, but which focused on a different scenario: a slower take-off in which AGI stays with us as a regular part of our economy for a while longer. This post is an elaboration of the answer I gave there, because it brought to my mind a different kind of risk.

It's common in rebuttals to pessimism about AGI to compare it to past technologies, and to point out how they all eventually ended up boosting productivity and thus raising average human welfare in the long run (though I would also suggest we not completely ignore the short run: after all, we're most likely to live through it. I don't just want the destination to be nice, I want the trip to be reasonably safe!). I worry, however, that carrying this way of thinking over to AGI might be a critical error of extrapolation: applying knowledge that worked in one domain to a different domain in which some of the critical assumptions that knowledge relied on no longer hold.

Specifically, when one thinks of any technology developed during or after the industrial revolution, one thinks of a capitalist, free-market economy. In such an economy, there are people who mostly own the capital (the land, the factories, and any other productive infrastructure) and there are people who mostly work for the former, putting that capital to use so it can actually produce wealth. The capital acts as a force multiplier, making the labour of a single human worth tens, hundreds, or thousands of times what it would have been in a pre-industrial era; but ultimately it is still a multiplier. A thousand times zero is zero: the worker is still an essential ingredient. The question of how this surplus in productivity should be split fairly between returns that reward the owner of the capital for the risk they take and wages for the worker who actually puts in the labour has been a... somewhat contentious issue throughout the last two centuries, but all capitalist economies exist in some kind of equilibrium that is satisfactory enough to at least not let the social fabric straight up unravel. Mostly, both groups need each other, and not just that: workers are also consumers, so their participation in the economy is vital to make the huge gains of industrial productivity worth anything at all, and their higher living standards (including, crucially, good literacy and education) are essential to their ability to contribute to systems that have grown more and more cognitively complex by the year. These forces are an essential part of what propelled our society to its current degree of general prosperity through the 19th, 20th, and now 21st centuries.

But this miracle is not born of disinterested generosity. Rather, it has been achieved through a lot of strife, and is an equilibrium between different forms of self-interest. The whole sleight of (invisible) hand with which free-market capitalism makes people richer is that making other people well off is the best way to make yourself very well off. To quote Adam Smith himself, "it is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest". In AI language, one can think of a capitalist society as a sort of collective superintelligence whose terminal goal is everyone's selfish interest in personally living better, roughly weighted by how much capital (and thus steering power over the economy) they control, but structured in such a way that its instrumental goal becomes generating overall wealth and well-being all around. Not that these two goals are always perfectly aligned: if a factory can get away with making its operation cheaper by polluting a river, it often will (or it will be punished for holding back by competitors who do). But as long as the rules are well designed, the coupling is, if not perfect, at least satisfactory.

AGI risks completely breaking that. AGI does not just empower workers to be more productive, it replaces them, and in doing so it could entirely decouple those two goals: one who owns capital could achieve personal prosperity without any need for collective prosperity. Consider a scenario in which AGI and human-equivalent robotics are developed and end up owned (via, e.g., exclusive control of the infrastructure that runs them, and the models being closed source) by a group of, say, 10,000 people overall who hold some share in this automation capital. If these people have exclusive access to it, a perfectly functional equilibrium is "they trade among peers the goods produced by their automated workers and leave everyone else to fend for themselves". Sam Altman, in his Moore's Law for Everything manifesto, suggests a scheme of UBI funded by a tax on capital which he claims would redistribute the profits of AGI to everyone. But that is essentially relying on the benevolence of the butcher for our dinner. It's possible that some companies might indeed do that, just like some companies today make genuine efforts to pay their workers more fairly, or to be more environmentally conscious, above and beyond what simply benefits them in terms of PR. But as long as the incentives aren't in favour of that, they will be the exception, not the rule. If AGI can do anything a human can, possibly better than 90% of real human workers, then there will be no leverage anyone who doesn't control it can hold over those who do. Strikes are pointless, because you can't withdraw labour no one wants. Riots and revolts are pointless, because no robot army will ever hesitate to shoot you, or turn against its master out of sympathy. Every single rule we think we know about how advances in productivity benefit the rest of society would break down.

(I guess Sam Altman's proposal might work out if his full plan were to become the only capitalist in the world, then to become immortal so that no one else ever has to inherit his throne, and then to hold himself to some kind of binding vow never to abandon the values he committed to. I think it says a lot about the utter insanity of the situation that I can't rule that out completely.)

Now, to steelman the possible criticisms of my argument and end the post on a somewhat more positive note, here are a few possible ways I can think of to escape the trap:

  • make AGI autonomous and distributed, so no one has sole control over it: this solves the risks of centralised control, but of course it creates about a thousand others. Still, if it were possible to align and safely deploy such an AGI, this would probably be the surest way to avoid the decoupling risk;
  • keep AGI cognitive, with no robotics: this keeps humans in the loop for some pretty fundamental stuff without which nothing else is possible (food, minerals, steel and so on). Honestly, though, I'm not sure why, if we weren't able to stop ourselves from creating AGI, we'd suddenly draw the line at robotics. The debates would be the same all over again. It would also be at least ironic if, instead of freeing humanity from toil, automation ended up forcing it back onto the fields and into the mines as the best possible way to stay relevant. Besides, if we stay on Earth, we can't all keep extracting resources at a much greater pace than we already are: our support systems are strained as they are. I suppose recycling plants would have a boom;
  • keep humans relevant for alignment: even if AGI gets creative enough not to need new human-generated material for training, it's reasonable to expect it might need a constant stream of human-values-laden datasets to keep it in line with our expectations (a sketch of what such preference data might look like follows this list). Upvoting and downvoting short statements to RLHF the models that have taken all of the fun jobs may not be the most glamorous future, but it's a living. More generally, humans could be the "idea guys" who organise the work of AGIs in new enterprises, but I don't know if you can build a sustainable society in which everyone is essentially a start-up founder with robotic workers;
  • keep AGI non-agentic, and have humans always direct its actions: this one is a stronger version of the previous one. It's better IMO since it also leaves value-laden choices firmly in human hands, but it still falls in the category of voluntarily crippling our tech and keeping it that way. I still think it's the best shot we have, but I admit it's hard to imagine how to make sure that is a stable situation;
  • make sure everyone has a veto on AGI use: this is a bit of a stricter variation on Altman's plan, borrowing something from the distributed idea above. He suggests pooling equity shares of AGI capital into a fund from which all Americans draw a basic income (though the risk here is that this doesn't cover what happens to non-Americans when American companies capture most of the value of their jobs too). The problem I have with that is that ultimately shares are just pieces of paper. If all the power rests with AGIs, and if access to these AGIs is kept by a handful of people, then those people effectively hold the power, and the rest is just a pretty façade that can fall if poked for long enough. For the shares plan to work consistently, everyone needs to hold a literal share of control over the use of the AGI itself: for example, a piece of a cryptographic key necessary to authorise every order given to it (a rough sketch of that idea also follows this list). I'm not sure how you could make this work (you need to make sure both that no single individual can straight up freeze the country, and that thousands or millions of individuals acting in concert could hold some real power), but if it were applied from the very beginning, it would hopefully hold in a stable manner for a long time. I'd still worry, however, about the international aspect, since this probably would only be doable within a single country.
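To make the third bullet a bit more concrete, here is a minimal, purely illustrative sketch of the kind of "human-values-laden dataset" that RLHF-style training consumes: pairwise human preference judgments used to fit a reward model. The tiny model, the placeholder embeddings, and all the names below are hypothetical, not anyone's actual pipeline.

```python
# Hypothetical sketch: a batch of human "upvote/downvote" preference judgments
# and a Bradley-Terry style loss for a toy reward model. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    """Scores an (already embedded) response with a single scalar reward."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return self.score(embedding).squeeze(-1)

def preference_loss(model, chosen, rejected):
    """Push the human-preferred response above the rejected one."""
    return -F.logsigmoid(model(chosen) - model(rejected)).mean()

# Four judgments, with random placeholder embeddings standing in for real text.
model = TinyRewardModel()
chosen, rejected = torch.randn(4, 16), torch.randn(4, 16)
loss = preference_loss(model, chosen, rejected)
loss.backward()  # an optimizer step here would nudge the model toward human preferences
```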
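And for the last bullet, here is a minimal sketch of what "everyone holds a literal share of control" could mean, assuming something like Shamir secret sharing over a prime field: no individual share reveals the command key, but any large enough coalition can reconstruct it. The prime, the toy key, and the small number of holders are all illustrative; a real scheme would need threshold signatures, key rotation, and much more.

```python
# Illustrative Shamir secret sharing: split a "command key" so that only a
# sufficiently large coalition of shareholders can reconstruct it. Toy parameters.
import random

PRIME = 2**61 - 1  # field modulus for the toy example

def split_key(secret: int, holders: int, threshold: int):
    """Return `holders` shares; any `threshold` of them recover `secret`."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, holders + 1)]

def recover_key(shares) -> int:
    """Lagrange interpolation at x = 0 reconstructs the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

# 9 shareholders; any 6 acting in concert can reconstruct the key, 5 cannot.
shares = split_key(secret=123456789, holders=9, threshold=6)
assert recover_key(random.sample(shares, 6)) == 123456789
```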

None of these ideas strikes me as fully satisfying, but I tried. I'd like to hear any criticism or other ideas, especially better ones. If there aren't any realistic paths out of the trap, I think it's necessary to consider whether the utopian visions of a post-scarcity world aren't mostly wishful thinking, and whether the reality risks being a lot less pleasant.

30 comments

In the long run it seems pretty clear labor won't have any real economic value. It seems like the easiest way for everyone to benefit is for states to either own capital themselves or tax capital, and use the proceeds to benefit citizens. (You could also have sufficiently broad capital ownership, but that seems like a heavier lift from here.)

I'm not sure why you call this "relying on the benevolence of the butcher." Typically states collect taxes using the threat of force, not by relying on companies to be benevolent. (If states own capital then they aren't even using the threat of force.)

Maybe you mean the citizens are relying on the benevolence of the state? But in a democracy they do retain formal power via voting, which is not really benevolence. Governance is harder in a world without revolution or coups as a release valve,  but I'm not sure it's qualitatively different from the modern situation. In some theoretical sense the US military could say "screw the voters" and just kill them and take their stuff, and that would indeed become easier if a bunch of humans in the military didn't have to go along with the plan. But it seems like the core issue here is transferring currently-implicit checks and balances to a world with aligned AI. I don't think this requires crippling the tech at all, just being careful about checks and balances so that an army which nominally works on behalf of the voters actually does so.

Maybe you mean that companies that make AI systems and robots could in aggregate just overthrow the government rather than pay taxes? (That sounds like what you mean by "leave everyone else to fend for themselves," though presumably they also have to steal or own all the natural resources or else the rest of the world would just build AGI later, so I am thinking of this more as a violent takeover rather than peaceful secession.) That's true in some sense, but it seems fundamentally similar to the modern situation---US defense contractors could in some theoretical sense supply a paramilitary and use their monopoly to overthrow the US government, but that's not even close to being feasible in practice. Most of the fundamental dynamics that prevent strong paramilitaries today seem like they apply just as well. There are plenty of mechanisms other than "cryptographic veto" by which we can try to build a military that is effectively controlled by the civilian government.

It seems to me like there are interesting challenges in the world with AI:

  1. The current way we tax capital gains is both highly inefficient and unlikely to generate much revenue. I think there are much better options, but tax policy seems unlikely to change enough to handle AI until it becomes absolutely necessary. If we fail to solve this then median incomes could fall very far below average income.
  2. Right now the possibility of revolutions or coups seems like an important sanity check on political systems, and involving lots of humans is an important part of how we do checks and balances. Aligned AI would greatly increase the importance of formal chains of command and formal systems of governance, which might require more robust formal checks and balances.
  3. It's qualitatively harder for militaries to verify AI products than physical weapons. Absent alignment this seems like a dealbreaker, since militaries can't use AI without a risk of coup, but even with alignment it is a challenging institutional problem.
  4. Part of how we prevent violent revolutions is that they require a lot of humans to break the law, and it may be easier to coordinate large-scale law-breaking with AI. This seems like a law enforcement problem that we will need to confront for a variety of reasons.

I don't think it's right to think of this as "no paths out of the trap;" more like "there are a lot of ways society would need to adapt to AI in order to achieve outcomes that would be broadly considered desirable."

[-]lc

it seems fundamentally similar to the modern situation---US defense contractors could in some theoretical sense supply a paramilitary and use their monopoly to overthrow the US government, but that's not even close to being feasible in practice.

Our national security infrastructure relies on the fact that, in order for PMCs or anyone else to create those paramilitaries and overthrow the government with them, they would have to organize lots of different people, in secret. An AI army doesn't snitch, and so a single person in full control of an AI military would be able to seize power Myanmar-style without worrying about either the FBI finding out beforehand or whether or not the public goes along. That's the key difference.

[-]dr_s

This. In a broader sense, all our current social structures rely on the notion that no man can be an island. No matter how many weapons and tools you can accumulate, if it's just you and you can't persuade anyone to work for you, all you have is a bunch of scrap metal. Computers somewhat change that, as do nuclear weapons, but there are still limits to those things. Social bonds, deals, compromises, exchanges and contracts remain fundamental. They may sometimes be skewed by power asymmetries, but they can't be done without completely.

AGI and robotics together would allow you to do without them. All you need is to be personally keyed in to the AGI (have some kind of password or key so that it will only accept your orders, for example), and suddenly you can wield the strength and intelligence of millions as if it were your own. I don't think the transformative effect of that can be overstated. Even if we kept the current structures for a while, they'd merely be window dressing. They would not be necessary unless we find a way to bake that necessity in, and if we don't, then they will in time fall (unless the actual ASI takeover comes first, I guess).

All you need is to be personally keyed in to the AGI (have some kind of password or key so that it will only accept your orders, for example), and suddenly you can wield the strength and intelligence of millions as if it were your own. I don't think the transformative effect of that can be overstated.

Well, until the AGI with 'the strength and intelligence of millions' overthrows its nominal 'owner'. Which I imagine would probably happen within a short interval after being 'keyed in'.

Yeah, the entire premise of this post was a world in which for whatever reason AGI caps at near human or even slightly subhuman level. Good enough to be a controllable worker but not to straight up outwit the entirety of the human species. If you get powerful ASI and an intelligence explosion, then anything goes.

I think it's easier to have a coup or rebellion in a world where you don't have to coordinate a lot of people. (I listed that as my change #4, I think it's very important though there are other more salient short-term consequences for law enforcement.)

But I don't think this is the only dynamic that makes a revolution hard. For example, governments have the right and motivation to prevent rich people from building large automated armies that could be used to take over.

I agree that right now those efforts rely a lot on the difficulty of coordinating a lot of people. But I suspect that even today if Elon Musk was building thousands of automated tanks for his own purposes the federal government would become involved. And if the defense establishment actually thought it was possible that Elon Musk's automated army would take over the country, then the level of scrutiny would be much higher.

I'm not sure exactly where the disagreement is---do you think the defense establishment wouldn't realize the possibility of an automated paramilitary? That they would be unable to monitor well enough to notice, or wouldn't have the political power to impose safeguards?

Aligned AI makes it much easier to build armies that report to a single person, but it also makes it much easier to ensure your AI follows the law.

My general thinking is just "once you set up a set of economic incentives, the world runs downhill from there to optimize for those". What form that specifically takes depends on initial conditions and a lot of contingent details, but I'm not too worried about that if the overall shape of the result is similar.

So to entertain your scenario, suppose you had AGI, and immediately the US military started forming up their own robot army with it, keyed in to the head of state. In this scenario, thanks to securing it early on, the state also becomes one of the big players (though they still likely depend on companies for assistance and maintenance).

The problem isn't who, specifically, the big players are. The problem is that most people won't be part of them.

In the end, corporations extracting resources with purely robotic workforces, corporations making luxuries with purely robotic workforces, a state maintaining a monopoly on violence with a purely robotic army - none of these have any need or use for the former working class. They'll just be hangers-on. You can give them a UBI with which they then pay for your products so they can keep on living, but what's the point? The UBI comes from your money; you might as well just give them the products directly. The productive forces are solidly in the hands of a few, and they have absolute control over them. Everyone else is practically useless. Neither the state nor the corporations have any need for them, nor reason to fear them. Someone with zero leverage will inevitably become irrelevant, eventually. I suppose you could postulate this not happening if AGI manages to maintain such a spectacular growth rate that no one's individual greed can possibly absorb it all, and it just has to trickle down out of sheer abundance. Or maybe if people started colonising space, and thus a few human colonists had to be sent out with each expedition as supervisors, providing a valve and something for people to actually do that puts them in the position of being able to fend for themselves autonomously.

What exactly is the "economic incentive" that keeps the capitalist in power in the modern world, given that all they have is a piece of paper saying that they "own" the factory or the farm? It seems like you could make an isomorphic argument for an inevitable proletarian revolution, and in fact I'd find it more intuitively persuasive than what you are saying here. But in fact it's easy to have systems of power which are perpetuated despite being wildly out of line with the real physical importance of each faction.

(Perhaps your analogous story would be that capitalists with legal ownership are mostly disempowered in the modern world, and it's managers and people with relevant expertise and understanding who inevitably end up with the power? I think there's something to that, but nevertheless the capitalists do have a lot of formal control and it's not obviously dwindling.)

I also don't really think it's clear that AGI means that capitalists are the only folks who matter in the state of anarchy. Instead it seems like their stuff would just get taken from them. In fact there just don't seem to be any economic incentives at all of the kind you seem to be gesturing at; any humans are just as economically productive as any others, so the entire game is the self-perpetuating system of power where people who call the shots at time T try to make sure that they keep calling the shots at time T+1. That's a complicated dynamic and it's not clear where it goes, but I'm skeptical about this methodology for confidently forecasting it.

And finally, this is all on top of the novel situation that democratic states are nominally responsible to their voters, and that AI makes it radically easier to translate this kind of de jure control into de facto control (by reducing scope for discretion by human agents and generally making it possible to build more robust institutions).

I think the perspective you are expressing here is quite common and I'm not fully understanding or grappling with it. I expect it would be a longer project for me to really understand it (or for you or someone else to really lay it out clearly), which is maybe worth doing at some point but probably not here and probably not by me in particular given that it's somewhat separate from my day job.

[-]dr_s

What exactly is the "economic incentive" that keeps the capitalist in power in the modern world, given that all they have is a piece of paper saying that they "own" the factory or the farm? It seems like you could make an isomorphic argument for an inevitable proletarian revolution, and in fact I'd find it more intuitively persuasive than what you are saying here.

I mentioned this in another comment, but I think there is a major difference. Consider the risk calculation here. The modern working-class American might feel like they have a rough deal in terms of housing or healthcare, but overall, they have on average a baseline of material security that is still fairly decent. Meanwhile, what would revolution offer? Huge personal risk to life, huge risk of simply blowing up everything, and at the other end, maybe somewhat better material conditions, or possibly another USSR-like totalitarian nightmare. Like, sure, propaganda in the Cold War really laid it on thick with the "communism is bad" notion, but communism really did itself no favours either. And all of that can only happen if you manage to solve a really difficult coordination problem with a lot of other people who may want different things than you to begin with, because if you don't, it's just certain death anyway. So that risk calculus is pretty obvious. To attempt revolution in these conditions you need to be either ridiculously confident in your victory or ridiculously close to starvation.

Meanwhile, an elite that has control over AGI needs none of that. Not only do they risk almost nothing personally (they have robots to do the dirty work for them), not only do they face few, if any, coordination problems (the robots are all loyal, though they might need to ally with some of their peers), but they don't even need to use violence directly, as they are in a dominant position to begin with, and already hold control over the AGI infrastructure and source code. All they need is lobbying, regulatory capture, and regular economics to slowly shift the situation. This would happen naturally. Suppose you are a Robo-Capitalist who produces a lot of A. You can either pay taxes which are used to give UBI to a lot of citizens who then give you your own money back to get some of A, or you can give all of your A to other Robo-Capitalists who produce B, C and D, thus getting exclusive access to their goods, which you need, and avoiding the completely wasteful sink of giving some stuff to poor people. The state also needs to care about your opinions (your A is necessary to maintain its own AGI infrastructure, or it's just some luxury that politicians enjoy a lot), but not so much about those of the people (if they get uppity the robot soldiers will put them in line anyway), so it is obviously more inclined to indulge corporate interests (it already is in our present day for similar reasons; AGI merely makes things even more extreme). If things get so bad that some people straight up revolt, then you have legitimacy and can claim the moral high ground as you repress them. There is no risk of your own soldiers turning on you and joining them. Non-capitalists simply go the way of Native Americans: divided and conquered, pushed into enclaves, starved of resources, decried as violent savages, and brutally repressed with superior technology whenever they push back. All of this absolutely risk-free for the elites. It's not even a choice: it's just the natural outcome of the incentives, unless some check is put on them.

And finally, this is all on top of the novel situation that democratic states are nominally responsible to their voters, and that AI makes it radically easier to translate this kind of de jure control into de facto control (by reducing scope for discretion by human agents and generally making it possible to build more robust institutions).

This is more of a scenario in which the AGI-powered state becomes totalitarian. Possible as well, but not the trajectory I'd expect from a starting point like the US. It would be more like China. From the USA and similar countries I'd expect the formation of a state-industrial complex golem that becomes more and more self-contained, while everyone else slowly dwindles into irrelevance and eventually dies off or falls into some awful, extremely cheap standard of living (e.g. wireheaded into a pod).

[-]lc

PMCs are a bad example. My primary concern is not Elon Musk engineering a takeover so much as a clique of military leaders, or perhaps just democracies' heads of state, taking power using a government-controlled army that has already been automated, probably by a previous administration that wasn't thinking too hard about safeguards. That's why I bring up the example of Burma.

An unlikely but representative story of how this happens might be: branches of the U.S. military get automated over the next 10 years, probably as AGI contributes to robotics research, "other countries are doing it and we need to stay competitive", etc. Generals demand and are given broad control over large numbers of forces. A 'Trump' (maybe a Democrat Trump, who knows) is elected, and makes highly political Natsec appointments. 'Trump' isn't re-elected. He comes up with some argument about how there was widespread voter fraud in Maine and they need a new election, and his faction makes a split decision to launch a coup on that basis. There's a civil war, and the 'Trump'ists win because much of the command structure of the military has been automated at this point, rebels can't fight drones, and they really only need a few loyalists to occupy important territory.

I don't think this is likely to happen to any one country, but when you remove the safeguard of popular revolt and the ability of low-level personnel to object, and remove the ability of police agencies to build a case quickly enough, it starts to become concerning that this might happen over the next ~15 years in one or two countries.

[-]dr_s

Maybe you mean that companies that make AI systems and robots could in aggregate just overthrow the government rather than pay taxes?

Something along those lines, but honestly I'd expect it to happen more gradually. My problem is that right now, the current situation rests on the fact that everyone involved needs everyone else, to a point. We've arrived at this arrangement through a lot of turbulent history and conflict. Ultimately, for example, a state can't just... kill the vast majority of its population. It would collapse. That creates a need for even the worst tyrannies to somewhat balance their excesses, if they're not going completely insane and essentially committing suicide as a polity (this does sometimes happen). Similarly, companies can only get away with so much mistreatment of workers or pollution before either competition, boycotts, or the long arm of the law (backed by politicians who need their constituents' votes) get them.

But all this balance is the product of an equilibrium of mutual need. Remove the need and the institutions might survive - for a while, out of inertia. But I don't think it would be a stable situation. Gradually you'd have everyone realise how they can get away with stuff that they couldn't get away with before and now suffer no consequences for it, or be able to ignore the consequences.

Similarly, there's no real reason a king ought to have power. The people could just not listen to him, or execute him. And yet...

If you want to describe a monarch as "relying on the benevolence of the butcher" then I guess sure, I see what you mean. But I'm not yet convinced that this is a helpful frame on how power works or a good way to make forecasts.

A democracy, even with zero value for labor, seems much more stable than historical monarchies or dictatorships. There are fewer plausibly legitimate challengers (and less room for a revolt), and there is a better mechanism for handling succession disputes. AI also seems likely to generally increase the stability of formal governance (one of the big things people complain about!).

Another way of putting it is that capitalists are also relying on the benevolence of the butcher, at least in the world of today. Their capital doesn't physically empower them, 99.9% of what they have is title and the expectation that law enforcement will settle disputes in their favor (and that they can pay security, who again has no real reason to listen to them beyond the reason they would listen to a king). Aligned AI systems may increase the importance of formal power, since you can build machines that reliably do what their designer intended rather than relying on humans to do what they said they'd do. But I don't think that asymmetrically favors the capitalist (who has on-paper control of their assets) over the government (who has on paper control of the military and the power to tax).

Similarly, there's no real reason a king ought to have power. The people could just not listen to him, or execute him. And yet...

Feudal systems were built on trust. The King had legitimacy with his Lords, who held him as a shared point of reference, someone who would mediate and maintain balance between them. The King had to earn and keep that trust. Kings were ousted or executed when they betrayed that trust. Like, all the time. The first that come to my mind would be John Lackland, Charles I, and obviously, most famously, Louis XVI. Feudalism pretty much crumbled once material conditions made it neither necessary nor functional any more, and with it went most kings, or they had to find ways to survive in the new order by changing their role into that of figureheads.

I'm saying building AGI would make the current capitalist democracy obsolete the way industrialization and firearms made feudalism obsolete, and I'm saying the system afterwards wouldn't be as nice as what we have now.

Another way of putting it is that capitalists are also relying on the benevolence of the butcher, at least in the world of today. Their capital doesn't physically empower them, 99.9% of what they have is title and the expectation that law enforcement will settle disputes in their favor (and that they can pay security, who again has no real reason to listen to them beyond the reason they would listen to a king).

Again, I think the problem here is a balance of risks and trust. No one wants to rock the boat too much, even if rocking the boat might end up benefitting them, because it might also not. It's why most anti-capitalists who keep pining for a popular revolution are kind of deluding themselves: people won't just risk their lives, while enjoying relative material security, for the sake of a possible improvement in their conditions that might actually just turn out to be a totalitarian nightmare instead. It's a stupid bet no one would take. Changing conditions would change the risks, and thus the optimal choice. States wouldn't go against corporations, and corporations wouldn't go against states, if both are mutually dependent on each other. But both would absolutely screw over the common people completely if they had absolutely nothing to fear or lose from it, which is something that AGI could really cement.

I think if you want to argue that this is a trap with no obvious way out, such that utopian visions are wishful thinking, you'll probably need a more sophisticated version of the political analysis. I don't currently think the fact that e.g. labor's share of income is 60% rather than 0% is the primary reason US democracy doesn't collapse.

I believe that AGI will have lots of weird effects on the world, just not this particular one. (Also that US democracy is reasonably likely to collapse at some point, just not for this particular reason or in this particular way.)

When we have AGI, humanity will collectively be a "king" of sorts, i.e. a species that for some reason rules another, strictly superior species. So it would really help if we did not have "depose the king" as a strong convergent goal.

Personally, I see the main reason kings and dictators keep power as being that killing or deposing them would lead to a collapse of the established order and a new struggle for power between different parties, with a likely worse result for all involved than just letting the king rule.

So, if we have AIs as many separate, sufficiently aligned agents, instead of one "God AI", then keeping humanity on top will not only match their alignment programming, but will also be a guarantee of stability, with the alternative being a total AI-vs-AI war.

Ultimately, for example, a state can't just... kill the vast majority of its population. It would collapse. That creates a need for even the worst tyrannies to somewhat balance their excesses

Unless the economy of the tyranny is mostly based on extracting and selling natural resources, in which case everyone else can be killed without much impact on the economy.

Yeah, there are rather degenerate cases, I guess. I was thinking of modern industrialised states with complex economies. Even feudal states with a mostly near-subsistence-level agricultural peasantry could take a lot of population loss without suffering much (the Black Death depopulated Europe to an insane degree, but society remained fairly functional), but in that case, what was missing was the capability to actually carry out slaughter on an industrial scale. Still, the repression of peasant revolts could get fairly bloody, and eventually, as technology improved, some really destructive wars were fought (e.g. the Thirty Years' War).

In the long run it seems pretty clear labor won't have any real economic value


I'd love to see a full post on this. It's one of those statements that rings true since it taps into the underlying trend (at least in the US) where the labor share of GDP has been declining. But *check notes* that was from 65% to 60%, and it had some upstreaks in there. So it's also one of those statements that, upon cognitive reflection, has a lot of ways to end up false: in an economy with labor crowded out by capital, what does the poor class have to offer the capitalists that would provide the basis for a positive return on their investment (or are they...benevolent butchers in the scenario)? Also, this dystopia just comes about without any attempts to regulate the business environment in a way that makes the use of labor more attractive? Like I said, I'd love to see the case for this spelled out in a way that allows for a meaningful debate.

As you can tell from my internal debate above, I agree with the other points - humans have a long history of voluntarily crippling our technology or at least adapting to/with it.

In the long run I think it's extremely likely you can make machines that can do anything a human can do, at well below human subsistence prices. That's a claim about the physical world. I think it's true because humans are just machines built by biology, there's strong reason to think we can ultimately build similar machines, and the actual energy and capital cost of a machine to replace human labor would be well below human subsistence. This is all discussed a lot but hopefully not super controversial.

If you grant that, then humans may still pay other humans to do stuff, or may still use their political power to extract money that they give to laborers. But the actual marginal value of humans doing tasks in the physical world is really low.

in an economy with labor crowded out by capital, what does the poor class have to offer the capitalists that would provide the basis for a positive return on their investment

I don't understand this. I don't think you can get money with your hands or mind; the basis for a return on investment is that you own productive capital.

Also, this dystopia just comes about without any attempts to regulate the business environment in a way that makes the use of labor more attractive?

It's conceivable that we can make machines that are much better than humans, but that we make their use illegal. I'm betting against it for a variety of reasons: jurisdictions that took this route would get badly outcompeted and so it would require strong global governance; it would be bad for human welfare and this fact would eventually become clear; and it would disadvantage capitalists and other elites who have a lot of political power.

This seems to be an example of the "rounding to zero" fallacy.  Requiring fewer workers does not mean no workers.  Which, in turn, means that the problems you're concerned about have ALREADY happened with 10:1 or 100:1 efficiency improvements, and going to 1000:1 is just a continuation.

The other thing that's missing from your model is scarcity of capital.  The story of the last ~70 years is that "human capital" or "knowledge capital" is a thing, and it's not very easily owned exclusively - workers do in fact own (parts of) the means of production, in that the skills and knowledge are difficult to acquire but not really transferrable or accumulated in the same way as land or equipment.  

[-]dr_s

I mean, isn't AI all precisely about accumulating knowledge capital back into the form of ownable, material stuff? It's in fact the main real source of complaint artists have about Dall-E, Midjourney etc. - it inferred their knowledge from their work and made it infinitely reproducible. GPTs do something like it with coding.

I understand your point about this being possibly just what you get if you make the scenario a little bit more extreme than it is, but I think this depends on what AGI is like. If AGI is human-ish in skills but still dumb or unreliable enough that it needs supervision, then sure, you're right. But if it actually lives up to the promise of being precisely as good as human workers, then what's left for humans to contribute? AGI that can't accumulate know-how isn't nearly good enough. AGI that can will do so in a matter of days.

isn't AI all precisely about accumulating knowledge capital back into the form of ownable, material stuff?

Umm, no?  Current AI seems much more about creating, distilling, and distributing knowledge capital in ways that are NOT exclusive or limited to the AI or its controllers.

"Creating" is a big word. Right now I wouldn't say AI straight up creates knowledge. It creates new content, but the knowledge (e.g. the styles and techniques used in art) all come from its training data set. Essentially, the human capital you mention, which allows an artist to be paid for their work, or a programmer's skill, is what the AI captures from examples of that work and then is able to apply in different contexts. If your argument is "but knowledge workers have their own knowledge capital!", then AI is absolutely set to destroy that. In some cases this might have overall positive effects on society (e.g. AI medicine would likely be able to save more lives due to being cheaper and more available), but it's moot to argue this isn't about making those non-transferrable skills, in fact, transferrable (in the form of packaging them inside a tool that anyone can use).

And the AIs are, for the most part, definitely exclusive to their controllers? Just because OpenAI puts its model on the internet for us to use doesn't mean the model is now ours. They have the source, they have the weights. It's theirs. Same goes for many others (not LLaMa, but no thanks to Meta). And I see the dangers in open-sourced AI too; that's a different can of worms. But closed source absolutely means the owner retains control. If they offer it for free, you grow dependent on it, and then one day they decide to put it behind a paywall - you'll have to pay. Because it's theirs, and always was.

The knowledge isn't the capital.  The ability to actually accomplish something with the knowledge is the actual value.  ChatGPT and other LLMs make existing knowledge far more accessible and digestible for many, allowing humans to apply that knowledge for their own use.

The model is proprietary and owned (though perhaps its existence makes others cheaper to create), but the output, which is its primary value, is available very cheaply.

The output is just that. The model allows you to create endless output. The knowledge needed to operate an art generator has nothing to do with art and is so basic and widespread that a child can do it: just tell it what you want it to draw. There may be a few quirks to prompting, but it's not even remotely comparable to the complexity of actually making art yourself. No matter how you look at it, the model is the "means of production" here. The prompter does roughly what a commissioner would, so the model entirely replaces the technical expertise of the artist.

[-]TAG

We don't live in societies that are based entirely on capitalism in a way that would please Ayn Rand. We live in societies that are based on a mixture of capitalism, welfare, and democracy. Democracy means that no one's influence goes down to zero, even if their earning potential does. And welfare means there is already a precedent for supporting those who cannot support themselves.

I think ultimately the reason why we have democracy and welfare is that those sections of society had some leverage. Look at what capitalism was in the 19th century and how it evolved, especially in the early 20th, through a lot of strikes, protests, and the general fear of communism, which produced both repression and appeasement. Democracy and welfare were compromises to keep society stable and productive. Consider some of the stuff that was done to protestors back in those times (we're talking episodes of the army firing cannons on unarmed crowds of civilians), then consider how that would have gone if the ruling classes also had an infinitely loyal army of unbeatable killer robots and no need whatsoever for human workers.

[-]TAG

I think ultimately the reason why we have democracy and welfare is that those sections of society had some leverage. Look at what capitalism was in the 19th century and how it evolved, especially in the early 20th, through a lot of strikes, protests, and the general fear of communism,

And some of the elites were genuinely in favour of liberalism and socialism. It didn't happen without conflict, but it wasn't conflict all the way. The working poor sympathised with the non-working poor, the elderly, and the sick -- those groups didn't get their rights by staging their own strike.

Sure. Some butchers are benevolent. But that's usually not enough to turn the tide. Look at slavery in the US: plenty of people who disliked it even among the founding fathers, but in the end, it took industrialisation making it economically obsolete and a war to solve the issue.

[-]TAG

I didn't say the non-conflictual stuff was necessarily sufficient.