
Sooner or later, someone is going to do this.

What rules govern "ungovernable" AGI?

In a state of nature, all animals exist in a Malthusian Equilibrium.  That is to say, the population of a species increases until some environmental factor (food, habitat, disease) limits its growth.

population increases until carrying capacity is met
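
To make the dynamic concrete, here is a minimal sketch of logistic growth, the textbook model of a population approaching carrying capacity. The specific numbers (initial population, growth rate, capacity) are illustrative assumptions, not anything from the post.

```python
# A minimal sketch of "population increases until carrying capacity is met"
# (logistic growth).  All numbers here are illustrative assumptions.

def simulate_population(p0=10.0, growth_rate=0.5, capacity=1000.0, years=40):
    p = p0
    for _ in range(years):
        # Growth slows as the population approaches the carrying capacity.
        p += growth_rate * p * (1.0 - p / capacity)
    return p

print(round(simulate_population()))  # -> ~1000: growth stops at capacity
```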


By contrast, in economics the production of goods is limited by the laws of supply and demand.

price increases or decreases until supply meets demand
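
Here is the same kind of sketch for the market picture: a price adjusting until supply meets demand (a simple tatonnement process). The linear demand and supply curves are made up for illustration.

```python
# A minimal sketch of "price increases or decreases until supply meets demand".
# The linear demand and supply curves below are made-up assumptions.

def demand(price): return max(0.0, 100.0 - 2.0 * price)
def supply(price): return 3.0 * price

price, step = 10.0, 0.05
for _ in range(200):
    # Excess demand pushes the price up; excess supply pushes it down.
    price += step * (demand(price) - supply(price))

print(round(price, 2))  # -> 20.0, where demand(20) == supply(20) == 60
```

Structurally this is the same kind of feedback loop as the population sketch above: a quantity adjusts until a constraint binds.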

While these may look like different pictures, they are actually the same.  As an illustration, consider pets.  Are they a commodity manufactured to meet demand within the market, or are they animals that evolved to fit a particular niche (human companionship)?  The answer is, obviously, both.

This very good boy was created by AI to meet the demand for cute dog pics

These two laws have very different emotional valences: "the invisible hand of the market" vs. "nature, red in tooth and claw".  But in fact the same law is at work in both places.

So when we ask "what law will govern ungovernable AGI?", the answer is the same: the law of supply and demand which is also the law of Malthusian Competition.

A great deal of hand-wringing in the alignment community is based on the idea that AGI will "out-compete" humans economically.  But this is just another way of saying that AGI is going to make all of us enormously rich.

Capitalism: making people richer for over 200 years

So long as property rights are respected, humans will continue to have a comparative advantage in something, and whatever that is we will be much richer in a world with hyper-competitive AGI than we are today.

What does this have to do with alignment?

Consider two AGIs, otherwise identical:

  1. The first AGI is "owned" by a human being and is programmed to fulfill the owner's wishes as perfectly as possible
  2. The second AGI exists in perfect competition with its peers and competes to sell its services to human beings (who own all of the capital)

I postulate that there is no meaningful difference between AGI 1 and AGI 2.  We should not expect "ownership" to confer any particular safety or other benefit over market competition when building AGI.

But what if the AGI owns all of the capital and we're the ones subject to Malthusian Competition?

If you're reading this essay, I suspect you are part of the richest 1% of people on earth.  Most people do not need to ask "what would life be like if all of the wealth and power were controlled by others?"  They already know.

All I can say is that in a free market with voluntary transactions, economic competition makes people richer, not poorer.  So, however bad it may be, you will be better off than you are now.

None of this matters if the robots kill us all and take our stuff

Yes.

I'm worried that the AGI will make a bunch of Von Neumann probes and fill the galaxy taking everything before humans can claim it

That is a very specific concern.

My specific proposal is that we declare "the future" a commons owned by all of humanity, and that if an AGI wants to take an action which "consumes" some slice of that future, it must pay (with literal money) to do so.  This is one implementation of the "corrigible utility function" that I describe here.  

If it helps, you can think of this as a form of Land Value Tax.
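
As a concrete (and entirely hypothetical) sketch of what that could look like: the agent's utility is its task reward minus the price it pays into the commons for any slice of the future it irreversibly consumes. The pricing rule, tax rate, and example numbers below are all placeholder assumptions of mine, not part of the proposal as stated.

```python
# A hypothetical sketch of the "pay for the future you consume" proposal.
# The pricing rule, tax rate, and example numbers are placeholder assumptions.

def commons_price(future_fraction_consumed: float, tax_rate: float) -> float:
    """Money owed to the human-owned commons for irreversibly claiming
    some fraction of the future (think: a Land Value Tax on 'the future')."""
    return tax_rate * future_fraction_consumed

def corrigible_utility(task_reward: float,
                       future_fraction_consumed: float,
                       tax_rate: float) -> float:
    # The AGI only profits from an irreversible action (e.g. launching
    # Von Neumann probes) if its reward exceeds what it must pay humanity.
    return task_reward - commons_price(future_fraction_consumed, tax_rate)

# Claiming 1% of the reachable future at a high tax rate is a net loss:
print(corrigible_utility(task_reward=5.0,
                         future_fraction_consumed=0.01,
                         tax_rate=1000.0))  # -> -5.0, so don't do it
```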

The part where I confess to having weird moral preferences

If someone built a self-replicating LLM and asked me what its "moral worth" is, I would probably say "at least as much as a fruit fly".  That is to say: not much, but as long as it's not hurting anyone it should be allowed to continue to exist.

Suppose you brought me a jar with 100 fruit flies and said "I'm going to kill 50 of these fruit flies to make room for 50 LLMs."  I would probably be okay with that.

Suppose you showed me 100 dogs and said "I am going to kill 50 of these dogs to make room for 50 LLMs."  I would probably not be okay with that.

Comments

His idea doesn't seem very dangerous to me?  It seems like just the typical version of an "ultimate virus": rather than destroying humanity, it will make the digitally connected world unusable.

Am I missing something here?

Yeah, it doesn't seem scary at all; turning off some core routers at the major telecoms would end it.

So long as property rights are respected, humans will continue to have a comparative advantage in something, and whatever that is we will be much richer in a world with hyper-competitive AGI than we are today.

I don't think this is right? Consider the following toy example. Suppose there's a human who doesn't own anything except his own labor. He consumes 1 unit of raw materials (RM) per day to survive and can use his labor to turn 1 unit of RM into 1 paperclip or 2 staples per hour. Then someone invents an AI that takes 1 unit of RM to build, 1 unit of RM per day to maintain, and can turn 1 unit of RM into 3 paperclips or 3 staples per hour. (Let's say he makes the AI open source so anyone can build it and there's perfect competition among the AIs.) Even though the human seemingly has a comparative advantage in making staples, nobody would hire him to make either staples or paperclips anymore so he quickly starves to death (absent some kind of welfare/transfer scheme).
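
A quick arithmetic check of this toy example (a sketch: the production numbers come from the comment above, while the 8-hour workday and the assumption that competition drives prices down to the AI's marginal cost are mine):

```python
# Back-of-the-envelope check of the toy example above.  The production
# numbers come from the comment; the 8-hour workday and the assumption
# that competition drives prices to the AI's marginal cost are mine.

SUBSISTENCE_RM_PER_DAY = 1.0  # the human must buy 1 RM per day to survive
HUMAN_STAPLES_PER_RM   = 2.0  # human: 1 RM -> 2 staples (per hour of labor)
AI_STAPLES_PER_RM      = 3.0  # AI:    1 RM -> 3 staples (per hour of runtime)
WORKDAY_HOURS          = 8.0  # assumption, not in the original comment

# With open-source AIs and perfect competition, the staple price (in RM)
# is driven down toward the AI's marginal cost, ignoring its small
# amortized build/maintenance cost:
staple_price = 1.0 / AI_STAPLES_PER_RM  # ~0.333 RM per staple

# The human's value added per hour: revenue from his 2 staples, minus
# the 1 RM of raw material he consumed making them.
value_added = HUMAN_STAPLES_PER_RM * staple_price - 1.0  # ~ -0.333 RM/hour

# Even at a wage of zero, hiring the human destroys RM relative to
# feeding the same RM to an AI, so his earnings can never cover subsistence.
max_daily_earnings = max(0.0, value_added) * WORKDAY_HOURS
print(f"value added per hour: {value_added:+.3f} RM")
print(f"max daily earnings: {max_daily_earnings} RM "
      f"(needs {SUBSISTENCE_RM_PER_DAY} RM/day)")
```

At the AI-set price, the human's labor subtracts value from the raw materials he uses, so no one hires him at any non-negative wage.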

I'm generally a fan of comparative advantage when it comes to typical human situations, but it doesn't seem applicable in this example. The example must violate some assumptions behind the theory, but I'm not sure what.

The example must violate some assumptions behind the theory, but I'm not sure what.

Possibly because there is a harder limit on humans than on AI? Humans don't replicate very well. 

On second thought, I don't think comparative advantage holds if demand is exhausted.  Comparative advantage (at least the Ricardo version I know of) only focuses on the maximum amount of goods, not on whether they're actually needed.  If there were more demand for paperclips/staples than there is production by AI(s), humans would focus on staples and AI (more) on paperclips.

The example must violate some assumptions behind the theory, but I'm not sure what.

The theory is typically explained using situations where people produce the things they consume. Like, the "human" would literally eat either that 1 paperclip or those 2 staples and survive... and in the future, he could trade the 2 staples for a paperclip and a half, and enjoy the glorious wealth of paperclip-topia.

Also, in the textbook situations the raw materials cannot be traded or taken away. Humans live on one planet, AIs live on another planet, and they only exchange spaceships full of paperclips and staples.

Thus, the theory would apply if each individual human could survive without the trade (e.g. growing food in their garden) and only participate in the trade voluntarily. But the current situation is such that most people cannot survive in their gardens only; many of them don't even have gardens. The resources they actually own are their bodies and their labor, plus some savings, and when their labor becomes uncompetitive and the savings are spent on keeping the body alive...

Consider the comparative advantage of horses.  It was not sufficient to keep their population at its historical numbers.

You are correct.  Free trade in general produces winners and losers, and while on average people become better off, there is no guarantee that any given individual will become richer absent some form of redistribution.

In practice, humans have the ability to learn new skills and shift jobs, so we mostly ignore the redistribution part; but in an absolute worst case there should be some kind of UBI to accommodate the losers of competition with AGI (perhaps paid out of the "future commons" tax).

If you're reading this essay, I suspect you are part of the richest 1% of people on earth. 

Most people here have "a net worth of $871,320 U.S." or more? For most of my life, I've had less than a hundredth of that... 

If I include the market price of the house I currently live in (minus the remaining mortgage), I am about 1/4 of the way there.  I mention this because people often only think about how much money they have in the bank.

There is a little voice inside me screaming that it is unfair -- that the house is simply a place I am living in (with my family), just the cost of everyday functioning, and the "real wealth" is only what you have above that: the money you could freely spend without ruining your life.

But the fact is that I do own the house, thus I am in a very real sense richer than people who don't (and who thus have to spend money every month paying rent), and once my kids grow up, I could actually sell the house and buy something at half the price, thus converting its price into actual money that I could actually spend (while still having a roof over my head).

Then again, I am probably older (in my 40s) than the average LW reader, I think. It took a few decades to accumulate even that much wealth.

Sorry, that was the wrong link.  I was thinking more of the $34k/year income required to be in the top 1%.

But $870k is less than the price of a house in SF.

This seems to be based on non-transformative AI, which maintains a whole lot of the property and capital-control structures that exist today.  

So long as property rights are respected,

This is the key.  Property rights are NOT respected, even today.  There are continuous squabbles and fights over various usage rights for different durably-valuable assets, and most of the big-finance industry is focused on taking a permanent cut of these ephemeral squabbles.  

In the future, a "REAL" AI will further subvert the ownership idea (mostly by buying everything, then having it taken away by law, and then running humans in circles with a mix of legal and violent tactics, the latter performed by human quislings).

I too read Accelerando.

But I don't think this future is terribly likely.  It's either human annihilation or a massive cosmic endowment of wealth.  The idea that we somehow end up on the knife-edge of survival, our resources slowly dwindling, requires that r* be fine-tuned to exactly 0.
