When discussing advanced AI, sometimes the following exchange happens:

“Perhaps advanced AI won’t kill us. Perhaps it will trade with us”

“We don’t trade with ants”

I think it’s interesting to get clear on exactly why we don’t trade with ants, and whether it is relevant to the AI situation.

When a person says “we don’t trade with ants”, I think the implicit explanation is that humans are so big, powerful and smart compared to ants that we don’t need to trade with them because they have nothing of value and if they did we could just take it; anything they can do we can do better, and we can just walk all over them. Why negotiate when you can steal?

I think this is broadly wrong, and that it is also an interesting case of the classic cognitive error of imagining that trade is about swapping fixed-value objects, rather than creating new value from a confluence of one’s needs and the other’s affordances. It’s only in the imaginary zero-sum world that you can generally replace trade with stealing the other party’s stuff, if the other party is weak enough.

Ants, with their skills, could do a lot that we would plausibly find worth paying for. Some ideas:

  1. Cleaning things that are hard for humans to reach (crevices, buildup in pipes, outsides of tall buildings)
  2. Chasing away other insects, including in agriculture
  3. Surveillance and spying
  4. Building, sculpting, moving, and mending things in hard to reach places and at small scales (e.g. dig tunnels, deliver adhesives to cracks)
  5. Getting out of our houses before we are driven to expend effort killing them, and similarly for all the other places ants conflict with humans (stinging, eating crops, ..)
  6. (For an extended list, see ‘Appendix: potentially valuable things ants can do’)

We can’t take almost any of this by force; at best we can kill them and take their dirt and the minuscule mouthfuls of our food they were eating.

Could we pay them for all this?

A single ant eats about 2mg per day according to a random website, so you could support a colony of a million ants with 2kg of food per day. Supposing they accepted pay in sugar, or something similarly expensive, 2kg costs around $3. Perhaps you would need to pay them more than subsistence to attract them away from foraging freely, since apparently food-gathering ants usually collect more than they eat, to support others in their colony. So let’s guess $5.

My guess is that a million ants could do well over $5 of the above labors in a day. For instance, a colony of meat ants takes ‘weeks’ to remove the meat from an entire carcass of an animal. Supposing somewhat conservatively that this is three weeks, and the animal is a 1.5kg bandicoot, the colony is moving 70g/day. Guesstimating the mass of crumbs falling on the floor of a small cafeteria in a day, I imagine that it’s less than that produced by tearing up a single bread roll and spreading it around, which the internet says is about 50g. So my guess is that an ant colony could clean the floor of a small cafeteria for around $5/day, which I imagine is cheaper than human sweeping (this site says ‘light cleaning’ costs around $35/h on average in the US). And this is one of the tasks where the ants have the least advantage over humans. Cleaning the outside of skyscrapers or the inside of pipes is presumably much harder for humans than cleaning a cafeteria floor, and I expect is fairly similar for ants.
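The back-of-envelope arithmetic above can be collected in one place. This is a sketch only: every number is a rough guess taken from the text (the 2mg/day figure, the sugar price, the three-week carcass, the 50g of crumbs), not a measurement.

```python
# Rough cost-benefit sketch for hiring an ant colony, using the post's guesses.
ants = 1_000_000                 # colony size
food_per_ant_mg = 2              # ~2 mg of food per ant per day
colony_food_kg = ants * food_per_ant_mg / 1e6   # = 2 kg/day
sugar_price_per_kg = 1.5         # ~$3 per 2 kg of sugar
subsistence = colony_food_kg * sugar_price_per_kg   # ~$3/day
wage = 5.0                       # pay above subsistence to outbid free foraging

# Capacity check: meat ants strip a ~1.5 kg carcass in ~3 weeks.
carcass_g = 1500
days = 21
moved_per_day_g = carcass_g / days    # ~71 g/day
cafeteria_crumbs_g = 50               # guessed daily crumbs on a small cafeteria floor

print(f"feeding the colony: ~${subsistence:.2f}/day, wage guess ${wage:.2f}/day")
print(f"hauling capacity: ~{moved_per_day_g:.0f} g/day vs ~{cafeteria_crumbs_g} g of crumbs")
```

At these guesses the colony can move more mass per day than a small cafeteria floor accumulates, so a $5/day wage comes out well under the ~$35/h quoted for human ‘light cleaning’.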

So at a basic level, it seems like there should be potential for trade with ants - they can do a lot of things that we want done, and could live well at the prices we would pay for those tasks being done.

So why don’t we trade with ants?

I claim that we don’t trade with ants because we can’t communicate with them. We can’t tell them what we’d like them to do, and can’t have them recognize that we would pay them if they did it. And the barrier might be more than language: there might be a conceptual poverty, and there might also be a lack of the memory and consistent identity that would allow an ant to uphold a commitment it made with me five minutes ago.

To get basic trade going, though, you might not need much of any of this. If we could communicate only that if they all left our house immediately we would put a plate of honey in the garden for them and/or not slaughter them, we would already be gaining from trade.

So it looks like the AI-human relationship is importantly disanalogous to the human-ant relationship, because the big reason we don’t trade with ants will not apply to AI systems potentially trading with us: we can’t communicate with ants, but AI can communicate with us.

(You might think ‘but the AI will be so far above us that it will think of itself as unable to communicate with us, in the same way that we can’t with the ants - we will be unable to conceive of most of its concepts’. It seems unlikely to me that one needs anything like the full palette of concepts available to the smarter creature to make productive trade. With ants, ‘go over there and we won’t kill you’ would do a lot, and it doesn’t involve concepts at the foggy pinnacle of human meaning-construction. The issue with ants is that we can’t communicate almost at all.)

But also: ants can actually do heaps of things we can’t, whereas (arguably) at some point that won’t be true for us relative to AI systems. (When we get human-level AI, will that AI also be ant level? Or will AI want to trade with ants for longer than it wants to trade with us? It can probably better figure out how to talk to ants.) However just because at some point AI systems will probably do everything humans do, doesn’t mean that this will happen on any particular timeline, e.g. the same one on which AI becomes ‘very powerful’. If the situation turns out similar to us and ants, we might expect that we continue to have a bunch of niche uses for a while.

In sum, for AI systems to be to humans as we are to ants would be for us to be able to do many tasks better than AI, and for the AI systems to be willing to pay us grandly for them, but for them to be unable to tell us this, or even to warn us to get out of the way. Is this what AI will be like? No. AI will be able to communicate with us, though at some point we will be less useful to AI systems than ants could be to us if they could communicate.

But, you might argue, being totally unable to communicate makes one useless, even if one has skills that could be good if accessible through communication. So being unable to communicate is just a kind of being useless, and how we treat ants is an apt case study in treatment of powerless and useless creatures, even if the uselessness has an unusual cause. This seems sort of right, but a) being unable to communicate probably makes a creature more absolutely useless than if it just lacks skills, because even an unskilled creature is sometimes in a position to add value e.g. by moving out of the way instead of having to be killed, b) the corner-ness of the case of ant uselessness might make general intuitive implications carry over poorly to other cases, c) the fact that the ant situation can definitely not apply to us relative to AIs seems interesting, and d) it just kind of worries me that when people are thinking about this analogy with ants, they are imagining it all wrong in the details, even if the conclusion should be the same.

Also, there’s a thought that AI being as much more powerful than us as we are than ants implies a uselessness that makes extermination almost guaranteed. But ants, while extremely powerless, are only useless to us by an accident of signaling systems. And we know that problem won’t apply in the case of AI. Perhaps we should not expect to so easily become useless to AI systems, even supposing they take all power from humans.

Appendix: potentially valuable things ants can do

  1. Clean, especially small loose particles or detachable substances, especially in cases that are very hard for humans to reach (e.g. floors, crevices, sticky jars in the kitchen, buildup from pipes while water is off, the outsides of tall buildings)
  2. Chase away other insects
  3. Pest control in agriculture (they have already been used for this since about 400AD)
  4. Surveillance and spying
  5. Investigating hard to reach situations, underground or in walls for instance - e.g. see whether a pipe is leaking, or whether the foundation of a house is rotting, or whether there is smoke inside a wall
  6. Surveil buildings for smoke
  7. Defend areas from invaders, e.g. buildings, cars (some plants have coordinated with ants in this way)
  8. Sculpting/moving things at a very small scale
  9. Building house-size structures with intricate detailing.
  10. Digging tunnels (e.g. instead of digging up your garden to lay a pipe, maybe ants could dig the hole, then a flexible pipe could be pushed through it)
  11. Being used in medication (this already happens, but might happen better if we could communicate with them)
  12. Participating in war (attack, guerilla attack, sabotage, intelligence)
  13. Mending things at a small scale, e.g. delivering adhesive material to a crack in a pipe while the water is off
  14. Surveillance of scents (including which direction a scent is coming from), e.g. drugs, explosives, diseases, people, microbes
  16. Tending other small, useful organisms (Wikipedia: ‘Leafcutter ants (Atta and Acromyrmex) feed exclusively on a fungus that grows only within their colonies. They continually collect leaves which are taken to the colony, cut into tiny pieces and placed in fungal gardens.’ ‘Leaf cutter ants are sensitive enough to adapt to the fungi’s reaction to different plant material, apparently detecting chemical signals from the fungus. If a particular type of leaf is toxic to the fungus, the colony will no longer collect it… The fungi used by the higher attine ants no longer produce spores. These ants fully domesticated their fungal partner 15 million years ago, a process that took 30 million years to complete. Their fungi produce nutritious and swollen hyphal tips (gongylidia) that grow in bundles called staphylae, to specifically feed the ants.’ On aphids: ‘The ants in turn keep predators away from the aphids and will move them from one feeding location to another. When migrating to a new area, many colonies will take the aphids with them, to ensure a continued supply of honeydew.’ On caterpillars: ‘Myrmecophilous (ant-loving) caterpillars of the butterfly family Lycaenidae (e.g., blues, coppers, or hairstreaks) are herded by the ants, led to feeding areas in the daytime, and brought inside the ants’ nest at night. The caterpillars have a gland which secretes honeydew when the ants massage them.’)
  16. Measuring hard to access distances (they measure distance as they walk with an internal pedometer)
  17. Killing plants (lemon ants make ‘devil’s gardens’ by killing all plants other than ‘lemon ant trees’ in an area)
  18. Producing and delivering nitrogen to plants (‘Isotopic labelling studies suggest that plants also obtain nitrogen from the ants.’ - Wikipedia)
  19. Get out of our houses before we are driven to expend effort killing them, and similarly for all the other places ants conflict with humans (stinging, eating crops, ..)
Comments

gwern:

Humans can communicate with and productively use many animals (some now extinct*), some of whom even understand concepts like payment and exchange. (Animal psychology has advanced a lot since Adam Smith gave hostage to fortune by saying no one had ever seen a dog or other animal truck, barter, or exchange.) We don't 'trade' with them. A few are fortunate enough to interest humans in preserving and even propagating them. We don't 'trade' with those either. At the end of the day, no matter how many millions her trainer earns, Lassie just gets a biscuit & ear scritches for being such a good girl. And if she isn't a good girl, we genetically engineer and manufacture (ie. breed) an ex-wolf who is a good girl.

I'd also highlight the lack of trade with many humans, as well as primates. (Consider the cost of crime and how easily one can create millions of dollars in externalities; consider the ever skyrocketing cost of maintaining research primates, especially the chimpanzees - there is nothing that a chimpanzee can do as a tradeable service which is worth >$20k/year and the costs of dealing with it being able to at any moment decide to rip off your face.)

* yet - growth mindset!

I would give my dog many treats to stop eating deer poop, since this behavior can lead to expensive veterinary visits. But I can't communicate with my dog well enough to set up this trade.

Why isn't this an example of "we would trade with animals if we could communicate better"?

nim:
The example of "don't eat that!" communication which comes immediately to mind is https://savethekiwi.nz/about-us/what-we-do/kiwi-avoidance-training-for-dogs/, though that's with negative rather than with positive reinforcement. The example of "do this other thing when you get that stimulus" communication which comes immediately to mind is https://www.akc.org/expert-advice/training/stop-dog-barking-doorbell/, which is a more direct trade between not doing the thing and getting a treat.

which is a more direct trade between not doing the thing and getting a treat.

Yeah, I've done similar trade-things with my cat. We certainly can trade with animals - we just very rarely do. Owning animals is like living in a Stalinist totalitarian communist dictatorship, in that there are sometimes nominally transactions involving 'rubles' and 'markets', but they represent a tiny fraction of the economy and are considered a last resort (and, animal activists would add, the treatment of animals resembles the less savory parts of such dictatorships as well, in both quality and quantity...).

M. Y. Zuo:
Is not providing treats to your dog already ‘communication’?
JakubK:
Sure, we have some rudimentary forms of dog-human communication. But there's plenty of room for improvement.
M. Y. Zuo:
This already counts as ‘trade with animals’ then.

If you count being literally owned by humans and subject to their every whim, with unowned animals or those that do anything harmful to humans or their other owned animals being routinely shot or poisoned as "trade with animals", then yes.

(I do think this would still count as a "win" in the scale of possible outcomes from unaligned AGI)

M. Y. Zuo:
Your views on the nature of the relationship between dog and owner do not reflect the actual situation in most cases.

It's not a view on the nature between dog and owner. It's a view on the relationship between the two species.

I'm not saying that owners routinely shoot the dogs, but that unowned dogs are routinely killed and that if an owned dog harms a human or other pets or livestock, it is common that other people will kill that dog.

Furthermore dogs have pretty much the best relationship with humans. Almost all of the many thousands of animal species have very much worse outcomes of interaction with humans, a substantial fraction of those including extinction.

Noosphere89:
I'm confused at why this is criticized, since this actually happens?
JakubK:
I don't see 'ability to trade with animals' as a binary variable. I think our ability to trade with animals could increase further even though it's not zero.
Andy_McKenzie:
I don't think it's accurate to claim that humans don't care about their pets' preferences as individuals and try to satisfy them.  To point out one reason that I think this, there are huge markets for pet welfare. There are even animal psychiatrists and there are longevity companies for pets.  I've also known many people who've been very distraught when their pets died. Cloning them would be a poor consolation.  I also don't think that 'trade' necessarily captures the right dynamic. I think it's more like communism in the sense that families are often communist. But I also don't think that your comment, which sidesteps this important aspect of human-animal relations, is the whole story.  Now, one could argue that the expansion of animal rights and caring about individual animals is a recent phenomenon, and that therefore these are merely dreamtime dynamics, but that requires a theory of dreamtime and why it will end. 
gwern:

I also don't think that 'trade' necessarily captures the right dynamic. I think it's more like communism in the sense that families are often communist. But I also don't think that your comment, which sidesteps this important aspect of human-animal relations, is the whole story.

Indeed, 'trade' is not the whole story; it is none of the story - my point is that the human-animal relations, by design, sidestep and exclude trade completely from their story.

Now, how good that actual story is for dogs, or more accurately for the AI/human analogy, wolves, one can certainly debate. (I'm sure you've seen the cartoons: "NOBLE WOLF: 'I'll just steal some food from over by that campfire, what's the worst that could happen?' [30,000 years later] [some extremely demeaning and entertaining photograph of spayed/neutered dog from an especially deformed, sickly, short-lived, inbred breed like English bulldogs]".) But that's an entirely different discussion from OP's claim that we humans totally would trade with ants if only we could communicate with them and that's the only barrier and thus renders it disanalogous to humans and AI.

(Incidentally, cloning a dead pet out of grief represents most of the consumer market for cat/dog cloning. Few do it to try to preserve a unique talent or for breeding purposes. The interviewed people usually say it was a good choice - although I don't know how many of the people dropping $20k+ on a cloned pet regret the choice, and don't talk to the media or write about it.)

Andy_McKenzie:
OK, I get your point now better, thanks for clarifying -- and I agree with it.  In our current society, even if dogs could talk, I bet that we wouldn't allow humans to trade (or at least anywhere close to "free" trade) with them, due to concerns for exploitation. 
jmh:
I agree with the view that trade with AI might not be a meaningful aspect related to dealing with risk or alignment -- though I suspect it will be part of the story. I think the story for dogs is that initially the trade struck with humans may well have been a pretty good one. They ended up with a much more competent pack, ate and slept better for it and didn't really lose any of their freedom or autonomy I suspect. Too long ago in the undocumented history to know but I don't think today is a good indication of the partnership and cooperative relationship (trade relationship) that was true for much of the time. I think that older setting is what one needs to consider in terms of any AI-human scenarios.
Jeff Rose:
That isn't very comforting.  To extend the analogy: there was a period when humans were relatively less powerful when they would trade with some other animals such as wolves/dogs.  Later, when humans became more powerful that stopped.     It is likely that the powers of AGI will increase relatively quickly, so even if you conclude there is a period when AGI will trade with humans that doesn't help us that much. 
lc:
But he didn't say that!
Andy_McKenzie:
I quoted "And if she isn't a good girl, we genetically engineer and manufacture (ie. breed) an ex-wolf who is a good girl." If genetic engineering a new animal would satisfy human goals, then this would imply that they don't care about their pet's preferences as individuals. 
SarahSrinivasan:
No, it wouldn't imply that, at all. One can very easily care about something's preference as an individual and work to make a new class of thing which will be more useful than the class of thing that individual belongs to.
wslafleur:
Your comment seems like a related aside, which I guess you admitted in a follow-up comment? But anyway, it makes me curious what the axiomatic precepts are for trade. The perception of mutual benefit and a shared ability to communicate this fact? Also OP doesn't clearly distinguish between broader forms of quid pro quo and trade, so I'm just sort of adopting the broadest possible definition I can imagine.
George3d6:
I think you're missing the whole point by handwaving the idea that "animals can understand reward and instruction" -- no they can't, and that's why we enslave and genetically engineer rather than trade. Lassie would indeed be getting the big bucks were we able to communicate with her directly (and were she a wolf with desires beyond Williams-syndrome-induced pro-social obedience). Ultimately this gets back into a "hard" alignment problem, insofar as a system designed to "trade" with humans, i.e. break the communication barrier to understand our goals and desires or at least be able to sign contracts upholding those... well, it's 0.0...01 from being aligned.
gwern:
Yes, they can, and quite sophisticatedly too - think examples like vampire bats engaging in long-term reciprocity in food exchanges, while paying attention to who welshes on requests and how much food they have to spare to barf up. But she's not. That's literally the point of breeding wolves into dogs. (And when we can't breed them, we tend to find something we can. Ask the Syrian wild ass how their uncooperativeness worked out for them once we found a better riding-animal substitute in the form of horses - oh, that's right, you can't, because we drove them frigging extinct.)
George3d6:
I mean between species, it seems reasonable to assume both we and the bat can't understand each other's values, even if we can understand those of our own species.

Trade with ant colonies would work iff:

  1. We could cheaply communicate with ant colonies;
  2. Ant colonies kept bargains;
  3. We could find some useful class of tasks that ant colonies would do reliably (the ant colonies themselves being unlikely to figure out what they can do reliably);
  4. And, most importantly: we could not make a better technology that did what the ant colonies would do at a lower resource cost, including by such means as e.g. genetically engineering ant colonies that ate less and demanded a lower share of gains from trade.

The premise that fails and prevents superintelligences from being instrumentally incentivized to trade with humans as a matter of mere self-interest and efficiency is point 4.  Anything that can be done by a human can be done by a technology that uses less resources than a human.

The reason why it doesn't work to have an alternate Matrix movie in which the humans are paid to generate electrical power is not that the Matrix AIs can't talk to the humans, it's not that no humans will promise to pedal a generator bike if you pay them, it's not even that every kind of human gets bored and wanders away from the bike and flakes out on the job, it's that this is not the most efficient way to generate electrical power.

It seems like this does in fact have some hint of the problem. We need to take on the ant's self-valuation for ourselves; they're trying to survive, so we should gift them our self-preservation agency. They may not be the best to do the job at all times, but we should give them what would be a fair ratio of gains from trade if they had the bargaining power to demand it, because it could have been us who didn't. Seems like nailing decision theory is what solves this; it doesn't seem like we've quite nailed decision theory, but it seems to me that in fact getting decision theory right does mean we get to have nice things, and we have simply not done that to a deep learning standard yet.

Getting decision theory right, it seems to me, would involve an explanation that is sufficient to get the AIs in the matrix, the ones that already existed and were misaligned but not enough to kill all humans, to suddenly want the humans to flourish - without having edited the AI in any other way than an explanation of some decision binding in language. It seems to me that it ought to involve an explanation that the majority of very wealthy humans would recognize as reason for why they should put up...

Sempervivens:
Agreed. In the human/AGI case, conditions 1 and 3 seem likely to hold (while I agree human self-report would be a bad way to learn what humans can do reliably, looking at the human track record is a solid way to identify useful classes of tasks at which humans are reasonably competent). I agree 4 is more difficult to predict (and has been the subject of much of the discussion thus far), and this particular failure mode of genetically engineering more compliant / willing-to-accept-worse-trade ants/humans updates me towards thinking humans will have few useful services to offer, for the broad definition of humans. The most diligent/compliant/fearful 1% of the population might make good trade partners, but that remains a catastrophic outcome.

I want to focus however a bit more on point 2, which seems less discussed. When trades of the type "Getting out of our houses before we are driven to expend effort killing them" are on the table, some subset of humans (I'd guess 0.1-20% depending on the population) won't just fail to keep the bargain, they'll actively seek to sabotage trade and hurt whoever offered such a trade.

Ants don't recognize our property rights (we never 'earned' or traded for them, just claimed already-occupied territory, modified it to our will, and claimed we had the moral authority to exclude them), and it seems entirely possible AGI will claim property rights over large swathes of Earth, from which it may then seek to exclude us. Even if I could trade with ants because I could communicate well with them, I would not do so if I expected 1% of them would take the offering of trades like "leave or die" as the massive insult it is and thereby dedicate themselves to sabotaging my life (using their bodies to form shapes and images on my floors, chewing at electrical wires, or scattering themselves at low density in my bed to be a constant nuisance being some obvious examples ants with IQ 60 could achieve). Humans would do that, even against a foe they coul

I agree the ant analogy is flawed. But I don't think it's as flawed as you do.

  • In this scenario, the 'trade' we would make would plausibly be "do this stuff or we kill you", which is not amazing for the ants.
  • I think another disanalogy is that humans can't re-arrange ants to turn them into better trading partners (or just raw materials), but AI could do that to us. (h/t to Dustin Crummett for reminding me of this). And the fact that we might not be able to understand fancy AI concepts could make this option more appealing.
JakubK:
It costs money to kill ants with ant poison. If the ants would accept a cheaper amount of food to evacuate my house forever, I would take that trade. Similarly, it requires resources (compute, money, energy, etc) for an AGI to kill all humans or recursively improve. If the humans would accept a cheaper quantity of resources to help an AGI with its goals, the AGI might accept that trade?
DanielFilan:
If the ants believe the threat, you don't have to spend any money on actually poisoning the ants.
JakubK:
If the ants accept the trade "leave and I'll spare you," I don't have to spend any money on actually poisoning the ants. But I would consider the counteroffer "if you kill us, it will cost $20, and we're willing to leave for $1."

I think that if ants were smart enough to make that counter-offer, humans would probably regard them as smart enough to be blameworthy for invading the house in the first place, and the counter-offer would be rejected as extortion.

Analogy:  Imagine some humans from country A move into country B and start living there.  Country B says "we didn't give you permission to live in our country; leave or we'll kill you".  The humans say "killing us would cost you $20k; we'll leave if you pay us $1k."  How do you predict this negotiation ends?

Now, if we're talking about asking the ants to vacate an empty lot where they've lived for many years so that you can start building a new house there, then I could see humans paying the ants to leave.  (Though note that the ants may still lose more value by giving up their hive than the humans are willing to pay them to avoid the cost of exterminating them.)

Donald Hobson:
There are lots and lots of good reasons to recursively self-improve. The point where you stop because of resources is a Dyson sphere of quantum computronium. I am not convinced that the resource cost of killing all humans is > the resource cost of one day's food.

"If the humans would accept a cheaper quantity of resources to help the AI with its goals": the AI has goals that clearly oppose human wellbeing, and is offering us peanuts.

"It takes some resources" is, I think, not a great model at all. I think you are modeling the system as having resources that are in the AI's control or in humans' control. But the AI taking over may well have the structure of a computer exploit: a bunch of seeming coincidences that push the world into an increasingly strange state. There is no sense of "this money/energy is controlled by humans, that is controlled by AI". The power plant was built by humans. The LHC was built by humans. But the magnet control system was hacked, and a few people have been given subtle psychological nudges. In this model, how much resources does it cost to spoof a nuclear attack and trick the humans into a nuclear war? The large amount of damage done, the amount of uranium used, or the tiny amount of compute used to form the plan? There is no "cost of resources" structure to this interaction.
Celarix:
Ants are tiny and hard to find; they could plausibly take your money, defect, and keep eating for a long time before you found them again. Then you need to buy ant poison, anyway.
Hoagy:

Putting the entire failure to trade on the ability to communicate seems to understate the issue. Most if not all of the things listed that they 'could' do, are things which they could theoretically do with their physical capacities, but not with their cognitive abilities or ability to coordinate within themselves to accomplish a task.

In general, they aren't able to act with the level of intentionality required to be helpful to us except in cases where those things we want are almost exactly the things they have evolved to do (like bees making honey, as mentioned in another comment).

The 'failure to communicate' is therefore in fact a failure to be able to think and act at the required level of flexibility and abstraction, and that seems more likely to carry over to our relations with some theoretical, super advanced AI or civilisation.

dust_to_must:
Maybe one useful thought experiment is whether we could train a dog-level intelligence to do most of these tasks if it had the actuators of an ant colony, given our good understanding of dog training (~= "communication") and the fact that dogs still lack a bunch of key cognitive abilities humans have (so dog-human relations are somewhat analogous to human-AI relations). Also, ant colonies in aggregate do pretty complex things, so maybe they're not that far off from dogs? But I'm mostly just thinking of Douglas Hofstadter's "Aunt Hillary" here :)

My guess is that for a lot of Katja's proposed trades, you'd only need the ants to have a moderate level of understanding, something like "dog level" or "pretty dumb AI system level" (e.g. "do thing X in situations where you get inputs Y that were associated with thing-we-actually-care-about Z during the training session we gave you").

Definitely true that you're a more valuable trade partner if you're smarter. But there are some particularly useful intelligence/comms thresholds that we meet and ants don't -- e.g. the "dog level", plus some self-awareness stuff, plus not-awful world models in some domains.

Meta: the dog analogy ignores the distinction between training and trading. I'm eliding this here because it's hard to know what an ant colony's "considered opinion" / "reflective endorsement" would mean, let alone an ant's. But of course this matters a lot for AGI-human interactions. Consider an AGI that keeps humans around on a "human preserve" out of sentiment, but only cares about certain features of humanity and genetically modifies others out of existence (analogous to training out certain behaviors or engaging in selective breeding), or tortures / brainwashes humans to get them to act the way it wants. (These failure modes of "having things an AI wants, and being able to give it those things, but not defend yourself" are also alluded to in other comments here, e.g. gwern and Elisabeth's comments about "the noble wolf"

The analogy fails for me because while "we don't trade with ants" is true, the very similar "we don't trade with bees" is not so true, for some definition of "trade" that seems at least somewhat appropriate.

gwern

I don't think we trade with bees either. I would describe their situation as being worse, if anything, than that of domesticated wolves. Beekeepers keep bees which have been domesticated by centuries of selective breeding (up to and including artificial insemination), coerce bees into frames and transport them around involuntarily, manipulate them with smokers (or CO2), starve hives to keep them at manageable sizes which won't swarm, steal their honey at the end of summer and replace it with low-quality corn syrup, ruthlessly execute sick or uncooperative queens & hives, and cycle through hives as economically optimal for humans (perhaps why bee worker lifespan was recently reported to have halved over the past half-century).

“I don’t think anybody contests that free-living bees have a better, easier life,” Seeley told me. “What is contested is whether that’s realistic [economically].” --"Is Bee Keeping Wrong?"

Radford Neal
We could debate how happy domesticated bees are, which no doubt varies from apiary to apiary, but I think it would be pointless for the purposes of this discussion. I take the whole point of the "we don't trade with ants" comment to be that it shows that with such a huge difference of intelligence (or other capabilities) as there is between ants and humans, the ants are just totally irrelevant to human plans, or at most a minor annoyance to be squashed. The implication being that the same will be true of humans and super-intelligent AI. It's supposed to be a slam-dunk comment, showing how utterly silly any other view would be.

But once you change "ants" to "bees", you can see that it's not at all a slam-dunk analogy. Bees are not irrelevant to human plans. You have to get into exactly how the relationship works to decide how well the bees are doing in this relationship. At that point, I think it's clear that reasoning using such an analogy is not really the best way to understand what the relationship between humans and a super-intelligent AI might be.
jmh
I agree but would add that ants are actually doing just fine and human civilization has hardly been some existential threat to them.
gwern

We actually do not know they are 'doing just fine'. Many insect species have gone extinct already (speaking of 'existential threats to them'...), and insect populations in general appear to be in substantial decline. It's highly debated because the data is in general so bad compared to bigger stuff like mammals. Anyway:

Bees have also been seriously affected, with only half of the bumblebee species found in Oklahoma in the US in 1949 being present in 2013. The number of honeybee colonies in the US was 6 million in 1947, but 3.5 million have been lost since.

There are more than 350,000 species of beetle and many are thought to have declined, especially dung beetles. But there are also big gaps in knowledge, with very little known about many flies, ants, aphids, shield bugs and crickets. Experts say there is no reason to think they are faring any better than the studied species.

A small number of adaptable species are increasing in number, but not nearly enough to outweigh the big losses. “There are always some species that take advantage of vacuum left by the extinction of other species,” said Sanchez-Bayo. In the US, the common eastern bumblebee is increasing due to its tolerance of

... (read more)
jmh
I fully agree that we don't have great information on this. But I don't think the German example is a good one. I think it's unfortunate, but a lot of the biodiversity-risk and extinction worries seem more smoke than light -- and I do think our environment is important and we should take actions when merited.

We have people tracking extinctions, and ChatGPT seems to think the estimate for species extinctions is 500-5,000 per annum ("According to estimates by the International Union for Conservation of Nature (IUCN), between 500 and 5,000 species are estimated to go extinct each year."), which it notes is probably a poor estimate and undershooting the true number. But it also puts the estimate of new species at about 18,000, or between 10,000 and 20,000 per annum. ("The number of new species discovered each year varies depending on the taxon, region, and the level of exploration and study. However, on average, between 10,000 and 20,000 new species are discovered each year. ... For example, according to estimates by the State University of New York, about 18,000 species of plants and animals are discovered each year, with about half of them being insects.")

As related to ants specifically, I will concede my comment was largely based on direct observation from my yard in the Northern Virginia area. The ant population seems to have been relatively consistent for the past 30 years. ChatGPT has this to say about ants: That was in response to the question of whether they are going extinct, but I think "Ants are one of the most successful and adaptable groups of insects on the planet" is probably a good sign that they are still quite successful in the face of human activity.

Side note on this: I came across an article in Wired about species thought extinct but found alive. One was a species of ants in South America (apparently widespread at that) which had been thought extinct for 15 million years. Turned out that the ants were actually fine and just their behavior was one that probably ke
gwern
But you see why 'sure, loads of existing animal species have gone and will go extinct due to humans, but that's OK from the god's-eye POV because lots of new species are being created, or some other species can increase its numbers to occupy the now-vacant niches' is not comforting when we are discussing the prospects of an existing species (us) that may begin to be treated the way we treat animals, right? It is not an argument for safety; it is an argument for danger: you can't even make the argument "at least it has some incentive to keep humans around to fill the niche" when that didn't save all the previous species who went extinct, because their niche was simply filled by an existing or new species.

It does us no good if a successor AI civilization maintains the total amount of biomass roughly as it is but the winning species is cockroaches or dogs or chimpanzees or something (or some humans survive in a bunker somewhere, barely hanging on). That is the Outside View of what humans have done to other species thus far: wiped out large swathes, often quite arbitrarily (sometimes based literally on fashion trends), and replaced them, if at all, with some other species. If that happened again, as it has happened so many times so far, it would still represent a near-total zeroing out of the value of the future for humans. And humans are what I care about, not hypothetical neo-cockroaches optimally adapted for living off datacenter heat vents.
jmh
The question to me is just why the human species would be the one that goes extinct. It could happen, accidentally or intentionally. But why? Are we going to be competing in some niche with the new AI species? I don't quite see that. Would they change the environment in some way that is incompatible with humans, intentionally or just as their pollution? Yes, maybe. Would they possibly crowd us out of our habitat? That seems rather unlikely for two reasons. First, humans can survive in a lot of different areas and have largely learned to modify their environment pretty well (clothes, shelter, heating, cooling, farming, ranching, material sciences). Second, as humans have become more informed (I won't say more intelligent) and knowledgeable, it seems we start taking actions to prevent the harms we're doing. It's not quite fair to point only to the bad cases of human relationships with other species and ignore the positive ones. An AI, if it's smarter and better informed than humans, might be expected to behave similarly.

That does shift the issue to some extent to what type of morality and recognition of the value of life AIs might have. Maybe people have already thought through that issue and have high confidence that AI will be very amoral and uninterested in life as a value in and of itself. If that is not the case, and we can expect AIs to show some level of morality and respect for other life, then one might expect that as various types of ties emerge and relationships form, more consideration would be granted.

A last note for consideration. I am not able to get a quick confirmation, but my impression is that a fair amount of species extinction is not really equivalent to all humans going extinct due to some AI. I'll use polar bears as the example. Global warming may well drive polar bears onto land and ultimately result in none remaining. But they are fully able to breed with other bears, so in one sense they will not have completely gone extinct (a bit like
M. Y. Zuo
Individual ants are irrelevant, but 'ants' as a collective whole are very relevant to human life on Earth. Certainly if literally every ant were to disappear tomorrow, there would be very noticeable changes to the biosphere within a few decades. The same seems to apply to bees.

Does what we do to factory-farmed animals count as "trading" feed and shelter in exchange for meat, eggs, and dairy?

the gears to ascension
yeah non-vegans definitely don't just "trade" with cows

Someone on Twitter mentioned slave owners similarly "not just trading" with slaves who could talk. I think it's a better analogy than factory farmed animals.

avturchin
We also use ants for entertainment - selling ant farms for kids https://www.amazon.com/Nature-Gift-Store-Live-Shipped/dp/B00GVHEQV0 
Nikola Smolenski
Bees are indeed a better example than ants, since we know how bees communicate, and there has even been some research into making bee robots for communicating with bees; if these robots are perfected, we could tell the bees to pollinate here and not there in accordance with our needs. So this seems like trade, in that the bees are getting information and we are getting pollination. Of course, trade is a voluntary exchange of goods, and bees cannot do anything voluntarily, but humans can, so that is not actually the topic.

But also: ants can actually do heaps of things we can’t, whereas (arguably) at some point that won’t be true for us relative to AI systems.

Devil's advocate: by comparative advantage, even if the AI was strictly superior to humans at all tasks, it might still make sense for it to trade with humans.

By comparative advantage, the relevant threshold isn't "AI can do everything strictly better than a human"; it's "AI is able to kill the humans and use our matter-energy to build infrastructure that's more useful than humanity".

(Or "AI is able to kill the humans and the expected gain from trading with us is lower than the expected loss from us possibly shutting it down, building a rival AGI, etc.")

nim
How do you describe the impulse that leads humans around the world to collect antiques, attempt to preserve endangered species, re-enact past time periods, etc? Whatever that is, there seems to be more of it in humans than in non-human animals. Why do you imagine we'd see less rather than more of it in something built to have more than we do of those traits which distinguish us from other creatures?
Said Achmiz
Gwern’s “What Is The Collecting Mindset?” is relevant to your question.
avturchin
Also, it should be noted that the value of human atoms is very small: these atoms constitute around 10^-20 of all atoms in the Solar System. Any small positive utility of human existence would outweigh the usefulness of their atoms.
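That fraction roughly checks out with a back-of-the-envelope sketch (every input below is an order-of-magnitude assumption, not a figure from the comment):

```python
# Order-of-magnitude estimate: what fraction of the Solar System's atoms
# sit in human bodies? All inputs are rough assumptions.

population   = 8e9    # people
body_mass    = 60.0   # kg per person, rough average
atoms_per_kg = 1e26   # a ~70 kg body contains roughly 7e27 atoms

human_atoms = population * body_mass * atoms_per_kg   # ~5e37 atoms

# The Sun holds nearly all of the Solar System's ~2e30 kg, and it is
# mostly hydrogen, so the mean atomic mass is on the order of 2e-27 kg.
solar_mass     = 2e30
mean_atom_mass = 2e-27
solar_atoms    = solar_mass / mean_atom_mass          # ~1e57 atoms

print(f"human fraction of Solar System atoms ~ {human_atoms / solar_atoms:.0e}")
```

This lands at a few times 10^-20, within an order of magnitude of the figure claimed.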
dust_to_must
Yeah. It's conceivable you have an AI with some sentimental attachment to humans that leaves part of the universe as a "nature preserve" for humans. (Less analogous to our relationship with ants and more to charismatic flora and megafauna.)
avturchin
I think that there is a small instrumental value in preserving humans. They could be exchanged with an alien friendly AI, for example.

Get out of our houses before we are driven to expend effort killing them, and similarly for all the other places ants conflict with humans (stinging, eating crops, ..)

Ant mafia: "Lovely house you've got there, wouldn't it be a shame if it got all filled up with ants?"

I just want to note that I personally do in fact trade with ants. I really enjoy watching them carry a pile of sugar to their nest, so sometimes when I go for walks I bring a baggie of sugar, then I offer it to the ants and they carry it around for my entertainment. They don't know that's what's happening, but it works out the same: I give them something they want, they do something they wouldn't otherwise have done, and we both benefit.

After reading more comments, I suspect someone is going to come by to tell me that this is not "trade" somehow. I haven't decided whether I agree with them. Mostly I just wanted other people to know that this is a thing you can do to improve your walks if you think ants are cool.

robm
I once came home to find ants carrying rainbow sprinkles across my apartment wall (left out from cake making). I thought it was entertaining once I understood what I was seeing.

By the way, you do know that ants already do service for people by harvesting seeds for rooibos tea?

https://wildaboutants.com/tag/rooibos-seeds-harvested-by-ants/

Curated.

On one hand, I think I still disagree with the thrust of this post. I think the way we might trade with ants (or bees or dogs or horses, etc), is still just really different from what people typically have in mind when they're asking why AI might keep us alive, and the prospects discussed here are not reassuring to me. (And I have model-driven guesses of why superintelligences could build replacements for whatever humans are comparatively good at)

But, this post and the comments still prompted a lot of interesting thoughts. I appreciate posts that do a kind of "original seeing" on longstanding common arguments. I think I learned some things that are at least plausibly relevant to some kinds of AI takeoff here, and I also just learned or reconceptualized a lot of interesting stuff about how humans and animals interact.

I love the genre of "Katja takes an AI risk analogy way more seriously than other people and makes long lists of ways the analogous thing could work." (The previous post in the genre being the classic "Beyond fire alarms: freeing the groupstruck.")

Digging into the implications of this post: 

In sum, for AI systems to be to humans as we are to ants, would be for us to be able to do many tasks better than AI, and for the AI systems to be willing to pay us grandly for them, but for them to be unable to tell us this, or even to warn us to get out of the way

... (read more)
dust_to_must
In general, this post has prompted me to think more about the transition period between AI that's weaker than humans and AI that's stronger than all of human civilization, and that's been interesting! A lot of people assume that that takeoff will happen very quickly, but if it lasts for multiple years (or even decades) then the dynamics of that transition period could matter a lot, and trade is one aspect of that. Some stray thoughts on what that transition period could look like:

* Some doomy-feeling states don't immediately kill us. We might get an AI that's able to defeat humanity before it's able to cheaply replicate lots of human labor, because it gets a decisive strategic advantage via specialized skill in some random domain and can't easily skill itself up in other domains.
* When would an AI prefer to trade rather than coerce or steal?
  * Maybe if the transition period is slow, and it knows it's in the earlier part of the period, so reputation matters.
  * Maybe if it's being cleverly watched or trained by the org building it, since they want to avoid bad press.
  * Maybe there's some core of values you can imprint that leads to this? But maybe actually being able to solve this issue is basically equivalent to solving alignment, in which case you might as well do that.
* In a transition period, powerful human orgs would find various ways to interface with AI and vice versa, since they would be super useful tools / partners for each other. Even if the transition period is short, it might be long enough to change things, e.g. by getting the world's most powerful actors interested in building + using AI and not leaving it in the hands of a few AGI labs, by favoring labs that build especially good interfaces & especially valuable services, etc. (While in a world with a short takeoff rather than a long transition period, maybe big tech & governments don't recognize what's happening before ASI / doom.)
nim

I think "trade" and "communication" are linked, and seem to exist on a spectrum that correlates to creatures' ability to predict the future. At the one extreme, we have gardeners who get plants to do what they wish by shaping the environment that the plants grow in. Near the middle, we have our interactions with domestic animals. At the other extreme, we have modern capitalism, where people exchange money for time spent on tasks they often wouldn't consider doing without the pay.

I suspect that where an interaction falls on that spectrum has a lot to do wit... (read more)

When a person says “we don’t trade with ants”, I think the implicit explanation is that humans are so big, powerful and smart compared to ants that we don’t need to trade with them because they have nothing of value and if they did we could just take it; anything they can do we can do better, and we can just walk all over them. Why negotiate when you can steal?

 

I think this is an overly narrow definition of trading for this context. If an AGI wants something from humans it needs to leave us alive and happy enough to produce it. It might be nonconsensu... (read more)

Celarix
No, I imagine an AGI would have many creative ways to force humans to do what it wants - directly pumping nutrients into your blood, removing neurotransmitters from your brain, overwriting your personality with an incorrigible desire to Do The Thing...

While I can imagine very simple forms of trade with human-level intelligent ants (e.g. you provide X units of wood, we will give you Y units of sugar), I do not expect a good outcome if I try to hire "army of ants" as an employee in my organization. I do not expect they would be able to join meetings, contribute points, understand other humans' illegible desires for a project, understand our vague preferences, etc. What I'm saying is I only think this works for very well-defined trades, and not for a lot of other trades.

kjz
Maybe it's better to model the army of ants as a CRO you would hire instead of an employee? And by extension, I would much prefer to be part of an AGI's CRO than be extinct.
Ben Pace
What is a CRO? Google tells me it's a crypto currency and a Certified Radio Operator, neither of which seem to fit. (Broadly I am against acronyms in line with this document.)
kjz
Sorry, I thought that would be more commonly understood. As Carl said, it stands for Contract Research Organization. Hiring one is a way to get additional resources to perform specific tasks without having them be part of your organization, understand your corporate strategy, or even know what project you're working on. For example, a pharma company can hire a CRO to synthesize a specific set of potential drug compounds, without telling them what the biological target is or what disease they are trying to treat. Or think of the scenario where a rogue AGI hires someone to make a DNA sequence which turns out to code for a pathogen that kills all humans. This would likely be done at a CRO. CROs are often thought of as being fairly competent at executing the specific task required of them, but less competent at thinking strategically, understanding the big picture, etc. So they are generally only hired for very well-defined trades, as you mentioned above.
Carl Feynman
Contract Research Organization.  Basically an outfit you can hire to perform experiments for you.
Program Den
I'm going to guess it's like mumble Resource Organization, something you'd "farm out" some work to rather than have them on payroll and in meetings, as it were. Window Washers or Chimney Sweeps mayhap? Just a guess, and I hope I'm not training an Evil AI by answering this question with what sprang to mind from the context.

With ants, ‘go over there and we won’t kill you’ would do a lot, and it doesn’t involve concepts at the foggy pinnacle of human meaning-construction.

I agree that with a human-ant language, I could tell ants to leave my house. But then they'd probably come back in a week? I don't think ants can reason about the future. 

Likewise, humans might lack some concepts that are necessary for making meaningful trades with advanced AI agents.

The potentially enormous speed difference (https://www.lesswrong.com/posts/Ccsx339LE9Jhoii9K/slow-motion-videos-as-ai-risk-intuition-pumps) will almost certainly be an effective communications barrier between humans and AI. There's a wonderful scene of AI-vs-human negotiation in William Hertling's "A.I. Apocalypse" that highlights this.

Anecdotal example of trade with ants (from a house in Bali, as described by David Abrams):

The daily gifts of rice kept the ant colonies occupied–and, presumably, satisfied. Placed in regular, repeated locations at the corners of various structures around the compound, the offerings seemed to establish certain boundaries between the human and ant communities; by honoring this boundary with gifts, the humans apparently hoped to persuade the insects to respect the boundary and not enter the buildings.

gwern
Abrams, we should be clear, is not only reporting just his own speculation rather than any statement made by the Balinese (which itself may or may not indicate any trade successfully going on, and which is rather dubious to begin with, as feeding ants just makes more ants); he is, by his own account, making this up in direct contradiction to what his Bali hosts were telling him, and presuming to explain what they were 'really' trying to do.
redbird
Yep, it's a funny example of trade, in that neither party is cognizant of the fact that they are trading!  I agree that Abrams could be wrong, but I don't take the story about "spirits" as much evidence: A ritual often has a stated purpose that sounds like nonsense, and yet the ritual persists because it confers some incidental benefit on the enactor.

When I tried to answer why we don't trade with ants myself, communication was one of the first things (I can't remember what was actually first) I considered. But I worry it may be more analogous to AI than argued here.

We sort of can communicate with ants. We know to some degree what makes them tick, it's just we mostly use that communication to lie to them and tell them this poison is actually really tasty. The issue may be less that communication is impossible, and more that it's too costly to figure out, and so no one tries to become Antman even if they... (read more)

Great post.

I don't think communicating trades is the only issue. Even if we could communicate with ants, e.g. "Please clean this cafeteria floor and we'll give you 5 kg of sugar" "Sure thing, human", I think there are still barriers.

  • Can the ants formulate a good plan for cleaning the floor?
  • Can the ants tell when the floor is clean enough?
  • Can the ants motivate their team?
  • Can the ants figure out where to deposit debris, and figure this out if a human janitor accidentally leaves the bin in a different place than yesterday?

There's a lot to the task of cleaning ... (read more)

 Another problem is trust. In order for trade to work, the AI has to trust that the human will follow the deal. If 1% of humans decide to smash the robots instead, humans could be totally useless. Sure, there are some things ants could do, but if the ants sometimes caused a problem, they would be much less useful. 

Humans are the strongest source of potentially adversarial optimization. The cost of defending against an enemy is huge. Hiring someone who has even a 1% chance of actively trying to harm you is probably a bad move in expectation. ... (read more)

kjz

Should we humans broadcast more explicitly to future AGIs that we greatly prefer the future where we engage in mutually beneficial trade with them to the future where we are destroyed?

(I am making an assumption here that most, if not all, people would agree with this preference. It seems fairly overdetermined to me. But if I'm missing something where this could somehow lead to unintended consequences, please feel free to point that out.)

Humans are unlikely to be the most efficient configuration of matter to carry out any particular task the AI wants to get done - so if the power imbalance is sufficiently large the AI will be better off wiping us out to configure the matter in a more efficient way.

Shmi

Human brains are easily hackable (a good book does it, or a good brainwasher); we change our views easily given the right "argument", logical and/or emotional. Anything smarter than us can figure out a way to get us to do what it wants without anything like a "trade", simply because we come to think it's the right thing to do. If you doubt it, note that EA did a good job of convincing people to donate to charity. The military is an even more extreme example. The best con does not feel like a con at all. (Not saying that either of the two examples is a con, just th... (read more)

Ants that have intelligence anywhere near the level needed for meaningful trade with us would likely cause our extinction. We don't trade with ants mainly because they're too stupid to do anything we want, despite being physically capable of doing plenty of things we might want.

If you scale up their intelligence to the level where they're worth trading with, there are stories you can tell about how beneficial trade would be between our species, but frankly I don't think that we would be of much benefit to them[1].

  1. ^

    Them! was a 1950's movie about giant

... (read more)

The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

Size circumscribes – it has no room
For petty furniture –
The Giant tolerates no Gnat
For Ease of Gianture –

Repudiates it, all the more –
Because intrinsic size
Ignores the possibility
Of Calumnies – or Flies.

~ Dickinson

The following issue seems fundamental and related (though I am not sure how exactly :-) ): there is a difference between the things ants could physically do and what they are smart enough to do / what we can cheaply enough explain to them. Similarly for humans: delegating takes work. For example, hiring an IQ-80 cleaner might only be worth it for routine tasks, not for "clean up after this large event and just tell me when it's done, bye". Similarly, for some reason I am not supervising 10 master's students, even if they were all smarter than me.

At the end of the day, it's about power.

Sufficiently advanced AI could create bots to do the things it needs done. We cannot create an ant-equivalent bot (yet). Messengers on horses don't exist alongside the car, plane, or internet. Bots created by AI will likely fit its needs much more neatly, and at lower maintenance cost, than paying humans would.

We are everyday finding new ways to automate human labor, from mental to physical to creative. Why would AI suddenly stop that effort in order to trade with us?

Good post, but there is a big imbalance in human-ant relationships.

If people could communicate with ants, nothing would stop humans from making ants suffer if it made the deal better for humans, because of the power imbalance.

For example, domesticated chickens live in very crowded and stinky conditions, and their average lifespan is a month, after which they are killed. Not particularly good living conditions.

People who care only about profitability do it just because they can.

Directionally agree, but: A) A short period of trade before we become utterly useless is not much comfort. B) Trade is a particular case of bootstrapping influence over what an agent values into influence over their behaviour. The other major way of doing that is blackmail, which is much more effective in many circumstances, and would have been far more common if the State didn't blackmail us into not blackmailing each other, honouring contracts, etc.

BTW those two points are basically how many people afraid that capitalism (i.e. our trade with superhuman organisations)... (read more)

Assuming AI doesn't care about acting ethically, and even assuming AI can communicate and find useful things for us to do, there's no reason why AI wouldn't just manipulate and coerce humans rather than trading with them. 

Even in real life we have plenty of examples of humans enslaving each other. When you add sci-fi possibilities like an AI implanting mind-control devices in human heads, why would an AI waste resources, and probably sacrifice efficiency, just to trade evenly with humans?

Humans can manipulate animals and make them do what they want. So could an AI.

AI is dependent on humans. It gets power and data from humans and cannot go on without them. We don't trade with it; we dictate terms.

Do we fear a world where we have turned over mining, production, and power generation entirely to AI? Getting there would take a lot more than a self-amplifying feedback loop of a machine rewriting its own code.
