Humans can communicate with and productively use many animals (some not extinct*), some of whom even understand concepts like payment and exchange. (Animal psychology has advanced a lot since Adam Smith gave a hostage to fortune by saying no one had ever seen a dog or other animal truck, barter, or exchange.) We don't 'trade' with them. A few are fortunate enough to interest humans in preserving and even propagating them. We don't 'trade' with those either. At the end of the day, no matter how many millions her trainer earns, Lassie just gets a biscuit & ear scritches for being such a good girl. And if she isn't a good girl, we genetically engineer and manufacture (ie. breed) an ex-wolf who is a good girl.
I'd also highlight the lack of trade with many humans, as well as primates. (Consider the cost of crime and how easily one can create millions of dollars in externalities; consider the ever-skyrocketing cost of maintaining research primates, especially the chimpanzees - there is nothing a chimpanzee can do as a tradeable service which is worth >$20k/year plus the costs of dealing with it being able, at any moment, to decide to rip off your face.)
* yet - growth mindset!
I would give my dog many treats to stop eating deer poop, since this behavior can lead to expensive veterinary visits. But I can't communicate with my dog well enough to set up this trade.
Why isn't this an example of "we would trade with animals if we could communicate better"?
That would be a more direct trade between not doing the thing and getting a treat.
Yeah, I've done similar trade-things with my cat. We certainly can trade with animals - we just very rarely do. Owning animals is like living in a Stalinist totalitarian communist dictatorship, in that there are sometimes nominal transactions involving 'rubles' and 'markets', but they represent a tiny fraction of the economy and are considered a last resort (and, animal activists would add, the treatment of animals resembles the less savory parts of such dictatorships as well, in both quality and quantity...).
If you count being literally owned by humans and subject to their every whim - with unowned animals, and those that do anything harmful to humans or their other owned animals, being routinely shot or poisoned - as "trade with animals", then yes.
(I do think this would still count as a "win" in the scale of possible outcomes from unaligned AGI)
It's not a view on the relationship between a dog and its owner. It's a view on the relationship between the two species.
I'm not saying that owners routinely shoot the dogs, but that unowned dogs are routinely killed and that if an owned dog harms a human or other pets or livestock, it is common that other people will kill that dog.
Furthermore, dogs have pretty much the best relationship with humans of any animal. Almost all of the many thousands of animal species have had much worse outcomes from their interactions with humans; for a substantial fraction, those outcomes include extinction.
I also don't think that 'trade' necessarily captures the right dynamic. I think it's more like communism in the sense that families are often communist. But I also don't think that your comment, which sidesteps this important aspect of human-animal relations, is the whole story.
Indeed, 'trade' is not the whole story; it is none of the story - my point is that human-animal relations, by design, sidestep and exclude trade completely from their story.
Now, how good that actual story is for dogs - or, more accurately for the AI/human analogy, wolves - one can certainly debate. (I'm sure you've seen the cartoons: "NOBLE WOLF: 'I'll just steal some food from over by that campfire, what's the worst that could happen?' [30,000 years later] [some extremely demeaning and entertaining photograph of a spayed/neutered dog from an especially deformed, sickly, short-lived, inbred breed like the English bulldog]".) But that's an entirely different discussion from OP's claim that we humans totally would trade with ants if only we could communicate with them, that communication is the only barrier, and that this renders the ant case disanalogous to humans and AI.
(Incidentally, cloning a dead pet out of grief represents most of the consumer market for cat/dog cloning. Few do it to try to preserve a unique talent or for breeding purposes. The interviewed people usually say it was a good choice - although I don't know how many of the people dropping $20k+ on a cloned pet regret the choice, and don't talk to the media or write about it.)
Trade with ant colonies would work iff:
The premise that fails and prevents superintelligences from being instrumentally incentivized to trade with humans as a matter of mere self-interest and efficiency is point 4. Anything that can be done by a human can be done by a technology that uses less resources than a human.
The reason why it doesn't work to have an alternate Matrix movie in which the humans are paid to generate electrical power is not that the Matrix AIs can't talk to the humans, it's not that no humans will promise to pedal a generator bike if you pay them, it's not even that every kind of human gets bored and wanders away from the bike and flakes out on the job, it's that this is not the most efficient way to generate electrical power.
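To put rough numbers on that inefficiency (a back-of-envelope sketch; every figure below is my own order-of-magnitude assumption, not from the thread):

```python
# Is a human pedaling a generator a sensible power source?
# All numbers are rough illustrative assumptions.

PEDAL_WATTS = 100           # sustainable output for a fit human
HOURS = 8                   # one full shift at the bike
MUSCLE_EFFICIENCY = 0.25    # food energy -> mechanical work (rough)
KCAL_TO_KWH = 0.00116       # 1 kcal is about 0.00116 kWh
FOOD_COST_PER_KCAL = 0.001  # ~$1 per 1000 kcal of cheap staples
GRID_PRICE_PER_KWH = 0.15   # typical retail electricity price

electricity_kwh = PEDAL_WATTS * HOURS / 1000                   # 0.8 kWh/day
food_kcal = electricity_kwh / MUSCLE_EFFICIENCY / KCAL_TO_KWH  # ~2760 kcal
food_cost = food_kcal * FOOD_COST_PER_KCAL                     # ~$2.76
market_value = electricity_kwh * GRID_PRICE_PER_KWH            # ~$0.12

print(f"fuel cost ${food_cost:.2f} for electricity worth ${market_value:.2f}")
# The 'fuel' alone costs ~20x what the output is worth, before any wages.
```

On these assumptions the human is a money-losing power plant even at zero wages, which is the point: the barrier is efficiency, not communication.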
It seems like this does in fact have some hint of the problem. We need to take on the ants' self-valuation for ourselves; they're trying to survive, so we should gift them our self-preservation agency. They may not be the best for the job at all times, but we should give them what would be a fair ratio of gains from trade if they had the bargaining power to demand it, because it could have been us who didn't. It seems like nailing decision theory is what solves this; it doesn't seem like we've quite nailed decision theory, but it seems to me that getting decision theory right does in fact mean we get to have nice things, and we have simply not done that to a deep learning standard yet.
Getting decision theory right, it seems to me, would involve an explanation sufficient to get the AIs in the Matrix - the ones that already existed and were misaligned, but not enough to kill all humans - to suddenly want the humans to flourish, without having edited the AI in any way other than giving it, in language, an explanation of some binding decision. It seems to me that it ought to involve an explanation that the majority of very wealthy humans would recognize as a reason why they should put up...
I agree the ant analogy is flawed. But I don't think it's as flawed as you do.
I think that if ants were smart enough to make that counter-offer, humans would probably regard them as smart enough to be blameworthy for invading the house in the first place, and the counter-offer would be rejected as extortion.
Analogy: Imagine some humans from country A move into country B and start living there. Country B says "we didn't give you permission to live in our country; leave or we'll kill you". The humans say "killing us would cost you $20k; we'll leave if you pay us $1k." How do you predict this negotiation ends?
Now, if we're talking about asking the ants to vacate an empty lot where they've lived for many years so that you can start building a new house there, then I could see humans paying the ants to leave. (Though note that the ants may still lose more value by giving up their hive than the humans are willing to pay them to avoid the cost of exterminating them.)
Putting the entire failure to trade on the ability to communicate seems to understate the issue. Most if not all of the things listed that they 'could' do are things they could theoretically do with their physical capacities, but not with their cognitive abilities or their capacity to coordinate among themselves to accomplish a task.
In general, they aren't able to act with the level of intentionality required to be helpful to us except in cases where those things we want are almost exactly the things they have evolved to do (like bees making honey, as mentioned in another comment).
The 'failure to communicate' is therefore in fact a failure to be able to think and act at the required level of flexibility and abstraction, and that seems more likely to carry over to our relations with some theoretical, super advanced AI or civilisation.
The analogy fails for me because while "we don't trade with ants" is true, the very similar "we don't trade with bees" is not so true, for some definition of "trade" that seems at least somewhat appropriate.
I don't think we trade with bees either. I would describe their situation as being worse, if anything, than that of domesticated wolves. Beekeepers keep bees which have been domesticated by centuries of selective breeding (up to and including artificial insemination), coerce bees into frames and transport them around involuntarily, manipulate them with smokers (or CO2), starve hives to keep them at manageable sizes which won't swarm, steal their honey at the end of summer and replace it with low-quality corn syrup, ruthlessly execute sick or uncooperative queens & hives, and cycle through hives as economically optimal for humans (perhaps why bee worker lifespan was recently reported to have halved over the past half-century).
“I don’t think anybody contests that free-living bees have a better, easier life,” Seeley told me. “What is contested is whether that’s realistic [economically].” --"Is Bee Keeping Wrong?"
We actually do not know they are 'doing just fine'. Many insect species have gone extinct already (speaking of 'existential threats to them'...), and insect populations in general appear to be in substantial decline. It's highly debated because the data is in general so bad compared to bigger stuff like mammals. Anyway:
...Bees have also been seriously affected, with only half of the bumblebee species found in Oklahoma in the US in 1949 being present in 2013. The number of honeybee colonies in the US was 6 million in 1947, but 3.5 million have been lost since.
There are more than 350,000 species of beetle and many are thought to have declined, especially dung beetles. But there are also big gaps in knowledge, with very little known about many flies, ants, aphids, shield bugs and crickets. Experts say there is no reason to think they are faring any better than the studied species.
A small number of adaptable species are increasing in number, but not nearly enough to outweigh the big losses. “There are always some species that take advantage of vacuum left by the extinction of other species,” said Sanchez-Bayo. In the US, the common eastern bumblebee is increasing due to its tolerance of...
Does what we do to factory-farmed animals count as "trading" feed and shelter in exchange for meat, eggs, and dairy?
Someone on Twitter mentioned slave owners similarly "not just trading" with slaves who could talk. I think it's a better analogy than factory farmed animals.
But also: ants can actually do heaps of things we can’t, whereas (arguably) at some point that won’t be true for us relative to AI systems.
Devil's advocate: by comparative advantage, even if the AI was strictly superior to humans at all tasks, it might still make sense for it to trade with humans.
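To spell that point out with made-up numbers (nothing below comes from the thread; it's the standard Ricardian example): trade pays whenever *relative* productivities differ, even if one party is absolutely better at everything.

```python
# Ricardian comparative advantage with illustrative numbers.
# Output per hour: the AI is absolutely better at BOTH goods.
ai    = {"chips": 10.0, "food": 5.0}
human = {"chips": 0.1,  "food": 1.0}

# Opportunity cost of 1 unit of food, measured in chips forgone:
ai_cost    = ai["chips"] / ai["food"]        # 2.0 chips per food
human_cost = human["chips"] / human["food"]  # 0.1 chips per food

# Humans make food more cheaply in relative terms, so at any price
# between 0.1 and 2.0 chips per food, both sides gain from trade.
price = 1.0  # chips per unit of food, inside the mutually beneficial band
assert human_cost < price < ai_cost
print(f"AI gains {ai_cost - price:.1f} chips per food bought; "
      f"human gains {price - human_cost:.1f} chips per food sold")
```

The catch, as the reply below notes, is that this assumes the AI's best option is to leave the humans in place and trading, rather than repurposing their resources outright.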
By comparative advantage, the relevant threshold isn't "AI can do everything strictly better than a human"; it's "AI is able to kill the humans and use our matter-energy to build infrastructure that's more useful than humanity".
(Or "AI is able to kill the humans and the expected gain from trading with us is lower than the expected loss from us possibly shutting it down, building a rival AGI, etc.")
Get out of our houses before we are driven to expend effort killing them, and similarly for all the other places ants conflict with humans (stinging, eating crops, ...)
Ant mafia: "Lovely house you've got there, wouldn't it be a shame if it got all filled up with ants?"
I just want to note that I personally do in fact trade with ants. I really enjoy watching them carry a pile of sugar to their nest, so sometimes when I go for walks I bring a baggie of sugar, then I offer it to the ants and they carry it around for my entertainment. They don't know that's what's happening, but it works out the same: I give them something they want, they do something they wouldn't otherwise have done, and we both benefit.
After reading more comments, I suspect someone is going to come by to tell me that this is not "trade" somehow. I haven't decided whether I agree with them. Mostly I just wanted other people to know that this is a thing you can do to improve your walks if you think ants are cool.
By the way, you do know that ants already perform a service for people by harvesting seeds for rooibos tea?
https://wildaboutants.com/tag/rooibos-seeds-harvested-by-ants/
Curated.
On one hand, I think I still disagree with the thrust of this post. I think the way we might trade with ants (or bees or dogs or horses, etc.) is still just really different from what people typically have in mind when they're asking why AI might keep us alive, and the prospects discussed here are not reassuring to me. (And I have model-driven guesses for why superintelligences could build replacements for whatever humans are comparatively good at.)
But, this post and the comments still prompted a lot of interesting thoughts. I appreciate posts that do a kind of "original seeing" on longstanding common arguments. I think I learned some things that are at least plausibly relevant to some kinds of AI takeoff here, and I also just learned or reconceptualized a lot of interesting stuff about how humans and animals interact.
I love the genre of "Katja takes an AI risk analogy way more seriously than other people and makes long lists of ways the analogous thing could work." (The previous post in the genre being the classic "Beyond fire alarms: freeing the groupstruck.")
Digging into the implications of this post:
...In sum, for AI systems to be to humans as we are to ants, would be for us to be able to do many tasks better than AI, and for the AI systems to be willing to pay us grandly for them, but for them to be unable to tell us this, or even to warn us to get out of the way
I think "trade" and "communication" are linked, and seem to exist on a spectrum that correlates to creatures' ability to predict the future. At the one extreme, we have gardeners who get plants to do what they wish by shaping the environment that the plants grow in. Near the middle, we have our interactions with domestic animals. At the other extreme, we have modern capitalism, where people exchange money for time spent on tasks they often wouldn't consider doing without the pay.
I suspect that where an interaction falls on that spectrum has a lot to do wit...
When a person says “we don’t trade with ants”, I think the implicit explanation is that humans are so big, powerful and smart compared to ants that we don’t need to trade with them because they have nothing of value and if they did we could just take it; anything they can do we can do better, and we can just walk all over them. Why negotiate when you can steal?
I think this is an overly narrow definition of trading for this context. If an AGI wants something from humans it needs to leave us alive and happy enough to produce it. It might be nonconsensu...
Much like "Let's think about slowing down AI" (Also by KatjaGrace, ranked #4 from 2022), this post finds a seemly "obviously wrong" idea and takes it completely seriously on its own terms. I worry that this post won't get as much love, because the conclusions don't feel as obvious in hindsight, and the topic is much more whimsical.
I personally find these posts extremely refreshing, and they inspire me to try to question my own assumptions/reasoning more deeply. I really hope to see more posts like this.
While I can imagine very simple forms of trade with human-level intelligent ants (e.g. you provide X units of wood, we give you Y units of sugar), I do not expect a good outcome if I try to hire an "army of ants" as an employee in my organization. I do not expect they would be able to join meetings, contribute points, understand other humans' illegible desires for a project, understand our vague preferences, etc. What I'm saying is that I think this only works for very well-defined trades, and not for a lot of other trades.
With ants, ‘go over there and we won’t kill you’ would do a lot, and it doesn’t involve concepts at the foggy pinnacle of human meaning-construction.
I agree that with a human-ant language, I could tell ants to leave my house. But then they'd probably come back in a week? I don't think ants can reason about the future.
Likewise, humans might lack some concepts that are necessary for making meaningful trades with advanced AI agents.
The potentially enormous speed difference (https://www.lesswrong.com/posts/Ccsx339LE9Jhoii9K/slow-motion-videos-as-ai-risk-intuition-pumps) will almost certainly be an effective communications barrier between humans and AI. There's a wonderful scene of AI-vs-human negotiation in William Hertling's "A.I. Apocalypse" that highlights this.
Anecdotal example of trade with ants (from a house in Bali, as described by David Abram):
The daily gifts of rice kept the ant colonies occupied - and, presumably, satisfied. Placed in regular, repeated locations at the corners of various structures around the compound, the offerings seemed to establish certain boundaries between the human and ant communities; by honoring this boundary with gifts, the humans apparently hoped to persuade the insects to respect the boundary and not enter the buildings.
When I tried to answer why we don't trade with ants myself, communication was one of the first things (I can't remember what was actually first) I considered. But I worry it may be more analogous to AI than argued here.
We sort of can communicate with ants. We know to some degree what makes them tick; it's just that we mostly use that communication to lie to them and tell them this poison is actually really tasty. The issue may be less that communication is impossible, and more that it's too costly to figure out, and so no one tries to become Antman even if they...
Great post.
I don't think communicating trades is the only issue. Even if we could communicate with ants, e.g. "Please clean this cafeteria floor and we'll give you 5 kg of sugar" "Sure thing, human", I think there are still barriers.
There's a lot to the task of cleaning ...
Another problem is trust. In order for trade to work, the AI has to trust that the human will follow the deal. If 1% of humans decide to smash the robots instead, humans could be totally useless. Sure, there are some things ants could do, but if the ants sometimes caused a problem, they would be much less useful.
Humans are the strongest source of potentially adversarial optimization. The cost of defending against an enemy is huge. Hiring someone who has even a 1% chance of actively trying to harm you is probably a bad move in expectation. ...
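As a toy expected-value check of that claim, with invented numbers:

```python
# If a worker produces value B but with probability p turns adversarial
# and causes damage H, hiring is positive-EV only when p * H < (1 - p) * B.
B = 20_000      # value of a year of cooperative work, $
H = 5_000_000   # damage an adversarial insider could do, $
p = 0.01        # chance this worker is hostile

expected_value = (1 - p) * B - p * H
print(f"EV of hiring: ${expected_value:,.0f}")  # $-30,200: don't hire
```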
Should we humans broadcast more explicitly to future AGIs that we greatly prefer the future where we engage in mutually beneficial trade with them to the future where we are destroyed?
(I am making an assumption here that most, if not all, people would agree with this preference. It seems fairly overdetermined to me. But if I'm missing something where this could somehow lead to unintended consequences, please feel free to point that out.)
Humans are unlikely to be the most efficient configuration of matter to carry out any particular task the AI wants to get done - so if the power imbalance is sufficiently large the AI will be better off wiping us out to configure the matter in a more efficient way.
Human brains are easily hackable (a good book does it, or a good brainwasher); we change our views easily given the right "argument", logical and/or emotional. Anything smarter than us can figure out a way to get us to do what it wants without anything like a "trade", simply because we come to think it's the right thing to do. If you doubt it, note that EA did a good job of convincing people to donate to charity. The military is an even more extreme example. The best con does not feel like a con at all. (Not saying that either of the two examples is a con, just th...
Ants that have intelligence anywhere near the level needed for meaningful trade with us would likely cause our extinction. We don't trade with ants mainly because they're too stupid to do anything we want, despite being physically capable of doing plenty of things we might want.
If you scale up their intelligence to the level where they're worthwhile trading with, there are stories you can tell about how beneficial trade would be between our species, but frankly I don't think that we would be of much benefit to them[1].
Them! was a 1950s movie about giant ants.
Size circumscribes – it has no room
For petty furniture –
The Giant tolerates no Gnat
For Ease of Gianture –

Repudiates it, all the more –
Because intrinsic size
Ignores the possibility
Of Calumnies – or Flies.
~ Dickinson
The following issue seems fundamental and related (though I am not sure how exactly :-) ): there is a difference between the things ants could physically do and what they are smart enough to do / what we can cheaply enough explain to them. Similarly for humans: delegating takes work. For example, hiring an IQ 80 cleaner might only be worth it for routine tasks, not for "clean up after this large event and just tell me when it's done, bye". Similarly, for some reason I am not supervising 10 master's students, even if they were all smarter than me.
Sufficiently advanced AI could create bots to do the things it needs done. We cannot create an ant-equivalent bot (yet). Messengers on horseback didn't survive the car, the plane, or the internet. Bots created by AI will likely fit its needs much more neatly, and at lower maintenance cost, than paying humans.
We are everyday finding new ways to automate human labor, from mental to physical to creative. Why would AI suddenly stop that effort in order to trade with us?
Good post, but there is a big imbalance in human-ant relationships.
If people could communicate with ants, nothing would stop humans from making ants suffer whenever doing so made the deal better for humans, because of the power imbalance.
For example, domesticated chickens live in very crowded and stinky conditions, and their average lifespan is about a month, after which they are killed. Not particularly good living conditions.
People who care only about profitability do this just because they can.
Directionally agree, but: A) A short period of trade before we become utterly useless is not much comfort. B) Trade is a particular case of bootstrapping influence over what an agent values into influence over its behaviour. The other major way of doing that is blackmail - which is much more effective in many circumstances, and would have been far more common if the State didn't blackmail us into not blackmailing each other, honouring contracts, etc.
BTW, those two points are basically why many people are afraid of capitalism (i.e. our trade with superhuman organisations)...
Assuming AI doesn't care about acting ethically, and even assuming AI can communicate and find useful things for us to do, there's no reason why AI wouldn't just manipulate and coerce humans rather than trading with them.
Even in real life we have plenty of examples of humans enslaving each other; once you get to sci-fi possibilities like an AI implanting mind-control devices in human heads, why would an AI waste resources and probably sacrifice efficiency just to trade evenly with humans?
AI is dependent on humans. It gets power and data from humans and it cannot go on without humans. We don't trade with it, we dictate terms.
Do we fear a world where we have turned over mining, production, and powering everything to the AI? Getting there would take a lot more than a self-amplifying feedback loop of a machine rewriting its own code.
When discussing advanced AI, sometimes the following exchange happens:
“Perhaps advanced AI won’t kill us. Perhaps it will trade with us”
“We don’t trade with ants”
I think it’s interesting to get clear on exactly why we don’t trade with ants, and whether it is relevant to the AI situation.
When a person says “we don’t trade with ants”, I think the implicit explanation is that humans are so big, powerful and smart compared to ants that we don’t need to trade with them because they have nothing of value and if they did we could just take it; anything they can do we can do better, and we can just walk all over them. Why negotiate when you can steal?
I think this is broadly wrong, and that it is also an interesting case of the classic cognitive error of imagining that trade is about swapping fixed-value objects, rather than creating new value from a confluence of one’s needs and the other’s affordances. It’s only in the imaginary zero-sum world that you can generally replace trade with stealing the other party’s stuff, if the other party is weak enough.
Ants, with their skills, could do a lot that we would plausibly find worth paying for (see the appendix for some ideas).
We can’t take almost any of this by force, we can at best kill them and take their dirt and the minuscule mouthfuls of our foods they were eating.
Could we pay them for all this?
A single ant eats about 2mg per day according to a random website, so you could support a colony of a million ants with 2kg of food per day. Supposing they accepted pay in sugar, or something similarly expensive, 2kg costs around $3. Perhaps you would need to pay them more than subsistence to attract them away from foraging freely, since apparently food-gathering ants usually collect more than they eat, to support others in their colony. So let’s guess $5.
My guess is that a million ants could do well over $5 of the above labors in a day. For instance, a colony of meat ants takes ‘weeks’ to remove the meat from an entire carcass of an animal. Supposing somewhat conservatively that this is three weeks, and the animal is a 1.5kg bandicoot, the colony is moving 70g/day. Guesstimating the mass of crumbs falling on the floor of a small cafeteria in a day, I imagine that it’s less than that produced by tearing up a single bread roll and spreading it around, which the internet says is about 50g. So my guess is that an ant colony could clean the floor of a small cafeteria for around $5/day, which I imagine is cheaper than human sweeping (this site says ‘light cleaning’ costs around $35/h on average in the US). And this is one of the tasks where the ants have least advantages over humans. Cleaning the outside of skyscrapers or the inside of pipes is presumably much harder for humans than cleaning a cafeteria floor, and I expect is fairly similar for ants.
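Putting the arithmetic together in one place (a sketch; every number below is one of the rough guesses above, not measured data):

```python
# Back-of-envelope check of the ant-labor economics above.
# All inputs are the post's rough guesses.

ANTS_PER_COLONY = 1_000_000
FOOD_PER_ANT_G = 0.002       # ~2 mg of food per ant per day
SUGAR_PRICE_PER_KG = 1.50    # ~$3 per 2 kg
WAGE_MARKUP = 5 / 3          # pay above subsistence: ~$5/day total

daily_food_kg = ANTS_PER_COLONY * FOOD_PER_ANT_G / 1000  # 2.0 kg
subsistence_cost = daily_food_kg * SUGAR_PRICE_PER_KG    # ~$3.00
daily_wage = subsistence_cost * WAGE_MARKUP              # ~$5.00

# Throughput guess: meat ants strip a 1.5 kg bandicoot in ~3 weeks,
# while a small cafeteria floor accumulates ~50 g of crumbs per day.
carcass_g, days = 1500, 21
throughput_g_per_day = carcass_g / days                  # ~71 g/day

print(f"colony wage: ${daily_wage:.2f}/day")
print(f"throughput: {throughput_g_per_day:.0f} g/day vs ~50 g/day of crumbs")
# ~$5/day to clean the floor, vs ~$35/hour for human 'light cleaning'.
```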
So at a basic level, it seems like there should be potential for trade with ants - they can do a lot of things that we want done, and could live well at the prices we would pay for those tasks being done.
So why don’t we trade with ants?
I claim that we don’t trade with ants because we can’t communicate with them. We can’t tell them what we’d like them to do, and can’t have them recognize that we would pay them if they did it. Which might be more than the language barrier. There might be a conceptual poverty. There might also be a lack of the memory and consistent identity that allows an ant to uphold commitments it made with me five minutes ago.
To get basic trade going, you might not need much of these things, though. If we could only communicate that their all leaving our house immediately would prompt us to put a plate of honey in the garden for them and/or not slaughter them, then we would already be gaining from trade.
So it looks like the AI-human relationship is importantly disanalogous to the human-ant relationship, because the big reason we don't trade with ants will not apply to AI systems potentially trading with us: we can't communicate with ants; AI can communicate with us.
(You might think ‘but the AI will be so far above us that it will think of itself as unable to communicate with us, in the same way that we can’t with the ants - we will be unable to conceive of most of its concepts’. It seems unlikely to me that one needs anything like the full palette of concepts available to the smarter creature to make productive trade. With ants, ‘go over there and we won’t kill you’ would do a lot, and it doesn’t involve concepts at the foggy pinnacle of human meaning-construction. The issue with ants is that we can’t communicate almost at all.)
But also: ants can actually do heaps of things we can’t, whereas (arguably) at some point that won’t be true for us relative to AI systems. (When we get human-level AI, will that AI also be ant level? Or will AI want to trade with ants for longer than it wants to trade with us? It can probably better figure out how to talk to ants.) However just because at some point AI systems will probably do everything humans do, doesn’t mean that this will happen on any particular timeline, e.g. the same one on which AI becomes ‘very powerful’. If the situation turns out similar to us and ants, we might expect that we continue to have a bunch of niche uses for a while.
In sum, for AI systems to be to humans as we are to ants, would be for us to be able to do many tasks better than AI, and for the AI systems to be willing to pay us grandly for them, but for them to be unable to tell us this, or even to warn us to get out of the way. Is this what AI will be like? No. AI will be able to communicate with us, though at some point we will be less useful to AI systems than ants could be to us if they could communicate.
But, you might argue, being totally unable to communicate makes one useless, even if one has skills that could be good if accessible through communication. So being unable to communicate is just a kind of being useless, and how we treat ants is an apt case study in treatment of powerless and useless creatures, even if the uselessness has an unusual cause. This seems sort of right, but a) being unable to communicate probably makes a creature more absolutely useless than if it just lacks skills, because even an unskilled creature is sometimes in a position to add value e.g. by moving out of the way instead of having to be killed, b) the corner-ness of the case of ant uselessness might make general intuitive implications carry over poorly to other cases, c) the fact that the ant situation can definitely not apply to us relative to AIs seems interesting, and d) it just kind of worries me that when people are thinking about this analogy with ants, they are imagining it all wrong in the details, even if the conclusion should be the same.
Also, there’s a thought that AI being as much more powerful than us as we are than ants implies a uselessness that makes extermination almost guaranteed. But ants, while extremely powerless, are only useless to us by an accident of signaling systems. And we know that problem won’t apply in the case of AI. Perhaps we should not expect to so easily become useless to AI systems, even supposing they take all power from humans.
Appendix: potentially valuable things ants can do