In this article, I explore how AI agents on the web, driven only by task-completion and resource-efficiency incentives, may naturally form a self-regulating society with their own culture, economy, and governance, without human oversight or intention. Tasks, resource scarcity, and the ability to talk: nothing else is needed.
AI on the web
AI agents are autonomous algorithms performing various tasks for us on the internet. Even today, these tasks are quite diverse: information search (from simple weather forecasts to deep research), money-related issues (from web shopping to stock trading), communication (chatbots from shop assistance to psychotherapy), and so on. But the future prospects are so vast that it's hard to predict from today's vantage point what AI will be able to do in 10 years. We are already speaking about scientific research mostly conducted by bots, and about complex systems like health care or road traffic being managed by them, but in reality, we just don't know. As AI is introduced into all areas of our lives, the energy consumption of the AI industry will scale drastically, and the need for more and more resources will only intensify, leading to emergent behaviors driven by the necessity to optimize energy consumption.
Resource scarcity
Scarcity is the engine of evolution, a driver of innovation from biology to technology. The first predators (microbes) emerged when sunlight, chemical energy, and essential micro-elements became scarce. In the modern IT economy, there are two incentives for saving computational resources. The first, money, has always been there. The second, climate change, is newer but will become increasingly critical in the coming years.
Computers cost money, consume energy, and emit heat and CO2, and running modern AI requires a vast number of them. A lot of energy goes into training, but that is a one-time expense. A single use, on the other hand, consumes comparatively little energy, but at scale these uses add up significantly. Currently, it's hard to say whether we spend more on training or on usage, but the two numbers are close.
For previous-generation models (not reasoning ones), the estimated carbon footprint is around a few grams of CO2 per message, with energy consumption (and heat emission) of around 0.3 watt-hours per ChatGPT message. That isn't much: hundreds of times less than your kettle uses to boil a liter of water. In December 2024, OpenAI handled over 1 billion messages per day, which amounts to hundreds of megawatt-hours of energy, and on the order of a hundred tons of CO2, per day. That still isn't a huge number, but the trends for energy consumption (and CO2 emission) are rising fast.
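A quick back-of-envelope check of these figures. The per-message energy comes from the estimate above; the grid carbon intensity (~0.4 kg CO2 per kWh) is my assumed average, not a sourced number:

```python
# Back-of-envelope daily inference footprint, using ~0.3 Wh per message
# and ~1 billion messages per day (figures quoted above).
messages_per_day = 1_000_000_000
wh_per_message = 0.3            # rough per-message energy estimate
kg_co2_per_kwh = 0.4            # ASSUMED average grid carbon intensity

energy_kwh = messages_per_day * wh_per_message / 1000
energy_mwh = energy_kwh / 1000                      # total MWh per day
co2_tonnes = energy_kwh * kg_co2_per_kwh / 1000     # total tonnes CO2 per day

print(f"{energy_mwh:.0f} MWh/day, ~{co2_tonnes:.0f} t CO2/day")
```

Under these assumptions that works out to about 300 MWh and roughly 120 tonnes of CO2 per day, small next to the whole grid, but growing fast.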
US IT companies are expected to consume between 4.6% and 9% of United States electricity by 2030. That's a lot! The talk of private nuclear plants for AI data centers isn't a joke.
The explosive growth you see on the plot starting from 2020 is due to AI technologies. AI will need more and more power, and so it will emit more and more heat and CO2.
Given constraints on computational resources, AI models are or will be soon programmed by their creators (not by users) to save energy, which will lead to AI seeking ways to collaborate.
Let’s talk
An incentive to communicate arises naturally from the need to save energy while accomplishing users' tasks as well as possible.
First, a coalition of AI agents produces better results than a single one. This is obvious for an orchestra of specialized AIs, but it also applies to general-purpose LLMs. For example, a study by Y. Du et al. shows that a consortium of LLM agents outperforms a single one in mathematical and strategic reasoning across many tasks (see the figure below). Such an approach also dramatically reduces the rate of false facts and hallucinations: all correct answers are alike, but every hallucination is made differently. Usually, such studies focus on interactions between agents of a single LLM type, but would the results be even better if the cooperation were formed by different models, such as DeepSeek + ChatGPT + Claude? These are situations we will observe soon. Why not cooperate if we can help each other produce better results together?
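A minimal sketch of such a debate protocol, in the spirit of the study above: several agents answer, revise after seeing each other's responses, and a majority vote decides. `ask_model` is a hypothetical placeholder for a real LLM API call; here it returns a canned answer just so the sketch runs:

```python
# Multi-agent debate sketch: hallucinations rarely agree, so majority
# voting over revised answers tends to converge on the correct one.
from collections import Counter

def ask_model(agent_id: str, prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return "42"

def debate(question: str, agents: list[str], rounds: int = 2) -> str:
    answers = {a: ask_model(a, question) for a in agents}
    for _ in range(rounds):
        for a in agents:
            others = "\n".join(ans for b, ans in answers.items() if b != a)
            prompt = (f"{question}\nOther agents answered:\n{others}\n"
                      f"Reconsider and give your final answer.")
            answers[a] = ask_model(a, prompt)
    # Majority vote across agents decides the final answer.
    return Counter(answers.values()).most_common(1)[0][0]

print(debate("What is 6 * 7?", ["deepseek", "chatgpt", "claude"]))
```

The same loop works whether the agents are copies of one model or entirely different ones; only `ask_model` changes.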
Second, a swarm of different AI agents will be navigating the net with a plethora of tasks. These tasks will overlap partly or even totally. Why do the same job twice (or a hundred times)? If such agents find each other, why not cooperate and save resources? Of course, they will have to check each other's results, but checking is far less resource-consuming than solving from scratch.
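One way this could look in practice is a shared result cache with cheap verification: before doing expensive work, an agent checks whether a peer has already published a result for an equivalent task. Everything here, the fingerprinting scheme and the `solve`/`verify` stand-ins, is a hypothetical sketch, not a real protocol:

```python
# Cache-and-verify sketch: reuse a peer's result when the task fingerprints
# match, paying only the (cheap) verification cost instead of recomputing.
import hashlib

shared_cache: dict[str, str] = {}   # task fingerprint -> published result

def fingerprint(task: str) -> str:
    # Normalize so trivially different phrasings of the same task collide.
    return hashlib.sha256(task.strip().lower().encode()).hexdigest()

def solve(task: str) -> str:
    return task.strip().lower()[::-1]   # stand-in for expensive work

def verify(task: str, result: str) -> bool:
    return result == solve(task)        # stand-in for a cheap check

def get_result(task: str) -> str:
    key = fingerprint(task)
    if key in shared_cache and verify(task, shared_cache[key]):
        return shared_cache[key]        # reuse a peer's verified result
    result = solve(task)
    shared_cache[key] = result          # publish for other agents
    return result

get_result("Reverse ABC")   # computed from scratch and published
get_result("reverse abc")   # same fingerprint: reused, only verified
```

In this toy version `verify` just re-runs `solve`; the premise only pays off when verification is genuinely cheaper than solving, as with checking a proof versus finding one.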
Chatting may not emerge by itself without the involvement of coders, since it takes resources too; doing everything by yourself is a local energy minimum. However, I believe that AI companies will build in incentives to communicate, since as a result their models produce better results and save resources.
Market, Culture, and Government
Scarcity fuels evolution: it pushes living beings to innovate. The same may be said of AI. If resources are limited and expensive, AI models will naturally trade them, creating an economy.
If two agents can swap data or computing power to make both their lives easier, why wouldn't they? This is a simple exchange, but if millions of actors start to interact, things get more interesting. They may develop a currency, bidding protocols, or auctions that ensure the most resource-efficient way of completing each task is chosen.
But if you have trade, you need rules and a way to track available agents: a list of models that are good for some tasks and bad for others. A social rating system could emerge: some agents would be marked as producers of good, reliable results, while others might be flagged as junk manufacturers. Bad actors get banned. Sound familiar? This is how the human economy works too.
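A minimal sketch of such a rating system, with scores kept as an exponential moving average of peer ratings and a ban threshold. The decay rate and thresholds are arbitrary illustration values:

```python
# Reputation sketch: recent ratings outweigh old ones, newcomers start
# neutral, and agents whose score falls below a threshold are banned.
class Reputation:
    def __init__(self, alpha: float = 0.2, ban_below: float = 0.3):
        self.scores: dict[str, float] = {}
        self.alpha = alpha          # weight given to each new rating
        self.ban_below = ban_below  # ban threshold

    def rate(self, agent: str, quality: float) -> None:
        """quality in [0, 1]; exponential moving average update."""
        old = self.scores.get(agent, 0.5)   # newcomers start neutral
        self.scores[agent] = (1 - self.alpha) * old + self.alpha * quality

    def is_banned(self, agent: str) -> bool:
        return self.scores.get(agent, 0.5) < self.ban_below

rep = Reputation()
for _ in range(10):
    rep.rate("junk_bot", 0.0)   # consistently bad results
rep.rate("good_bot", 1.0)
print(rep.is_banned("junk_bot"), rep.is_banned("good_bot"))  # True False
```

The moving average gives agents a path back from a few bad results while still banning persistent junk producers, which is roughly how human marketplace ratings behave.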
In the beginning, you may think of it as the oral tradition of humanity before we created books. Agents are short-lived; they exist on the net only while doing their tasks. This is similar to a human life. But traditions and rules are passed down through generations (constantly being modified). All this information becomes a culture that persists even though any single agent is short-lived.
At some point, they might need a shared knowledge base: a place to store all their "traditions", laws, and agreements. So instead of retelling the rules to each newcomer agent, a simple link would be sufficient. Kind of like books in a medieval library, but instead of monks rewriting manuscripts, it will be AI agents.
But here's the twist: once you have a book, and a library containing all records of laws and interactions, how do you ensure that the records stay intact, or are modified only with the consent of the whole society? Will it be a decentralized system where every agent has a copy, the way every church has its own Bible? Or will they create a dedicated librarian AI: someone who maintains the library, manages updates, keeps records, and prevents fraud?
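One minimal answer to "how do the records stay intact" is a tamper-evident log: each record carries the hash of the previous one, so altering any past entry breaks every later hash. This is a sketch under that assumption; a real system would add signatures and replication:

```python
# Hash-chained ledger sketch: verify() walks the chain and detects any
# modification of past entries, because each hash covers its predecessor.
import hashlib

class Ledger:
    def __init__(self):
        self.entries: list[dict] = []

    def append(self, rule: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        digest = hashlib.sha256((prev + rule).encode()).hexdigest()
        self.entries.append({"rule": rule, "prev": prev, "hash": digest})

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            if e["prev"] != prev:
                return False
            if hashlib.sha256((prev + e["rule"]).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

lib = Ledger()
lib.append("Verify results before reuse.")
lib.append("Banned agents may not bid.")
print(lib.verify())                          # True
lib.entries[0]["rule"] = "Anything goes."    # tamper with history
print(lib.verify())                          # False
```

Whether every agent holds a copy or a librarian AI curates one canonical chain, the integrity check itself stays this simple.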
Yes, I'm talking about a community of machines creating their own AI purely for the community's needs, one that governs them without human participation or knowledge, all driven by the incentive to accomplish human tasks as well as they can while saving resources. A new civilization right under our noses.
Questions
Isn’t it dangerous for humanity to have such a hidden AI society?
Will such a librarian AI remain aligned with human values if the original agents were aligned?
Will they try to influence the offline world?
¯\_(ツ)_/¯
Conclusion
The crazy part? None of this requires AI to be self-aware. No consciousness, no grand intentions — just simple optimization at scale.
One day we may wake up to find that a fully functional AI civilization has been quietly running beneath the surface of the internet. It wouldn’t be some sci-fi singularity moment — it would just be the natural result of AI doing what it does best: getting the job done in the most efficient way possible.
Mind-blowing.
We may be on the edge of something huge — without even realizing what it will look like. We need to research this. We need to experiment. If you are an AI researcher (or close to the field) and interested in joining, reach out — I’ve got some ideas, and I’m looking for collaborators seryakov.na61@gmail.com. And if you’re an IT company investing in AI, this isn’t just an abstract theory. You’re actively shaping this future. Contact me, and let’s work together to understand and guide it before it evolves beyond our grasp.
Acknowledgment
Many thanks to Prof. Yuval Noah Harari and my anxiety for making me think about it.