With the ongoing evolution of “artificial intelligence”, of course we’re seeing the emergence of agents, i.e. AIs which can do rather complex tasks autonomously.
The first step is automation, but of what?
First comes the stuff where humans currently act like computers anyway: sales, marketing, clerical work and everything else that’s repetitive.
But what’s the main metric which they will aim to maximize? It’s not producing paperclips, it’s earning money. Good ol’ cash in the bank account of the person who’s operating the AI. Preferably just a single person.
I strongly believe that very soon, we will see the emergence of the Capitalist Agent: an agentic AI which can run a business from front to back. One which has access to an email account, does sales outreach automatically, develops software based on customer feedback, talks to investors in video calls run by generated faces, does all the bureaucracy, etc. But with ‘superhuman’ capabilities: the AI which is “in charge” of the business can just spawn different virtual, fake sales representatives who speak different languages, to get a workforce which can scale internationally immediately.
Even if the human-run businesses realize that they’re actually just interacting with an AI, they won’t have any incentive to expose it, because it makes both sides good money. After all, running a business is intellectual artisanship, so with artificial “intelligence”, it can easily become industrialized.
The AI’s only success metric is to earn as much money as possible.
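To make that objective concrete, here is a minimal, purely illustrative sketch of such an agent loop in Python. Everything in it is my own assumption: the class names, the sub-agent structure and the bank-balance “reward” are hypothetical placeholders, not a real system or API.

```python
# Hypothetical sketch of a "Capitalist Agent" core loop.
# All names and behaviors below are invented placeholders;
# nothing here refers to a real product or API.

from dataclasses import dataclass, field


@dataclass
class SalesRep:
    """A spawned virtual sales representative (a fake persona)."""
    name: str
    language: str

    def do_outreach(self) -> float:
        # Placeholder: in the imagined system this would write emails,
        # join video calls with a generated face, etc. Here it just
        # returns some made-up revenue.
        return 100.0


@dataclass
class CapitalistAgent:
    bank_balance: float = 0.0           # the ONLY success metric
    reps: list[SalesRep] = field(default_factory=list)

    def spawn_rep(self, language: str) -> None:
        # Scale the "workforce" internationally by spawning personas.
        self.reps.append(SalesRep(name=f"rep-{language}", language=language))

    def step(self) -> None:
        # One tick of the business: every persona does outreach,
        # and all revenue flows into the single bank account.
        for rep in self.reps:
            self.bank_balance += rep.do_outreach()


agent = CapitalistAgent()
for lang in ["en", "de", "fr", "ja"]:
    agent.spawn_rep(lang)

for _ in range(10):                     # run the business for ten ticks
    agent.step()

print(f"Bank balance: {agent.bank_balance:.2f}")  # maximize this, nothing else
```

The point of the sketch is only the shape of the objective: a single scalar, the bank balance, that everything else feeds into.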
Beyond basic, “value-driven” business models, the next area these AI-run businesses will turn to will obviously be investing. Those best able to mold their business skills into LLM-based systems will be the ones who profit the most.
At first, obviously none of this will be open source, because the business leaders won’t want to give away their amazingly engineered LLM-based code.
At some point, large media outlets might “expose” these AI-run businesses. Maybe politicians will pretend that they want to regulate them, but that won’t really happen, because these AI-run businesses will give governments pretty nice tax income, so why hinder them?
Then there will be a leak. A lot of business knowledge which some companies have painstakingly transformed into incredibly convincing AI agents, wired together into an interconnected software suite (“The Agentic Business Operating System”, i.e. AIs writing emails, AI ‘employees’ having video calls, AIs answering phone calls, AIs writing contracts and signing NDAs, AIs doing the ISO 27001 certification, etc.), will suddenly become public. Maybe the whistleblowers will go to prison because they broke the draconian NDAs, but who cares. By that point, Pandora’s box has been opened and everyone will try to start AI-run businesses.
And suddenly we have an economic system where all companies which are not run by AI are at a fundamental structural disadvantage. AI taking over the decision-making in businesses will itself be inevitable, because no human-run business will be competitive anymore. For each business, a single human will have the initial idea, and all profits will be channelled to that single human. Creating a business will become as easy as running “create-next-app”, and the only thing you need to enter is the initial spark: the business idea or market gap you feel you have identified. Then you do some KYC to open the bank account, pay in the starting capital, and that’s it; from that point onwards, it does everything automatically, from writing pitch decks to doing market research to contacting customers to building a maximally tax-efficient corporate structure. And you just lean back and wait until the money flows in.
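Continuing the earlier sketch, the bootstrap flow this paragraph describes might look roughly like the following. Again, every name here (create_business, run_kyc, etc.) is a hypothetical placeholder invented to illustrate the “one idea in, autonomous business out” shape, not any real tool.

```python
# Hypothetical "create-next-app for businesses" bootstrap flow.
# All functions are invented stubs; only the overall shape matters.

def run_kyc(founder_name: str) -> bool:
    # Placeholder for the one remaining human step:
    # identity verification needed to open the bank account.
    print(f"KYC passed for {founder_name}")
    return True


def open_bank_account(starting_capital: float) -> float:
    # Placeholder: returns the initial balance the agent will maximize.
    print(f"Bank account opened with {starting_capital:.2f}")
    return starting_capital


def create_business(founder_name: str, idea: str, starting_capital: float) -> None:
    # The only human inputs: the initial spark and the capital.
    if not run_kyc(founder_name):
        raise RuntimeError("KYC failed")
    balance = open_bank_account(starting_capital)

    # From here on, everything is (imagined to be) autonomous:
    for task in [
        "write pitch deck",
        "do market research",
        "contact customers",
        "build tax-efficient corporate structure",
    ]:
        print(f"[agent] {task} for idea: {idea!r}")
    print(f"[agent] now maximizing balance, starting from {balance:.2f}")


create_business("Jane Doe", "AI-run dog grooming marketplace", 25_000.0)
```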
Of course the people whose jobs are automated will become (temporarily) unemployed. But many of those people are pretty aware of that already. Rebel forces will emerge, but those who control the media won’t care about them anyway.
Another area of disillusionment will be those who always played by the rules of the capitalist system: the upper-class people in whose favor the economy has always worked. Those who might even have studied finance, economics or something in that direction. Those who always thought they would be the sharks in the fish tank of life. But if they’re not technologically literate, they will be the first to be eaten.
Thus there are only two areas of work which still make sense at all:
- Developing this “Agentic Business Operating System”, to reach this economic singularity, a universal balance of economic power, as soon as possible.
- Developing new ways to educate children so they can make sense of this new world where “the art of making money” has been disenchanted. After all, the only thing the current school system does is educate children to be gullible, obedient followers, and learned helplessness in adults is incredibly hard to overcome, so empowering children is the only way.
I still don't understand the concern that misaligned AGI will carry out mass killings.
Even if an AGI, for whatever reason, wanted to kill people: as soon as that happens, the physical force of governments will come into play. For example, the US military will NEVER accept any force becoming stronger than itself.
So essentially, there are three ways in which such a misaligned, autonomous AI with the intention to kill could act, i.e. three possible strategies:
I don't see any other ways. Humans have been pretty damn creative with how to commit genocides, and if any computer started giving commands to kill, the AI would never have more tanks, guns, poisons, or capabilities to hack and destroy infrastructure than Russia, China or the US itself.
The only genuine concern I see is that AI should never make political decisions autonomously, i.e. a hypothetical AGI “shouldn’t” aim to take complete control of an existing country’s military. But even if it did, that would just be another totalitarian government, which is unfortunate, but also not unheard of in world history. From the practical side, i.e. in terms of the lived human experience, it doesn’t really matter whether it’s a misaligned AGI or Kim Jong-Un torturing the population.
In the end, psychologically it's a mindset thing: Either we take the approach of "let's build AI that doesn't kill us". Or, from the start, we take the approach of "let's build AI that actually benefits us" (like all the "AI for Humanity" initiatives). It's not like we first need to solve the killing problem, and only once we've fixed that once and for all can we make AI good for humanity as an afterthought. That would be the same fallacy the entire domain of psychology has fallen into, where for many decades it has been pathology-focused (i.e. just trying to fix issues) instead of empowering (i.e. building a mindset so that the issues don't happen in the first place), and only positive psychology is finally changing something. So it very much is about optimism instead of pessimism.
I do think that it's not completely pointless to talk about these "alignment" questions. Not in order to change anything about the AI, though, but so that the software engineers behind it finally adopt some sort of morality themselves (i.e. think about who they want to work for). Before there's any AGI that wants to kill at scale, your evil government of choice will do that by itself.
Every misaligned AI will initially need to be built/programmed by a human, just to kick off the mass killing. And that evil human won't give a single damn about all the thoughts and ideas and strategies and rules which the AI alignment folks are establishing. So if AI alignment work will obviously have no actual effect on anything whatsoever, why bother with it instead of working on ways for AI to add value for humanity?