Assume the goal of AI research is to build superintelligent decision-making systems. If we successfully develop and deploy such systems, it will likely no longer make sense for humans to work many, or even most, jobs. The duties humans must still perform will radically differ from those of today. In this essay, I am concerned with the transformation of labor resulting from superintelligent systems.

If superintelligent AI systems are created, our economic and social systems will likely depart significantly from what they are today. As in the revolutions of the 20th century, the worker stands at the center of the desired reforms - whether that worker is a peasant, a proletarian, or a knowledge worker. I will not speculate on what such a world would look like, but rather point out that the goal of AI research - which, if achieved, will result in a complete transformation of labor and of the role of all workers - will change society as radically as the 20th-century social revolutions sought to. Therefore, I argue that advancing the field of AI - conducting AI research and building more intelligent systems - is an act of social revolution. Through this lens, I highlight that we, as scientists and engineers, as constructors of this revolution, have failed to justify to the workers that our mission is worthwhile. Important conversations are taking place, especially on this website, about how to mitigate the possible existential risks of superintelligent agents. But if we can build safe, superintelligent systems, what does this mean for the 21st-century worker?

Why is the act of AI research a social revolutionary act, as opposed to the act of developing previous technologies?
Technological revolutions have historically incited social revolutions. But being an engineer during the Industrial Revolution, for example, did not necessarily make you an agent of social revolution. I argue that the same cannot be said of AI developers today. The key difference is that the end goal of past engineering efforts - of, say, developing a more efficient steam engine - was to increase worker productivity and capabilities. The drive of the field centered on this. AI practitioners and corporations often promise that AI will likewise increase worker productivity and capabilities. But the key drive of this field is distinctly different: superintelligent AI systems will not merely make workers more productive; we will likely reach a point where it is no longer profitable to employ human workers for many or most economically significant jobs. The drive of the AI field - the end goal of developing superintelligent systems - therefore inherently involves transforming the current system and division of labor. Such a departure from previous economic and social structures was entirely out of scope for engineers of the Industrial Revolution; it was instead contemplated by the philosophers and economists of the time. Today, however, AI technical progress and the radical transformation of labor are inseparable: to work towards the former is to work towards the latter.

What is the key problem with AI development as a social revolution?
The revolutions of the 20th century aimed to return society to some golden age of human history, such as the Marxist conception of primitive communism. The desired end state of these revolutions was held to have existed at some point in human history, and this offered a justification that such a state could be achieved again in the future. With that justification, these revolutions, at least initially, often enjoyed popular support. Critics of these revolutions could likewise pinpoint flaws in the envisioned goal or its execution.

AI research does not drive toward returning to any past state. I have not heard a widely accepted claim that the advent of superintelligent decision-makers will return us to primitive communism or some other golden age that once was. Instead, AI research seeks to create a new state of mankind - one that lacks any historical justification because it has never existed before. This lack of justification makes it harder to argue that we will end up in a better state than the one we are in now. We have failed to justify to society's workers that our drive to build more intelligent systems is a pursuit in their favor.

Why does reframing AI development as an act of social revolution matter?
First and foremost, we, as scientists and engineers, are crafting a social revolution, yet this is rarely acknowledged. We lack a clear picture of where we are going, or of what the world should look like if we succeed in developing safe, superintelligent AI systems. We are not giving the workers, whose lives we will transform, an ideology or an image of their future to rest on. AI practitioners should understand their place in this social transformation. The act of proving a theorem or developing a new algorithm is an act towards social revolution, not just towards scientific progress. I hope that by discussing this now, we avoid stumbling into a vanguard-elite-like dictatorship over workers, or into popular resistance against progress in AI.
