First, a few words about me, as I’m new here. 

I am a professor of economics at SGH Warsaw School of Economics, Poland. Years of studying the causes and mechanisms of long-run economic growth brought me to the topic of AI, arguably the most potent driver of future economic growth. However, thanks in part to reading numerous excellent posts on Less Wrong, I soon came to understand that this future growth will most likely no longer benefit humanity. That is why I am now switching to the topic of AI existential risk, viewing it from a macroeconomist's perspective.

The purpose of this post is to draw your attention to a recent paper of mine that you may find relevant.

In the paper Hardware and software: A new perspective on the past and future of economic growth, written jointly with Julia Jabłońska and Aleksandra Parteka, we put forward a new hardware-software framework, helpful for understanding how AI, and transformative AI in particular, may impact the world economy in the coming years. A new framework like this was needed, among other reasons, because existing macroeconomic frameworks could not reconcile past growth experience with the approaching prospect of full automation of all essential production and R&D tasks through transformative AI.

The key premise of the hardware-software framework is that in any conceivable technological process, output is generated through purposefully initiated physical action. In other words, producing output requires both some physical action and some code, i.e., a set of instructions describing and purposefully initiating that action. Therefore, at the highest level of aggregation, the two essential and complementary factors of production are physical hardware ("brawn"), performing the action, and disembodied software ("brains"), providing information on what should be done and how.
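To fix ideas, here is a minimal sketch in my own notation (the paper's exact functional forms may differ): aggregate output Y can be written as a CES function of hardware H and software S, with an elasticity of substitution below one to capture the fact that the two factors are essential and complementary:

$$Y = \left[a\,H^{\frac{\sigma-1}{\sigma}} + (1-a)\,S^{\frac{\sigma-1}{\sigma}}\right]^{\frac{\sigma}{\sigma-1}}, \qquad 0 < a < 1,\ \sigma < 1.$$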

This basic observation has profound consequences. It underscores that the fundamental complementarity between factors of production, derived from first principles of physics, cuts across the conventional divide between capital and labor. From the physical perspective, what matters is whether an input provides energy or information, not whether it comes from a human or a machine.


For any task at hand, physical capital and human physical labor are fundamentally substitutable inputs, contributing to hardware: they are both means of performing physical action. Analogously, human cognitive work and digital software (including AI) are also substitutes, making up the software factor: they are alternative sources of instructions for the performed action. It is hardware and software, not capital and labor, that are fundamentally essential and mutually complementary.
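Continuing the sketch above (again, the notation and nesting are mine, offered only as an illustration), one can treat physical capital K and human physical labor L_{ph} as substitutes within hardware, and human cognitive work C and digital software/AI A as substitutes within software, at the level of a given task:

$$H = \left[K^{\frac{\eta-1}{\eta}} + L_{ph}^{\frac{\eta-1}{\eta}}\right]^{\frac{\eta}{\eta-1}}, \qquad S = \left[C^{\frac{\nu-1}{\nu}} + A^{\frac{\nu-1}{\nu}}\right]^{\frac{\nu}{\nu-1}}, \qquad \eta,\ \nu > 1,$$

with high elasticities within each aggregate, while H and S remain complements (sigma < 1) between them.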

The hardware-software framework involves a sharp conceptual distinction between mechanization and automation. Mechanization of production consists in replacing human physical labor with machines within hardware. It applies to physical actions but not the instructions defining them. In turn, automation of production consists in replacing human cognitive work with digital software within software. It pertains to cases where a task, previously involving human thought and decisions, is autonomously carried out by machines without any human intervention. 

The various tasks are often complementary among themselves, though. At the current state of technology, some of them are not automatable, i.e., they involve cognitive work that must be performed by humans. Hence, thus far, aggregate human cognitive work and digital software have remained complementary. However, upon the emergence of transformative AI, which would allow for full automation of all economically essential cognitive tasks, these factors are expected to become substitutable instead.
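In the notation of the sketches above, this is a statement about the aggregate elasticity of substitution between C and A: because some essential cognitive tasks cannot yet be handed over to A, the effective aggregate elasticity is currently below one, whereas with transformative AI it rises above one and S behaves approximately like C + A, i.e., instructions can be supplied by humans or by machines interchangeably. This rendering is mine; the paper models the regime switch in its own terms.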

The hardware-software framework nests a few standard models as special cases, among them the standard model of an industrial economy with capital and labor, and a model of capital-skill complementarity.
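As an illustration of how such nesting can work (my reading, under the assumptions of the sketches above): shutting down the AI margin, A = 0, so that software is supplied only by human cognitive work, and letting capital dominate hardware, collapses the framework to output produced from capital and labor, i.e., the familiar model of an industrial economy.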

From the policy perspective, the framework can inform the debate on the future of global economic growth, in particular casting some doubt on the “secular stagnation” prediction, still quite popular in the economics literature.

In the paper, we proceed to quantify the framework’s predictions empirically, using U.S. data for 1968-2019. 

An important strength of the framework, and one that is probably most relevant for the Less Wrong audience, lies in its ability to provide some crisp predictions for a world with transformative AI.

Namely, in the baseline case the hardware-software framework suggests that transformative AI will accelerate the economic growth rate, likely by an order of magnitude – eventually up to the growth rate of compute (Moore’s Law). It also suggests that upon the emergence of transformative AI, human cognitive work and AI will switch from complementary to substitutable. People would then find employment only for as long as they remain price-competitive against AI. The framework also suggests that with transformative AI, the labor income share will drop precipitously toward zero, with predictable implications for income and wealth inequality.
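A stylized way to see the labor-share prediction, in the notation of the sketches above (my rendering, not a result quoted from the paper): the labor income share is roughly the payments to human brawn and human brains relative to output,

$$\text{labor share} \approx \frac{w_{ph}\,L_{ph} + w_{c}\,C}{Y},$$

and once machines can supply both physical action (via K) and instructions (via A) more cheaply than humans at scale, both terms in the numerator are competed down relative to Y, pushing the share toward zero.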

Of course, the latter two predictions hold only under the assumption that existential risk from misaligned TAI does not materialize earlier.

I hope you will find this research relevant. Thank you.
