The benevolence of the butcher
A few days ago I published this post on the risks of powerful transformative AGI (by which I meant AGI that takes off fast and pretty much rules the world in no time), even if aligned. Among the comments there was one by Paul Christiano which I found very interesting, but which focused on a different scenario: one of slower take-off, in which AGI stays with us as a regular part of our economy for a while longer. This post is an elaboration of the answer I gave there, because it brought to my mind a different kind of risk.

It's common in rebuttals to pessimism about AGI to compare it to past technologies, which all eventually ended up boosting productivity and thus raising average human welfare in the long run (though I would also suggest that we not completely ignore the short run: after all, we're most likely to live through it. I don't just want the destination to be nice, I want the trip to be reasonably safe!). I worry, however, that carrying this way of thinking over to AGI might be a critical error of extrapolation: applying knowledge that worked in one domain to a different domain in which some very critical assumptions on which that knowledge relied no longer hold.

Specifically, when one thinks of any technology developed during or after the industrial revolution, one thinks of a capitalist, free-market economy. In such an economy, there are people who mostly own the capital (the land, the factories, and any other productive infrastructure) and there are people who mostly work for the former, putting the capital to use so it can actually produce wealth. The capital acts as a force multiplier which makes the labour of a single human worth tens, hundreds, thousands of times what it would have been in a pre-industrial era; but ultimately, it is still a multiplier. A thousand times zero is zero: the worker is still an essential ingredient. The question of how this surplus in productivity is to be split fairly between in