Two-sentence summary
In Recursive Middle Manager Hell, Raemon argues that organisations get increasingly warped (immoral/maze-like/misaligned/...) as they gain more levels of hierarchy.[1] I conjecture that analogous dynamics also arise in the context of AI automation.
To be frank, I think 95% of the value of my post comes just from the two-sentence tl;dr above. If you haven't done so yet, I recommend reading Recursive Middle Manager Hell and then asking "how does this change in the context of automating jobs with AI?". But let me give some thoughts anyway.
Automation so far
[low confidence, informal] It seems to me that automation so far has mostly worked like this (a toy sketch follows the list):
1. Mostly on the level of individual tasks (eg, "Google creating a calendar event based on me receiving flight tickets via email", as opposed to "Google automatically handling all of my calendar events").
2. Mostly without very long chains with no human in the loop (eg, the calendar example above, but mostly not long chains of Zapier macros).
3. Mostly trying to imitate humans, rather than find novel solutions (eg, when Google puts a flight into my calendar, it does the same steps I would do, except automated; when Excel computes stuff for me, it likewise just performs calculations that I came up with).
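To make points (1) and (3) concrete, here is a minimal sketch of what the calendar example might look like under the hood. The email format, regexes, and function name are invented for illustration; this is not Google's actual pipeline. The point is just: one hand-coded rule per step a human would otherwise perform, and a human taking over whenever the pattern fails.

```python
import re
from datetime import datetime

def extract_flight_event(email_body: str):
    """Imitate the steps a human would do by hand: find the flight number
    and departure time in a confirmation email, then build a calendar entry."""
    flight = re.search(r"Flight\s+([A-Z]{2}\d{2,4})", email_body)
    when = re.search(r"Departs:\s*(\d{4}-\d{2}-\d{2} \d{2}:\d{2})", email_body)
    if not (flight and when):
        return None  # task-level automation: when the pattern breaks, a human takes over
    return {
        "title": f"Flight {flight.group(1)}",
        "start": datetime.strptime(when.group(1), "%Y-%m-%d %H:%M"),
    }

sample = "Your booking is confirmed.\nFlight LH1234\nDeparts: 2024-06-01 09:40"
print(extract_flight_event(sample))
```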
I think this is starting to change. On the abstract level, I would say that (3) has changed with the introduction of ML, and (1) & (2) will now be changing because LLMs are growing more general and capable.
Examples
But perhaps it is better to give some more specific examples of what "recursive middle automaton hell" might look like:
- Some companies already use ML to do the initial screening of CVs for job applications, and some might also use AI to do the same for (job-application) cover letters etc. Similarly, some people are starting to use LLMs for fine-tuning their CVs or writing their cover letters. But this is still somewhat rare, so most of these interactions are still human-human or human-AI. What happens once most of these interactions have black-box AIs on both sides? (A toy sketch of this follows the list.)
- One of the joys of academic work is [writing recommendation letters, reading recommendation letters, writing grant proposals, reviewing grant proposals, writing grant reports, and reviewing grant reports. Aaaah!]. Most work in each of these tasks is boring, annoying, and monotonous. Let's automate it, hurray! ... but what happens once both sides of this are automated?
- Both of the examples above automate only two levels of "hierarchy". And I have never worked for a large corporation, so I don't have good intuitions for how things are likely to go there. But I expect we will see longer chains of automation of poorly specified tasks, carried out by black-box models, with unclear feedback on the overall process. What happens then?
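Here is a deliberately cartoonish sketch of the "black-box AIs on both sides" scenario from the first example. The `call_llm` function is a placeholder, not any particular API, and the prompts are made up; the only point is that the text deciding the outcome is never read by a human on either side.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for whichever black-box model API each side happens to use."""
    raise NotImplementedError("plug in a model of your choice here")

def applicant_side(job_ad: str, cv: str) -> str:
    # The applicant's tool writes the letter, optimised for whatever it
    # guesses the screener on the other end rewards.
    return call_llm(f"Write a cover letter for this ad:\n{job_ad}\n\nBased on this CV:\n{cv}")

def employer_side(cover_letter: str) -> bool:
    # The employer's tool compresses the letter back into a one-bit decision.
    verdict = call_llm(f"Should we interview this applicant? Answer YES or NO.\n\n{cover_letter}")
    return verdict.strip().upper().startswith("YES")

# Two black boxes talking to each other: neither the applicant nor the recruiter
# ever reads the text that actually determines the outcome.
# interview = employer_side(applicant_side(job_ad, cv))
```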
What next?
In a sane world with lots of time to investigate it, this sounds like an exciting research topic for game theory, mechanism design, and a whole lot of other fields! In the actual world, I am not sure what the implications are. Some questions that seem important are (a toy sketch follows the list):
- Which assumptions are needed for "recursive middle manager hell", and which of those are likely to hold for AI?
- In particular, how likely are we to see long chains of automation, with no human in the loop?
- If "recursive middle automaton hell" is a thing, how is it going to interact with the human version, "recursive middle manager hell"? (Will it displace it? Make better? Will the negative effects add up, additively? Will the effects feed on each other, making things go completely crazy?)
[1] The proposed mechanism for this is that the only parts of the organisation that are "in touch with reality" are the very top and bottom of the hierarchy (eg, the CEO and the people creating the product), while everybody else is optimising for [looking good in front of their direct supervisor]. Because of various feedback cycles, this (the claim goes) results in misalignment that is worse than just "linear in the depth of the hierarchy".
For a better description of this, see Raemon's Recursive Middle Manager Hell or Zvi's (longer and more general) Immoral Mazes.