Review

I think Zvi's Immoral Mazes sequence is really important, but comes with more worldview-assumptions than are necessary to make the points actionable. I conceptualize Zvi as arguing for multiple hypotheses. In this post I want to articulate one sub-hypothesis, which I call "Recursive Middle Manager Hell". I'm deliberately not covering some other components of his model[1].

tl;dr: 

Something weird and kinda horrifying happens when you add layers of middle-management. This has ramifications for when/how to scale organizations, where you might want to work, and maybe general models of what's going on in the world.

You could summarize the effect as "the org gets more deceptive, less connected to its original goals, more focused on office politics, less able to communicate clearly within itself, and more strongly selected for sociopathy in upper management."

You might read that list of things and say "sure, seems a bit true", but one of the main points here is "Actually, this happens in a deeper and more insidious way than you're probably realizing, with much higher costs than you're acknowledging. If you're scaling your organization, this should be one of your primary worries."

The Core Model

Say you have a two-layer company, a CEO and a widgetmaker. The CEO is directly in contact with reality – his company either is profitable or not. He can make choices about high-level-widgetmaking strategy, and see those choices play out in customers that buy his product. 

The widgetmaker is also in direct contact with reality – he's got widgets to make. He can see them getting built. He can run into problems with widget-production, see that widgets are no longer getting made, or getting made worse. And then he can fix those problems.

Add one middle manager into the mix.

The middle manager is directly involved with neither widget production nor the direct consequences of high-level widget strategy. Their feedback loop with reality is weaker. But they do directly interact with the CEO and the widgetmakers, so they still get exposed to the object-level problems of widget-making and company-profit-maximizing.

Hire a lot of widgetmakers, such that you need two middle managers. Now, the middle managers start talking to each other, and forming their own culture. 

Scale the company enough that you need two layers of middle-managers. Now there's an upper layer who reports to the CEO, but the things they report on are "what did the lower-middle-managers tell me?". The lower layer talks directly to the widgetmakers, and reports down what they hear about high-level strategy from upper management.

Lower middle management wants promotions and raises. Upper management isn't directly involved with the process of widgetmaking, so they only have rough proxies to go on. Management begins constructing a culture around legible signals of progress, which begin to get goodharted in various ways.
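To make that dynamic concrete, here's a toy sketch (all names and numbers invented for illustration): if upper management can only see a "legibility-weighted" proxy of each person's work, promotion tracks legibility rather than true value, even before anyone lies.

```python
# Toy goodharting example: promotion decisions only see a "legible" proxy
# of each person's work, so the proxy can promote the wrong person.
# All names and numbers are made up purely for illustration.
candidates = [
    {"name": "A", "true_value": 9, "legibility": 0.3},
    {"name": "B", "true_value": 5, "legibility": 0.9},
    {"name": "C", "true_value": 7, "legibility": 0.5},
]
for c in candidates:
    # Upper management's rough proxy: value only counts insofar as it's visible.
    c["proxy_score"] = c["true_value"] * c["legibility"]

promoted = max(candidates, key=lambda c: c["proxy_score"])
best = max(candidates, key=lambda c: c["true_value"])
print(f"promoted on proxy: {promoted['name']}; actually most valuable: {best['name']}")
```

And once it's common knowledge that promotion tracks the proxy, everyone is incentivized to shift effort from `true_value` to `legibility`, which is the goodharting loop described above.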

Then, scale the company enough that you have three layers of middle management. 

Now, in the center of the hierarchy are people who never talk to someone who's directly engaged with a real-world problem. And there are multiple levels, which create a ladder for career advancement. Middle management culture develops which is about career advancement – people rise through the ranks of that culture if they prioritize career advancement as their goal, trading off against other things. Those people end up in charge of how career advancement happens, and they tend to promote people who are like them. 

You can fudge numbers to make yourself look good, and because nobody is in direct contact with reality, it's hard to tell when the numbers are bullshit. It's really hard to evaluate middle managers, and even if you could, evaluation is expensive. If you're the CEO, you have a million other tasks to do and fires to fight. So even if you're trying hard to prevent this from happening, if you're scaling the company, it's likely happening anyway, outside the places you're trying to evaluate.
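A minimal simulation of that signal-degradation story, assuming (purely for illustration, not as an empirical claim) that each management layer inflates the number it passes upward by roughly 10%, plus some noise:

```python
import random

def reported_value(true_value, layers, inflation=0.10, noise=0.05, seed=0):
    """Toy model: each management layer slightly inflates the number it
    passes upward and adds noise. Returns what the CEO eventually hears.
    The inflation and noise parameters are invented for illustration."""
    rng = random.Random(seed)
    value = true_value
    for _ in range(layers):
        value *= 1 + inflation + rng.uniform(-noise, noise)
    return value

# Small per-layer distortions compound multiplicatively, so the CEO's
# picture drifts further from ground truth with every added layer.
for layers in [0, 1, 3, 5]:
    print(f"{layers} layers -> CEO hears {reported_value(100.0, layers):.1f} (truth: 100.0)")
```

The point of the sketch isn't the specific numbers; it's that the distortion is multiplicative in the number of layers, which is why adding "just one more" layer of management is more costly than it naively looks.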

(Now, imagine that instead of inventing widgets, you're trying to ensure that the EA community has a positive impact, or that AI Alignment gets solved, or some other vague goal with terrible feedback loops instead of physical widgets that clearly either work or don't work, and are either selling or not selling).

See Zvi's The Road to Mazedom for a more detailed review of what this might look like.

Examples

Things are only just starting to get weird, but let's pause here for some illustrative anonymized anecdotes:

1. Alice, and a manager encouraging "legible" achievements.

A friend of mine ("Alice") worked at Google. In a meeting with her manager, he said "This new project you're working on will be good for you. It's very concrete and will look good for your career." He was treating this as a favor he was offering her, that he expected her to appreciate. He thought part of his job was helping his employees advance. And, notably, there wasn't any focus on "and this project is actually valuable, or the research is actually interesting." The focus was on legible currency.

Alice felt like the whole company seemed to be pressuring her into becoming the sort of person who cared about those legible achievements.

2. Beth, and managers outright lying.

Another friend ("Beth") worked at a 500-ish-employee company. Once, she had a good manager ("Bob") who had a clear understanding of strategy, cared about the product and the company vision, etc. Then he got promoted, and she got a new manager ("Barnum") who was clearly a careerist. Barnum would actively distort the numbers about how their team was doing, framing things misleadingly and sometimes actively lying.

Beth tried to bring this up in her periodic skip-level meetings with original manager Bob. But it was awkward to say explicitly "Yo, my manager seems to be outright lying sometimes?" – it would likely become a protracted conflict that made her life much worse. I don't remember the exact details of how this played out, but I think she waited months to bring it up, brought it up sort of gingerly at first, and even when she was more concrete, Bob was basically too busy to take action on it.

3. Charlie, and more managers being deceptive.

I know of an EA org that once hired a mid-level person ("Charlie") to manage office stuff, who I ended up collaborating with on joint projects sometimes. Occasionally there'd be a logistical problem we were running into that I'd want to talk about in the general project slack-channel. Charlie would message me in DMs asking me not to talk about it in public channels, because it'd look bad to Charlie's higher-level manager that we weren't on top of things.

Some other colleagues and I had similar experiences with Charlie. Eventually someone talked to Charlie's managers about it, and several months later, they let Charlie go. But the whole thing took a long time to play out.

Note that both this and the previous anecdote involve "It's pretty effortful to deal with conflict in an org, which means cultural problems can sit around, unsolved, for a long while." If you're scaling fast, and it takes you 5 months to resolve a problem (either by firing someone, or iterating on intense feedback and figuring out how to resolve it), and there are multiple such problems, they might create bad cultural effects that propagate faster than you can deal with them.

4. David, and a culture of not-especially-truthseeking

David was an AI alignment researcher. After getting used to "rationalist" culture where it's highly encouraged to ask people to be more specific, or ask questions like "why do you believe that?", they joined a new org where that came across as kinda aggressive. It was hard to get clarity on what people were actually talking about, and figure out when their ideas made sense or not.

There was also a general sense that workplace conversations had more of an undercurrent of "we're playing some kind of status game", and they felt more need to be strategic about what they said.

I don't think this was necessarily a middle-management driven problem – this is just how a lot of human cultures are, by default. But I bring it up here to highlight the base level of obfuscation you can expect in an organizational culture, before middle management goodharting starts to warp it further.

5. Me, and organizational spaghetti code

When I worked at Spotify, I was hired to help build a tool to automate the process wherein new employees got all the correct permissions, software, hardware, and other onboarding. There were finicky details that made this hard – it never quite reached a point where it worked 100% reliably enough that the IT department could switch to using it without checking everything by hand.

Meanwhile, in another Spotify office, another IT department was working on a project that was trying to solve a different problem, but also needed to control whether employees had the correct permissions for our Enterprise Google Drive setup, which was kinda redundant.

Both of us had incentive to grow our projects over time to encompass more things – i.e., once you'd built a bunch of basic infrastructure, it felt kinda silly not to use it to solve more problems that benefited from a single source of truth.

Also if we grew the scope of our respective projects, we each looked/felt more important, and got to claim more credit for a bigger company impact.

Eventually we both bumped into each other and noticed we were doing duplicate work, and were faced with some options:

  • Just do the duplicate work
  • Try to merge our projects
  • Have one of us stop our project

And, like, guys, I'm a rationalist and I try to be a good, practical person. But it was amazing a) how triggered I felt about the other guy who was 'encroaching on my turf', b) how easily justifications came to me for why my project should be the one to survive and was better than his, and c) why my excuses for why it was taking forever seemed more justifiable than his excuses for why his was taking forever.

I think we both went on building our tools for another year, and somehow they never quite reached fruition.

Deepening over time

So, to recap here, we have a few things happening:

  • Managers are hard to evaluate.
  • Managers are comparatively incentivized to spend much of their time thinking about the company's internal social/political world.
  • If you have hierarchy in a company, regardless of whether people are "middle managers" per se, there's a tendency for people to come to care about advancing in the hierarchy. It's a natural thing to want to do.
  • Most people aren't doing that good a job tracking reality or having coherent goals in the first place.
  • Managers start to goodhart on their objectives – some by accident, some deceiving themselves, some actively lying.
  • Managers who prioritize advancing in the company tend to get promoted, and then hire more people like themselves – people who are either willing to lie, or are likely to self-deceive into confusedly goodharting. 
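That last bullet is the recursive piece. As a hedged toy model (a logistic-growth sketch with invented parameters, not an empirical claim): if people who prioritize advancement have even a modest edge at getting promoted, and then hire people like themselves, their share of management compounds across hiring generations:

```python
def careerist_share(generations, initial=0.05, advantage=0.5):
    """Logistic toy model: each hiring generation, careerists' share of
    management grows in proportion to both their current share and their
    promotion/hiring advantage. All parameters are invented for illustration."""
    p = initial
    for _ in range(generations):
        p = p + advantage * p * (1 - p)  # stays bounded in [0, 1]
    return p

# A small initial share plus a modest selection advantage still ends up
# dominating management after enough hiring generations.
for g in [0, 3, 6, 10]:
    print(f"after {g} generations: {careerist_share(g):.0%} of management")
```

The exact curve depends entirely on the made-up parameters; the qualitative point is that the growth is self-reinforcing, because the people being selected for are also the people doing the selecting.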

On top of all that, we have the usual run-of-the-mill "Principal-Agent problems", where it's hard to hire someone to go off and strategically do complicated stuff on your behalf.

The Recursive Middle Manager Hell hypothesis doesn't merely say "hire too many middle managers and your company starts to goodhart/lose-purpose/become-deceptive." It starts with that, but then the problem folds back in on itself, recursively creating a company where these problems compound, becoming worse than the sum of their parts.

The second generation of the company is built on a culture where "advance in the hierarchy" (rather than "focus on the core goals of the company") is implicitly the main thing to be doing. "Do stuff that seems legibly valuable" becomes the main currency, rather than "do stuff that is actually valuable." (And "begin to goodhart, confusing yourself about whether you're actually doing something valuable" starts becoming an implicit part of the culture.)

The third generation takes it a step further. Now that "Do stuff that is legibly good" is the story people are explicitly telling each other, it becomes the substrate of a culture where "pretend to do stuff that is legibly good" is what people are actually doing. Eventually, many people pick up on the fact that "pretend to do legible good" is the real game, and they start making choices that assume other managers are also pretending.

And finally you might reach a generation where "pretend to produce legible good" is just a piece in a game that is essentially disconnected from reality. People don't even think of it as "pretending" anymore, it's just what people in-the-know do. The upper management who control most of the company structure build an ecosystem based around loyalty and trading favors, but which uses "make powerpoints describing the results of your projects" as a sort of token to be manipulated. The company continues to produce value incidentally through inertia, but it's now much harder to steer, and there is a lot of inefficiency and waste. If the world changes significantly it'll have a harder time pivoting. If it does successfully pivot, it'll probably be executed through a small department that works more independently, in spite of company culture.

In practice these generations don't come in discrete stages. But I find it helpful for thinking about how generations of company hiring and cultural accumulation might layer on top of each other. (Side note: these stages roughly correspond to Baudrillard's simulacra levels, which some have found useful for modeling how language is used. See Simulacra Levels and their Interactions and Simulacrum 3 As Stag-Hunt Strategy for further detail.)

I'll flag that each generation here is essentially an additional sub-hypothesis, which you can accept or reject independently. I personally think it's pretty likely that all four layers happen at least sometimes. And whether or not the generations progress in exactly this way, the broader notion – that a company culture undergoes phases, where each generation of hiring attracts and builds on the culture the previous generation established (and gets harder to steer) – seems likely true, independent of exactly how it plays out.

Hard to reverse, and hard to talk about

Zvi calls progression down this path "raising maze levels", inspired by the book Moral Mazes, which explores a few case studies of companies with many layers of middle management, where these pathologies got very extreme.

Say you're a CEO, or otherwise in company leadership trying to ensure your company can communicate clearly, focus on producing object-level value, etc. It's much easier to stop the culture from progressing down this path, than to reverse the culture once it's taken root. This is for a few reasons:

  1. The more people are at your company, the more people you either have to change the behavior of, or fire/replace. So more people straightforwardly equals "more work."
  2. If upper management is actively benefiting from the new culture and in fact helped create it, then you don't just have to do the linear work of changing habits or firing people. You need to fight an entrenched power structure that will actively oppose you.
  3. By default, people are just pretty crap at telling the difference between "actually working on a problem for real" and "confabulating reasons why their pretend work is useful." They may literally not know the difference. So if you talk to people about how you need to fix the company culture, and not exaggerate/lie/goodhart on objectives, they may nod sagely... and then go back to doing pretty much what they were doing before, and not notice the difference.
  4. Once the maze culture takes root, people become even more crap at noticing when they're confabulating, deceiving or goodharting. They're incentivized not to notice, they're incentivized not to care if they do, and they're incentivized to look the other way if others are.
  5. The entrenched power structures that benefit from higher maze levels will take advantage of #3 and #4, equivocating between different claims in a way that is plausibly deniable and hard to pin down. 

Implications for EA and AI

There are many more details here, but I want to keep this reasonably short while emphasizing my key takeaways. 

I think it is sometimes appropriate to build large organizations, when you're trying to do a reasonably simple thing at scale. 

I think most effective altruist and AI alignment organizations cannot afford to become mazes. Our key value proposition is navigating a confusing world where we don't really know what to do, and our feedback loops are incredibly poor. We're not sure what counts as alignment progress, and many things that might help with alignment also help with AI capabilities and push us closer to either hard takeoff or a slow-rolling unstoppable apocalypse.

Each stage of organizational growth brings a bit less contact with reality, and a bit more incentive to frame things so they look good.

I keep talking to people who think "Obviously, the thing we need to do is hire more. We're struggling to get stuff done, we need more people." And yes, you are struggling to get stuff done. But I think growing your org will diminish your ability to think, which is one of your rarest and most precious resources.

Look at the 5 example anecdotes I gave, and imagine what happens not when they are happening individually, but all at once, reinforcing each other. When managers are encouraging their researchers to think in terms of legible accomplishments. When managers are encouraging their researchers or programmers to lie. When projects acquire inertia and never stop even if they're pointless, or actively harmful – because they look good and even a dedicated rationalist feels immense pressure to make up reasons his project is worthwhile.

Imagine if my silly IT project had been a tool or research program that turned out to be AI capabilities accelerating, and then the entire company culture converged to make that difficult to stop, or talk plainly about, or even avoid actively lying about it.

What exactly to do about this is a bigger post. But for now: if your instinct is to grow – grow your org, or grow the effective altruism or AI safety network – think seriously about the costs of scale.

I recommend Zvi's Protecting Large Projects Against Mazedom for concrete advice here. (In general I recommend reading the whole sequence, although it starts off with a couple of posts that make less obvious claims about superperfect competition, which I'm less confident in and don't think are necessary to get the rest of the model.)

I'll end by summarizing the highlights from Protecting Large Projects:

  1. Do less things and be smaller
  2. Minimize levels of hierarchy
  3. Skin in the game
  4. Soul in the game
  5. Hire and Fire Carefully
  6. Promote, Reward and Evaluate Carefully
  7. Fight for Culture
  8. Avoid Other Mazes
  9. If necessary, Start Again

Future work?

A thing on my mind is, I expect a lot of people to have taken at least a brief look at these arguments, and been like "I dunno, maybe, but scaling organizations still seems really useful/important, and I don't know that I buy the effects here are strong enough to outweigh that."

And... that's super fair! The arguments here are pretty abstract and handwavy. I think the arguments here are good enough to promote this as a serious hypothesis. But I think it's kinda reasonable for most people's actual guesses about the world to be informed more by their broader experience of what orgs tend to be like.

I think it'd be fair to ask "okay, cool, but can you go do some real empirical work here to see how reliably Moral Maze problems tend to come up, and how strong the effect size is?". I think this is maybe a thing worth putting some serious research time into. But in order for that to be useful, there needs to be a real person with some real cruxes, and the data-gathering needs to actually address those cruxes.

So, if you are someone running a company, or hiring, and you could be persuaded of the Recursive Middle Manager Hell hypothesis but want to see some kind of data... I'm interested in what sort of evidence you'd actually find compelling.

  1. ^

    Zvi had a few more major components/hypotheses in his Immoral Mazes model, not covered here, which include:

    •  "superperfect competition and the elimination of slack"
    • "moral mazes being particularly soulsucking and destructive of human value"
    • "motive ambiguity as a tool for upper management to test loyalty."

    I found them all at least somewhat plausible, but harder to argue for, and wanted to keep the post short.

46 comments

I used to be a middle manager at Google, and I observed mazedom manifesting there in two main ways:

  1. If you try to make your organization productive by focusing your time on intensively coaching the people under you to be better at their jobs, this will make your org productive but will not result in your career advancement. This is because nobody at the level above you will be able to tell that the productivity increase is due to your efforts-- your reports' testimony to this effect will not provide appropriate social proof because they are by definition less senior than you. To advance your career you must instead give priority to activities which call you to the attention of those who can provide that social proof. This is called "managing up and across."

  2. In order to ensure that the organization works in consistent, fair, legal, ethical, and legible ways, corporate policy in a multilayer organization tends to put more guardrails around the behavior of middle managers than on those either above or below them. This strips those middle managers of the feeling of agency and autonomy which might otherwise provide a non-ladder-climbing intrinsic motivation to do the work. Thus it strengthens the selection pressure for those whose main motivation is ladder-climbing.


If you have hierarchy in a company, regardless of whether people are "middle managers" per se, there's a tendency for people to come to care about advancing in the hierarchy. It's a natural thing to want to do.

I would take this a step further and say that once maze levels are high enough, it essentially becomes a requirement to care (or at least pretend to care) about advancing in the hierarchy. Instead of advancement being something that some employees might want and others might not want, it becomes almost an axiom within the organization that everyone must strive for advancement at all times. But although advancement can be a natural thing to want, it's certainly not a universal thing to want. And for people like me who aren't strongly motivated by their place in the hierarchy, this can lead to a lot of conflict, stress, and low morale.

When I was a kid (maybe around 10) I learned about the Peter Principle, how everyone in an organization gets promoted to the level of their incompetence. I thought that was one of the saddest things I'd ever heard. Why would everyone try so hard to get promoted to a role they weren't good at? Just for the extra money? I decided that when I started working, I would rather stay in a role I was good at and enjoyed on a day to day basis than get promoted to a managerial role which already sounded awful, even if it meant staying at a lower salary.

Once in the maze, however, I found it a lot harder to stay in my happy, productive role than I was expecting. I constantly felt pressure to want to get promoted. But I secretly didn't want to, because that would mean spending less time doing the actual hands-on work that I liked and more time spent in the maze world interacting with other managers. This led to a lot of tension with my bosses. They couldn't comprehend why anyone wouldn't be excited about getting promoted. Higher level jobs were just better; why couldn't I see that? But to me, they weren't better and I couldn't get them to see my perspective. Ironically, their desire to promote me incentivized me to be less productive than I would have been otherwise - if we had been able to come to an agreement where I could stay in my desired role, I would have been more motivated to work harder without the fear of accidentally getting promoted too quickly.

This was all very frustrating and confusing to me for a long time. Eventually I came across the Moral Mazes sequence and the Gervais principle, which together seemed to explain a lot of what I was experiencing and ultimately gave me the courage to leave that organization.

Anyway, that's my story of working in a maze - happy to discuss further if this was useful or informative.

Goldman Sachs explicitly works on an up-or-out system: if you work in a low-level position and you haven't been promoted after a certain length of time, you're automatically fired.

Yeah having this as a concrete illustrative anecdote seems helpful.

Lack of contact with base-level reality sounds like something that could maybe be mitigated with a kind of sampling, where e.g. some of the people working on bottom-level, object-level matters go and interact a bunch with some of the people working on top-level matters, such as the CEO. This won't solve the problem of the CEO getting good information, because the sampling will inherently be narrow, but it seems like it would make them able to notice when the information streams are broken.

I read about things like cross-level meetings between levels X and X+2, Elon Musk randomly grilling his engineers about the work they’re doing in the moment, or “listening tours” in which the CEO who’s accomplished a corporate takeover will travel to all the company locations and listen to the lower-level employees’ takes on corporate dysfunction and what to do about it. These seem like examples of the “sampling” you’re talking about.

I think a takeaway here is that organizational maze-fulness is entropy: you can keep it low with constant effort, but it's always going to increase by default.

Apparently Jeff Bezos used to do something like this with his regular "question mark emails", which struck me as interesting in the context of an organization as large and complex as Amazon. Here's what it's like from the perspective of one recipient (partial quote, more at the link):

About a month after I started at Amazon I got an email from my boss that was a forward of an email Jeff sent him. The email that Jeff had sent read as follows:

“?”

That was it.

Attached below the “?” was an email from a customer to Jeff telling him he (the customer) takes a long time to find a certain type of screws on Amazon despite Amazon carrying the product.

A “question mark email” from Jeff is a known phenomenon inside Amazon & there’s even an internal wiki on how to handle it but that’s a story for another time. In a nutshell, Jeff's email is public and customers send him emails with suggestions, complaints, and praise all the time. While all emails Jeff receives get a response, he does not personally forward all of them to execs with a “?”. It means he thinks this is very important.

It was astonishing to me that Jeff picked that one seemingly trivial issue and a very small category of products (screws) to personally zoom in on. ...

Related: https://en.wikipedia.org/wiki/Gemba

For what it's worth, I think a naïve reading of this post would imply that moral mazes are more common than my experience indicates.

I've been in middle management at a few places, and in general people just do reasonable things because they are reasonable people, and they aren't ruthlessly optimizing enough to be super political even if that's the theoretical equilibrium of the game they are playing.[1]

 

  1. ^

    This obviously doesn't mean that they are ruthlessly optimizing for the company's true goals though. They are just kind of casually doing things they think are good for the business, because playing politics is too much work.

Curious for more details on the size of the companies, how many layers of management there were and what the managers' selection process was. 

I spent about a decade at a company that grew from 3,000 to 10,000 people; I would guess the layers of management were roughly the logarithm in base 7 of the number of people. Manager selection was honestly kind of a disorganized process, but it was basically: impress your direct manager enough that they suggest you for management, then impress your division manager enough that they sign off on this suggestion.

I'm currently somewhere much smaller, I report to the top layer and have two layers below me. Process is roughly the same.

I realized that I should have said that I found your Spotify example the most compelling: the problems I see/saw are less "manager screws over the business to personally advance" and more "helping the business would require the manager to take a personal hit, and they didn't want to do that."

Nod. I maybe want to distinguish between "the organization goodharts itself as it grows, via a recursive hiring/promoting/enculturation mechanism that is stronger than its size/bureaucracy would naively predict", vs "the organization is increasingly run by sociopaths" (via that same mechanism). It sounds like you're saying your experience didn't really match either hypothesis, but checking in about the distinction.

Yeah that's correct on both counts (that does seem like an important distinction, and neither really match my experience, though the former is more similar).

There are some mainstream-ish ideas out there like Servant Leadership or Agile that try to work around this problem by dis-entangling power and prestige. If the lower level managers are organizationally "in charge" of what actually gets done (leaving the higher levels just in charge of who to hire/fire), that breaks up some of the feedback.

Another thing to do is ensure that Management is not the only (and hopefully not even the best) way to gain prestige in the company. Include the top widget makers in the high level corporate meetings. They might not care about strategic direction, but if the CEO's golf foursome includes the senior production employees in the rotation alongside upper management (and the compensation packages are similar between them), that can help.


If you try to introduce Agile in a company that is already a deep Maze, expect to get it redefined to a version that has only superficial similarity with the original idea, and is Maze-compatible.

Basically, why most developers hate Scrum. Read the Scrum Guide, then underline the parts that your company does differently: Everyday meetings? Yes. Meetings under 5 minutes? No. Sprints? Yes, but they are two weeks instead of the recommended three. Retrospective? No. Scrum Master? There is a guy called that, but actually he is a manager. Product Owner? There is a guy called that, but actually he is a manager. Definition of Done? Not really, but there are JIRA tickets. Autonomy? Not really. Sustainable pace? Haha, your velocity is constantly measured, and it better keep increasing.

If you look at that list, it can be summarized as: remove feedback, increase paperwork, introduce metrics that can be Goodharted. While calling it Agile, so you can check another item in the list of buzzwords.

The thing I don't get is - if that's right, why is stuff so good? I feel like my life is generally enriched by high-quality products made by big corporations. Not only is the stuff good, but it's improving over time - surely that can't just be because of inertia, right?

Examples of good stuff made by big corporations where I think quality has improved over time:

  • laptops
  • smartphones
  • pens
  • jackets
  • snacks (they are at least good at being delicious)
  • beverages (e.g. kombucha, fizzy water)
  • medicines, I think?

Counterpoint to “the stuff is good and it’s improving all the time”. (More.)

Are those things that good? I don't feel like I notice a huge quality-of-life difference from the pens I used ten years ago versus the pens I use now. Same with laptops and smartphones (although I care unusually little about that kind of thing, so maybe I'm just not tracking it). Medicines have definitely improved, although it seems worth noting that practically everyone I know has some terrible health problem they can't fix, and we all still die.

I feel like pushing the envelope on feature improvements is way easier than pushing the envelope on fundamental progress, and progress on the former seems compatible, to me, with pretty broken institutions. In some respects, small feature improvements are what you'd expect from middle manager hell, kind of like the lowest common denominator of a legible signal that you're doing something. It's true that these companies probably wouldn't exist if they were all-around terrible. But imo it's more that they become pretty myopic and lame relative to what they could be. I think academia has related problems, too.

To be honest I actually do use the same pens as I used 10 years ago. Laptops have faster processors at least, and I can now do more stuff with them than I used to be able to. I don't have a terrible health problem I can't fix and haven't died yet.

I totally believe that we're doing way worse than we could be and management practices are somehow to blame, just because the world isn't optimal and management is high-leverage. But in this model, I don't understand how you even get sustained improvements.

I agree that's confusing. Some thoughts. 

  1. Maybe the whole moral mazes theory is false and this is just BS.
  2. When you have a reasonable feedback loop, the costs imposed here add up to a "significant diminishing marginal return", but maybe don't actually turn negative. "Big organizations have diminishing returns to more labor" was already part of my model before Moral Mazes; Moral Mazes just happens to be one of the mechanisms.
  3. Hypothesis 1: innovation mostly happens in startups, which eventually get bought or copied by big corporations. (I think this hypothesis is actually false in many ways – my understanding is that serious R&D-type innovation tends to happen in huge companies like IBM, Google, etc., and startups do something more like "combining obvious ideas in simple ways." But it still seems at least somewhat relevant.)
    1. evidence for Hypothesis 1: DeepMind was originally created as a small independent lab that was later bought by Google.
    2. evidence against: Google seems to have other research departments doing at least somewhat comparable research (but I don't actually know offhand where Google Brain came from – maybe it was also a small lab that got bought?)
  4. Hypothesis 2: The innovation we see in big companies comes from R&D departments that are somehow particularly-sheltered from goodhart dynamics. (Maybe a big company has lots of random pockets of organization within it, which vary in maziness, there's just some selection effects on where innovation happens)
  5. My understanding is that Amazon in particular has a fairly parallelized structure, which seems less vulnerable to hierarchical-Goodhart dynamics. Someone in the comments here noted that Walmart might be reasonably maze-resistant because "lots of the employees work in a place where the object level work is right in front of them", which also seems reasonably parallelized. (Amazon and Walmart are the two largest companies by employee count.)
  6. Some progress comes from incremental engineering. Some comes from deeper research. When I think about my day-to-day work at LessWrong, there is just a lot of obvious stuff to work on / improve.
  7. Experiment to run: check the highest-rated products (on Amazon? On some kind of consumer reports? On Wirecutter?). See which companies produced the best products, and how big those companies are. (I think operationalizing the prediction here is a bit tricky; it requires getting good data and dealing with some confounders. Will think on it.)

Google Brain was developed as part of X (Google's "moonshot factory"), which is their way of trying to create startups/startup culture within a large corporation. So was Waymo. 

One of the earliest records of a hierarchical organization comes from the Bible (Exodus 18). Basically, Moses starts out completely "in touch with reality," judging all disputes among the Israelites from minor to severe, from dawn until dusk. His father in law, Jethro, notices that he is getting burnt out, so he gives him some advice on dividing up the load:

You will surely wear yourself out, as well as these people who are with you, because the task is too heavy for you. You cannot do it alone, by yourself. Now listen to my voice—I will give you advice.... You should seek out capable men out of all the people—men who fear God, men of truth, who hate bribery. Appoint them to be rulers over thousands, hundreds, fifties and tens. Let them judge the people all the time. Then let every major case be brought to you, but every minor case they can judge for themselves. Make it easier for yourself, as they bear the burden with you.

It seems that in a system like this, all levels of the managerial (judicial) hierarchy stay in touch with reality. The only difference between management levels is that deeper levels require deeper wisdom and greater competence at assessing decisions at the "widget level" (or at least greater willingness to accept responsibility for bad decisions). I wonder if a similar strategy could help mitigate some of the failures you pointed out.

Relatedly, in deep learning, ResNets use linear skip connections to expose otherwise deeply hidden layers to the gradient signal (and to the input features) more directly. It tends to make training more stable and faster to converge while still taking advantage of the computational power of a hierarchical model. Obviously, this won't prevent Goodharting in an RL system, but I would say that it does help keep models more internally cooperative.
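To make the analogy concrete, here is a minimal sketch of a residual block in plain NumPy. The shapes and the tiny weight scale are arbitrary choices for the demo, not taken from any particular paper or framework.

```python
# Minimal residual (skip) connection sketch in NumPy.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    """One hidden layer: linear map followed by ReLU."""
    return np.maximum(0.0, x @ w)

def plain_block(x, w1, w2):
    # Deep path only: the input must pass through every transform,
    # so early layers sit far from the loss signal.
    return layer(layer(x, w1), w2)

def residual_block(x, w1, w2):
    # Skip connection: add the input back onto the deep path's output,
    # giving gradients (and input features) a short route through.
    return x + plain_block(x, w1, w2)

x = rng.normal(size=(4, 8))
w1 = 0.01 * rng.normal(size=(8, 8))
w2 = 0.01 * rng.normal(size=(8, 8))

# With near-zero weights the residual block starts out close to the
# identity function: each block only has to learn a perturbation of
# "do nothing", which is part of why deep residual stacks train stably.
print(np.abs(residual_block(x, w1, w2) - x).max())
```

The organizational analogy would be a reporting structure where each management layer passes the object-level signal through unchanged by default, and only adds its own transformation on top.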

Interesting (although I do think "judge" is a fairly different job than "manager", so I'd expect fairly different dynamics).

Curated. I continue to find more concrete examples of this helpful, and more walkthroughs of how the maziness levels rise, and this post has both.

I briefly did some junior consulting work for a large organization with multiple departments, and made a few errors due to the (natural but wrong) assumption that surely everyone at the large organization was working towards the same overarching goal. Instead, departments, or at least department heads, were in intense competition with each other. So e.g. pointing out a flaw in a proposal by my boss / client, rather than getting interpreted as helpful truth-seeking, was instead seen as backstabbing. Nowadays I understand that better, but I originally found this to be a very confusing experience.

Note also the destruction (or maybe just lack; one can't destroy something that never existed) of the ability to pursue a mission. As individuals are more distant from decisions, and less able to actually optimize behaviors toward an outcome, more waste occurs in the form of misaligned beliefs and behaviors.

This is VERY visible in companies which have grown rapidly, especially those which had great customer service or products when they were smaller, and now seem to be more generic and profit-optimized.  Also (perhaps especially) governments.  Both local and national governments are wasteful and actively hostile to their constituents, despite what the elected people claim, and even despite what they legitimately want.

A personal anecdote. Many, many moons ago I started my research career at a large multinational organisation in a profitable, steady business. I enjoyed the job, the perks were nice, I did the work and did well in the system. Some years later my group was asked to take a training course run by an external organisation. We were set a scenario: “Imagine your company only has money for 6 months. What are you going to do about it?” We, cosseted in our big-company mindset, thought the question hilarious and ludicrous.

Fast forward a number of years: the company closed our site down and I went off and joined a start-up. Very soon we all found ourselves in exactly the scenario depicted in the training exercise. We managed to survive. I’ve worked in small/smallish organisations ever since. There have been ups and downs, but on the whole I wouldn’t have changed anything.

This is perhaps slightly tangential, though likely consequential to the Middle Manager Hell the OP describes. The big-company environment made it easy for us to be complacent and comfortable, and hard for us to follow up the high-risk/high-profit ideas that might have made a big difference to the bottom line.

This was a long while ago and since then at least some big companies have tried various initiatives to change  this kind of mindset. So perhaps things have changed in some large multinationals. Can anyone else comment?

I compressed this article for myself while reading it, by copying bits of text and highlighting parts with colors, and I uploaded screenshots of the result in case it's useful to anyone.

https://imgur.com/a/QJEtrsF 

Neat. That's an interesting technique, thanks for sharing.

Ben:

The "levels of mazedom" is an interesting insight. The cleanest examples I can see are in politics:

Level 1: "Policy X is good. I will try and persuade people to implement policy X, or get elected to do it."

Level 2: "I want to get elected. Policy Y is seen as good. I will advocate policy Y and implement it."

Level 3: "I want to get elected. My supporters don't actually think that policy Z is good or even possible, but advocating for it sends a signal that will help me win." [eg. '... and Mexico will pay for it.']

Level 4: "I want to get elected. I will set a theme - a feeling for my campaign. I will avoid specifics of actual policies to avoid alienating people who might disagree with them."

These seem more like (good) examples of Simulacrum Levels rather than mazedom levels.

Ben:

You are right, probably closer to simulacrum. Although the two do overlap.

When the number of layers grows, the only thing that really works is metrics that cannot be Goodharted. Whenever those metrics exist, money becomes a perfectly good expression of success.

It might work to completely prohibit more than one layer of middle management. Instead, when middle manager Bob wants more people, he and his boss Alice come up with a contract that can't be gamed too much. Alice spins out Bob's org subtree into a new organization, and then it becomes Bob's job to buy the service from the new org as necessary. Alice also publishes the contract, so that entrepreneurs can swoop in and offer a better/cheaper service.

This sounds similar to Amazon's API mandate:

1. All teams will henceforth expose their data and functionality through service interfaces.
2. Teams must communicate with each other through these interfaces.
3. There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
4. It doesn’t matter what technology they use. HTTP, Corba, Pubsub, custom protocols — doesn’t matter.
5. All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
6. Anyone who doesn’t do this will be fired.
7. Thank you; have a nice day!
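For concreteness, here is a hedged sketch of what rules 1–3 might look like at the smallest possible scale, using only the Python standard library. The `/widgets/count` endpoint, its payload, and the `WIDGET_COUNT` state are invented for illustration.

```python
# Minimal "service interface" sketch: one team's state is reachable
# only over the network, never via a shared data store.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

WIDGET_COUNT = 42  # the team's private state; no direct reads allowed

class WidgetAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/widgets/count":
            body = json.dumps({"count": WIDGET_COUNT}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), WidgetAPI)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another team consumes the data the only allowed way: a network call.
url = f"http://127.0.0.1:{server.server_port}/widgets/count"
reply = json.loads(urlopen(url).read())
print(reply["count"])
server.shutdown()
```

The organizational point of rule 5 is that an interface designed to be externalizable can later be sold or competed against, which is what keeps the internal "contract" honest.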

"Do stuff that seems legibly valuable" becomes the main currency, rather than "do stuff that is actually valuable."

 

In my experience, these are aligned quite often, and a good organization/team/manager's job is keeping them aligned. This involves lots of culture-building, vigilance around goodharting, and recurring check-ins and reevaluations to make sure the layer under you is properly aligned. Some of the most effective things I've noticed are rituals like OKR planning and wide-review councils, and having a performance evaluation culture that tries to uphold objective standards.

And of course, the level after this you describe, where people are basically pretending to work, is something effective organizations try to weed out ruthlessly. I'm sure it exists, but competitive pressures are against it persisting for long.

My direct experience is as an increasingly-senior software engineer at Google. It leads me to be much more optimistic about corporations' abilities to be effective than this article. I truly believe that things have been set up such that what gets me and my teammates good ratings or promotions is actually valuable.

I just added this to the bottom of the post:

A thing on my mind is, I expect a lot of people to have taken at least a brief look at these arguments, and been like "I dunno, maybe, but scaling organizations still seems really useful/important, and I don't know that I buy the effects here are strong enough to outweigh that."

And... that's super fair! The arguments here are pretty abstract and handwavy. I think the arguments here are good enough to promote this as a serious hypothesis. But I think it's kinda reasonable for most people's actual guesses about the world to be informed more by their broader experience of what orgs tend to be like.

I think it'd be fair to ask "okay, cool, but can you go do some real empirical work here to see how reliably Moral Maze problems tend to come up, and how strong the effect size is?". I think this is maybe a thing worth putting some serious research time on. But in order for that to be useful, there needs to be a real person with some real cruxes, and the data-gathering needs to actually address those cruxes.

So, if you are someone running a company, or hiring, and you could be persuaded of the Recursive Middle Manager Hell hypothesis but want to see some kind of data... I'm interested in what sort of evidence you'd actually find compelling.

I have listened to this essay about 3 times and I imagine I might do so again. Has been a valuable addition to my thinking about whether people have contact with reality and what their social goals might be. 

Upvoted. Though as someone who has been in the corporate world for close to a decade, this is probably one of the rare LW posts that I didn't learn anything new from. And because every point is so absolutely true and extremely common in my experience, when reading the post I was just wondering the whole time how this is even news.

The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

If it does successfully pivot, it'll probably be executed through a small department that works more independently, in spite of company culture.

What are the benefits of more layers in the hierarchy?

You get to hire more people and do more stuff (without people having no idea what to work on because there was no one with time to tell them or train them)

Typos: (feel free to delete this comment)

  • "The third generation is takes it a step further." -> generation takes it
  • "some kind of phases" -> kinds of?
  • "Say you're a CEO, or otherwise in company leadership trying to ensure your company can communicate clearly, focus on producing object-level value, etc. It's much easier to stop the culture from progressing down this path" -> [the "path" refers to increasing maze levels, but with the preceding sentence might instead refer to the object level]
  • "There's many more details here" -> There are
  • "I think the current phase of most" -> phases of?
  • "I'll maybe end by summarizing the highlights Protecting Large Projects" -> I'll end; the highlights from