Previously: Short Termism and Quotes from Moral Mazes

Epistemic Status: Long term

My list of quotes from Moral Mazes has a section of twenty devoted to short-term thinking. It fits with, and gives internal gears and color to, my previous understanding of the problem of short termism.

Much of what we think of as a Short Term vs. Long Term issue is actually an adversarial Goodhart’s Law problem, or a legibility vs. illegibility problem, at the object level, that then becomes a short vs. long term issue at higher levels. When a manager milks a plant (see quotes 72, 73, 78 and 79) they are not primarily trading long term assets for short term assets. Rather, they are trading unmeasured assets for measured assets (see 67 and 69).
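A minimal numeric sketch of that trade (the numbers are my own, purely illustrative, not from the post or the book): a manager who skips maintenance reports higher measured profit over the horizon they are judged on, while the unmeasured asset quietly degrades.

```python
# Toy model of "milking the plant": trading an unmeasured asset
# (plant condition, kept up by maintenance) for a measured one
# (reported quarterly profit). All numbers are illustrative assumptions.

def run_quarters(skimp_maintenance, quarters=8):
    """Return (total_reported_profit, final_plant_condition)."""
    plant_condition = 1.0   # unmeasured: fraction of full working order
    total_profit = 0.0      # measured: what the manager is judged on
    for _ in range(quarters):
        revenue = 100 * plant_condition
        maintenance = 0 if skimp_maintenance else 20
        total_profit += revenue - maintenance
        # skipping maintenance degrades the plant faster each quarter
        plant_condition *= 0.97 if skimp_maintenance else 0.99
    return total_profit, plant_condition

milker_profit, milker_plant = run_quarters(skimp_maintenance=True)
steward_profit, steward_plant = run_quarters(skimp_maintenance=False)
# Over the judged horizon, the milker reports more profit
# while leaving behind a plant in worse condition.
```

The point is not the specific numbers but the shape: on any evaluation that only sees `total_profit`, the milker wins, and the cost shows up only in the variable nobody measures.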

This is why you can have companies like Amazon, Uber or Tesla get high valuations. They hit legible short-term metrics that represent long-term growth. A start-up gets rewarded for their own sort of legible short-term indicators of progress and success, and of the quality of team and therefore potential for future success. Whereas other companies, that are not based on growth, report huge pressure to hit profit numbers.

The overwhelming object-level pressure towards legible short-term success, whatever that means in context, comes from being judged in the short term on one's success, and having that judgment be more important than object-level long-term success.

The easiest way for this to be true is not to care about object-level long term success. If you’re gone before the long term, and no one traces the long term back to you, why do you care what happens? That is exactly the situation the managers face in Moral Mazes (see 64, 65, 70, 71, 74 and 83, and for a non-manager very clean example see 77). In particular:

74. We’re judged on the short-term because everybody changes their jobs so frequently.

And:

64. The ideal situation, of course, is to end up in a position where one can fire one’s successors for one’s own previous mistakes.

Almost as good as having a designated scapegoat is to have already sold the company or found employment elsewhere, rendering your problems someone else’s problems.

The other way to not care is for the short-term evaluation of one's success or failure to determine long-term success. If not hitting a short-term number gets you fired, prevents your company from getting acceptable terms on financing, or gets you bought out, then the long term will get neglected. The net present value payoff for looking good, which can then be reinvested, makes looking good seem like by far the best long-term investment around.
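That net-present-value logic can be made concrete with a toy discounting example (all numbers and rates are my own illustrative assumptions): at the high effective discount rate a short-tenured manager faces, hitting the number now beats a genuinely larger long-term payoff, while a patient permanent owner would rank the same two options the other way.

```python
# Toy NPV comparison: "look good now" vs "build long-term value".
# Cashflows and discount rates are illustrative assumptions.

def npv(cashflows, rate):
    """Net present value; cashflows[t] arrives at end of year t+1."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cashflows))

look_good   = [100, 0, 0, 0, 0]   # hit the short-term number now
build_value = [0, 0, 0, 0, 250]   # bigger payoff, but five years out

manager_rate = 0.30  # high effective rate: the manager may be gone soon
owner_rate   = 0.05  # a patient permanent owner

# To the short-horizon manager, looking good dominates:
assert npv(look_good, manager_rate) > npv(build_value, manager_rate)
# To the patient owner, the long-term project dominates:
assert npv(look_good, owner_rate) < npv(build_value, owner_rate)
```

The same pair of projects flips ranking purely as a function of the evaluator's effective discount rate, which is the mechanism the paragraph above describes.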

Thus we have this problem at every level of management except the top. But for the top to actually be the top, it needs to not be answering to the stock market or capital markets, or otherwise care what others think – even without explicit verdicts, this can be as hard to root out as needing the perception of a bright future to attract and keep quality employees and keep up morale. So we almost always have it at the top as well. Each level is distorting things for the level above, and pushing these distorted priorities down to get to the next move in a giant game of adversarial telephone (see section A of quotes for how hierarchy works).

This results in a corporation that acts in various short-term ways, some of which make sense for it, some of which are the result of internal conflicts.

Why isn’t this out-competed? Why don’t the corporations that do less of this drive the ones that do more of it out of the market?

On the level of corporations doing this direct from the top, often these actions are a response to the incentives the corporation faces. In those cases, there is no reason to expect such actions to be out-competed.

In other cases, the incentives of the CEO and top management are twisted but the corporation's incentives are not. One would certainly expect those corporations that avoid this to do better. But these mismatches are the natural consequence of putting someone in charge who does not permanently own the company. Thus, dual-class share structures have become popular to restore skin in the correct game. Some of the lower-down issues can be made less bad by removing the ones at the top, but the problem does not go away, and the sources I have inside major tech companies, including Google, match this model.

There is also the tendency of these dynamics to arise over time. Those who play the power game tend to outperform those who do not play it, barring constant vigilance and a willingness to sacrifice. As those players outperform, they cause other power players to outperform more, because they prefer and favor such other players, and favor rules that favor such players. This is especially powerful for anyone below them in the hierarchy. An infected CEO, who can install their own people, can make it game over on their own, and outside CEOs are brought in often.

Thus, even if the system causes the corporation to underperform, it still spreads, like a meme that infects the host, causing the host to prioritize spreading the meme, while reducing reproductive fitness. The bigger the organization, the harder it is to remain uninfected. Being able to be temporarily less burdened by such issues is one of the big advantages new entrants have.

One could even say that yes, they do get wiped out by this, but it’s not that fast, because it takes a while for this to rise to the level of a primary determining factor in outcomes. And there are bigger things to worry about. It’s short termism, so that isn’t too surprising.

A big pressure that causes these infections is that business is constantly under siege and forced to engage in public relations (see quotes sections L and M) and is constantly facing Asymmetric Justice and the Copenhagen Interpretation of Ethics. This puts tremendous pressure on corporations to tell different stories to different audiences, to avoid creating records, and otherwise engage in the types of behavior that will be comfortable to the infected and uncomfortable to the uninfected.

Another explanation is that those who are infected don’t only reward each other within a corporation. They also do business with and cooperate with the infected elsewhere. Infected people are comfortable with others who are infected, and uncomfortable with those not infected, because if the time comes to play ball, they might refuse. So those who refuse to play by these rules do better at object-level tasks, but face alliances and hostile action from all sides, including capital markets, competitors and government, all of which are, to varying degrees, infected.

I am likely missing additional mechanisms, either because I don’t know about them or forgot to mention them, but I consider what I see here sufficient. I am no longer confused about short termism.

 

 

21 comments

Thanks, this post (plus the previous one) also helped clarify some gears for me about how institutions work and why they seem to suffer particular pathologies.

I kind of like the term 'infected' here, it is nicely compatible with the term 'werewolf' when appropriate (you can be infected with lycanthropy), but I think might work a bit more robustly in more types of conversations.

Yes, I think "infected" is a lot closer. There's a closely related term of ingroup jargon I'm holding onto and not sharing publicly, because I very much don't want to see that particular word I've spoken twisted by knaves to make a trap for fools (or at least I want to hold onto it until I can give explaining it clearly a really good try), but overall "infected by lycanthropy" and "zombie plague" seem like better paradigms than calling whole persons werewolves.

I can see most of these points, but I have to question:

Infected people are comfortable with others who are infected, and uncomfortable with those not infected

This sounds like an unrealistic "honour amongst thieves" kind of thing, or having "infection" as a single coordinated entity, and I don't follow the logic. To use another example, if I was pathologically dishonest, I would prefer doing business with honest people rather than others like me. I'd certainly prefer honest dedicated subordinates to scheming backstabbing ones.

On game theory grounds, your argument doesn't seem to make much sense; maybe there's a psychological version that works?

Zvi:

Definitely no honor among them thieves. They can and do betray each other all the time. But common interests, and ease of understanding. I think it's hard for people like us to get inside this other mindset properly.

Dishonest others will tend to reinforce dishonesty-rewarding norms (and other thieves will tend to do things that make stealing a better idea). They will be easier for other dishonest people to understand, and thus this will yield comparative advantage dealing with them. Most importantly, they will 'play ball.' If you deal with an honest person, they will hold you to standards of honesty, and value being honest and you being honest over what is locally expedient. You don't want that. You definitely don't want that for a coworker.

You prefer an honest underling because you intend to act in an honest way yourself. For a dishonest person, an honest underling won't be loyal or trustworthy, and is likely to refuse to play ball with what you want. They'll want explicit orders, they won't understand what you need, they'll have moral objections, they'll tell the truth when asked by others, and so on. You want obedience and loyalty. Yes, it's frustrating that given the opportunity most managers will stab you in the back, but they think of this as hate the game, not the player. Besides, if they weren't willing to do this, they wouldn't be willing to backstab others, so they wouldn't be a good ally.

Also, if you don't care about object-level accomplishment, then honesty loses a lot of its edge.

In terms of doing business with them, a dishonest person to work with will help you advance your agenda at the expense of your coworkers and corporation (and the public), will be fine with and even help with deceptive practices, and understand your needs better, allowing you to get what you really want while avoiding being explicit. You also get a comparative advantage, since dissimilar actors won't know how to handle the situation.

Comfortable is a term of art, here, of a sort - it means roughly that one is confident this person can exhibit basic competence and understanding, and will do what is expedient, in ways that are unlikely to get you scapegoated. Or that the situation won't lead to same, as you have a plan to avoid this. You're uncomfortable when you worry this person or situation will cause you to become scapegoated.

Do you have evidence for this? I can tell a good story where this is true, but also one where the opposite is true (especially because most "infected" people will see themselves as heroes, not infected).

To pick another example than honesty, we could see flexibility in decision making. Most people want maximum flexibility for themselves, and for their underlings and colleagues to be rigidly obedient. Again, another example where those who possess trait X don't want people around them, to also have X. Dominant people want others to be more submissive, greedy people want others to be more generous, and so on. There are examples going the other way, but I think we need data here.

Zvi:

As Benquo notes I think the detailed anecdotes are good evidence, and it matches my experiences in business and what I know of other business. But of course, no one has successfully 'run a study' of the question, nor would I expect such attempts at such a study to get to the heart of the question effectively.

Agreed there are traits X where people with X tend to want those around them to have less X or contrasting trait Y, the most amusing one (in many important contexts, but in far from all contexts) being chromosomes where they're literally X and Y.

Your examples show things I'm not clear about. People do want sympathetic others around, even for 'bad' traits, and often view those sharing those traits as 'their kind of people,' and often as 'winners.'

I'm not sure if dominant people typically want others to be more dominant or more submissive. Certainly they want specific others to be submissive so they can dominate them, but they tend to feel kinship and friendship with, and form alliances with, other dominants, and generally promote the idea of dominance whether or not they also promote submission, in my experience/observation. I believe they tend to strongly favor people who think that dominants should control submissives, whether that person is dominant or submissive themselves, over those who think everyone should be equal.

Greedy people want to succeed in their greed, so they want those they are directly competing with or asking for things to be generous and less greedy, but this gets murky fast with other relationships. Greedy people tend to extol the virtues of greed at least to their friends and allies, telling them to be more greedy when dealing with others, and see those exhibiting greed as good - see capitalists supporting greedy competition, or traders such as Gordon Gekko, who says literally "Greed is good," but also many others. This should not be confused with wanting to experience generous acts in direct exchanges. Think of it this way: if you were greedy and your rival was generous, who would you want picking between you, a greedy person or a generous person? What if you were generous and your rival greedy?

I hope that helps share my intuitions a bit more?

Agreed there are traits X where people with X tend to want those around them to have less X or contrasting trait Y, the most amusing one (in many important contexts, but in far from all contexts) being chromosomes where they're literally X and Y.

Ha! ^_^

As Benquo notes I think the detailed anecdotes are good evidence, and it matches my experiences in business and what I know of other business. I hope that helps share my intuitions a bit more?

I have a clearer understanding of your reasoning now. And your personal experience, plus the anecdotes, are enough to cross the first two bars - the phenomena you describe certainly exist, and are not extremely rare.

The problem is the next step: how frequent are these phenomena, and how severe are they? Because there are counter-examples and counter-narratives (Benquo even called them "official" narratives). Once we admit they also exist, and are not extremely rare, then we're reaching the limit of what we can get from personal experience and anecdotes (at best we can estimate how prevalent the various behaviours are in our own subcultures).

But of course, no one has successfully 'run a study' of the question, nor would I expect such attempts at such a study to get to the heart of the question effectively.

We can make some predictions from your intuitions (eg people with low empathy will have friends with low empathy, narcissists will hang around with other narcissists, etc...) and measure that as best we can. This won't be proof, but it will be an indication, and will get us partway towards measuring the prevalence of the various behaviours.

Being obedient is not always the same as acting in the interests of the company.

If you are a corrupt cop at the head of a police department, you want people lower down on the chain also be corrupt. You want to be able to buy their allegiance by providing the right incentives.

A boss who wants to hit his short term metrics wants directs who also care about the short term metrics that the boss can control and not directs who optimize for long term success of the company.

More to this: you want underlings who are obedient to you personally (since you're their boss), not rigidly obedient. You might want your underlings' underlings to be rigidly obedient, so that you can better control (and be informed of) things a few layers down, and structural features may favor one or the other of these incentives.

The anecdotes in moral mazes are pretty clear empirical evidence for this. Lots of examples of people with integrity just getting shut out of the process or finding themselves inexorably drawn into conflict with people whose narratives are based on loyalty and situational advantage.

I think the subtext here, where the concepts of "evidence" and "data" are gerrymandered to privilege uniform quantified data of the kind one can make statistical inference from, is pretty unfortunate; it increases the already substantial evidential burden levied on any challenge to the official narrative of how people should be, and implicitly dismisses nonstandardized attributes as superstitions, like ghosts. This is bad epistemology.

The right approach here is to try to generalize the examples (which establish existence even if they don’t establish prevalence) with an analytic model, and test the model against reality to see if you can predict the world you observe with relatively high likelihood.

If your theory is narrowly construed as "people around me have features F", then your own personal experience is evidence for the theory.

But to go beyond that, we really need something closer to "uniform quantified data" (though there is a place for intermediate quality data, like case reports). Personal experience is subject to confirmation bias; much more importantly, any single human has only met a tiny sliver of the human population, most of them in one or two sub-cultures, and most of them in specific circumstances that covers only a tiny part of the full life of the persons met. Quantified data is really at a whole higher level of quality; it starts making judgements about the whole population, judgements we couldn't get in any other way.

it increases the already substantial evidential burden levied on any challenge to the official narrative of how people should be

I agree that there should be a substantial evidential burden for a new theory of human interactions; but there should also be a substantial evidential burden for "official narratives" as well. I find Zvi's thesis (that these kind of "evil like attracts like" are very prevalent) plausible but unproven. I find the opposite thesis (that these behaviour are pretty rare) also plausible but unproven. Sometimes, we just don't have enough evidence... yet.

One thing we can do is take a narrative, and deduce from it something that is measurable, and measure that. That isn't proof or disproof by any means, but it's useful evidence (of course, the order of operations matters: we need to deduce something that wasn't directly used in forming the narrative in the first place).

If your theory is narrowly construed as "people around me have features F", then your own personal experience is evidence for the theory.

You can observe not only whether people have those features, but whether they seem to be doing things that cause other people with those features to be included more centrally in rent-collection coalitions and people without such features to be marginalized.

You can then look and see whether, for instance, there are any widespread depictions of corporate life that don't substantively agree with the Dilbert / Moral Mazes depiction. You can think about stories you've heard from others, and get a sense for how often their experience agrees with it and how often, like Dagon's, it disagrees (and whether there are any regular patterns there).

Something like survey data might be helpful if you designed a fantastically good survey, but you can make inferences without it, as indeed we have to do for nearly all the ways we make judgments to navigate our lives.

I don't think it's an honor or trust thing, but simply a communication ease. The "infection" is memetic, at a world-view level. People who believe they're doing the right thing (or at least the universal thing) by pursuing shorter-term visible successes at the potential expense of longer term positive impact find they're in disagreement in ways they don't understand with those who are sacrificing short-term gains for uncertain long-term goals.

It's not a matter of pathologically dishonest (well, any more than any other value system or religion is dishonest), it's a matter of misalignment in beliefs about what's important, and different judgement about tradeoffs in measurable vs unmeasurable goods.

This seems close but not quite right - Moral Mazes describes actors, who really do have a very different relationship to truth from scribes.

I think "close but not quite right" is likely to be a ceiling for map coverage when talking about complicated individuals in more complicated group activities. The world-modeling and goal-seeking divergence between the infected (including myself, on some topics) and the enlightened (I'll ignore the bystanders for now) is pretty significant.

And, of course, it's a continuum, which shifts across contexts and across time even for an individual, so any generalization will be wrong sometimes. That's true of the actor/scribe dimension as well - I generally think of myself as a scribe in my internal narrative, and in select individual and very-small-group conversations, but an actor in most work and larger social contexts.

LessWrong is an interesting case. Posts and comments are definitely acts, but the goal of the act is improved truth-knowing (and -telling), which is a fun hybrid of the two styles.

To use another example, if I was pathologically dishonest, I would prefer doing business with honest people rather than others like me. I'd certainly prefer honest dedicated subordinates to scheming backstabbing ones.

I'm not sure if the metaphor fits (and this is fictional evidence), but Ocean's Eleven (or whatever number it is now) is about a team of thieves. They all have an eye on the common goal. It would not make sense for them to recruit non-thieves to the team because an "honest, law-abiding person" might turn them in to the police.

When they're doing business (robbing people) they prefer non-thieves, who are easier to rob. But when they're putting together a team for a heist, they prefer thieves. (This seems to make sense game theory wise.)

It sounds to me like you are claiming that infected and not infected are binary or bimodal states. Is that the case?

Zvi:

Ah, I was worried about giving that impression, but wasn't able to find an elegant way to avoid it. I don't think this is right, although it's not that wrong, either - I think there is a large cluster of people/things that are effectively non-infected, although there are important differences, especially in how they interact with the infected and how vulnerable they are to infection. Once you are infected, there are definitely levels, which can be thought of as similar to the simulacra. One might, for example, be fine with and support (be infected by) level-3 behavior that pretends to have some reflection of reality but not be comfortable, yet, with level-4 behavior that does not so pretend.

Maybe this is just nitpicking, but I found this passage confusing:

On the level of corporations doing this direct from the top, often these actions are a response to the incentives the corporation faces. In those cases, there is no reason to expect such actions to be out-competed.
In other cases, the incentives of the CEO and top management are twisted but the corporation’s incentives are not. One would certainly expect those corporations that avoid this to do better. But these mismatches are the natural consequence of putting someone in charge who does not permanently own the company. Thus, dual class share structures becoming popular to restore skin in the correct game.

Is that last sentence in the wrong paragraph?

Before I get to my reading that suggests that change, a couple things that I found confusing that could have caused me to misread it: What does "in charge" mean? The CEO or the owners? How do corporations face incentives, rather than individuals?

I read the first paragraph as being about Wall Street and quarterly reports giving bad incentives to the CEO. Dual-class shares exist to protect founders from outside investors. So doesn't that sentence belong in the first paragraph, not the second...? (It's a little weird to say that outside investors don't have skin in the game. Maybe you mean that money managers investing other people's money don't have skin in the game. Also, it's more that super-voting shares remove control than that they add skin. I think it may be more important that the founders have demonstrated ability to make long-term plans than what their incentives are. Consider Steve Jobs returning to Apple as a hired-gun CEO.)

What is the other case, the case of the second paragraph? When Wall Street or other owners understands the company, but has to install a hired manager? Then, yes, the misalignment of incentives makes it difficult for the board to hire a CEO. But super shares don't seem relevant to that problem.

I think, of the Moral Mazes posts from 2019, this is perhaps the most standalone in terms of communicating a major problem, with enough associated gears to think about.