A causal chain, of course.

There’s a very common thing that humans do: a person makes an observation about something they dislike, so they make an effort to change that thing. Sometimes it works, and sometimes it doesn’t. If it doesn’t work, there can be a variety of reasons – maybe the thing is very difficult to change, maybe the person lacks the specific skills to change it, maybe it depends on the behavior of other people and the person doesn’t succeed in convincing them to act differently. But there’s also one failure mode which, while overlapping with the previous ones, is worth highlighting: imagine the thing the person dislikes is the outcome of a reasonably complex process. The person observes primarily this outcome, but is partially or fully ignorant of the underlying process that causes it. And they now desperately want the outcome to be different. In such a situation they are practically doomed to fail – in all likelihood, their attempts to change the outcome will not succeed, and even if they do, the underlying cause is still present and will keep pushing toward the undesired outcome.

 

A decent image for applying force to the wrong end of the causal chain is trying to pull a screwed-on cap straight off the top of a bottle. The cap moving up is indeed the outcome you’re hoping for. But the (initially non-obvious) way to get there is not to apply force directly in the direction of the intended outcome, but in some other direction – in this case a rotation – that happens to lead to the outcome.

Three Examples

Productivity in a Company

A software company I worked for once struggled with a slow development cycle, chronically missed deadlines, and generally shipping things too slowly. The leadership’s primary way of addressing this was to repeatedly tell the workforce to “work faster, be more productive, ship things more quickly”.

In principle, this approach can work, and to some degree it probably did speed things up. It just requires that the people you’re pushing have enough agency, willingness and understanding to go a step further and travel down the causal chain, to figure out what actually needs to happen in order to achieve the desired outcome. But if middle management just forwards the demand to “ship things more quickly” as is, and the employees below them don’t have enough ownership to transform that demand into something more useful, then probably nothing good will happen. The changed incentives might cause workers to burn themselves out, to cut corners that really shouldn’t be cut, to neglect safety or test coverage, to set lower standards for documentation or code quality – aspects that are important for stable long-term success, but take time to get right.

To name one very concrete example of the suboptimal consequences this had: the company had sent me a new laptop to replace my old one, which would have boosted my productivity quite a bit. But setting it up would have taken a full work day or two. The “we need to be faster” situation meant I constantly had more pressing things to work on, so the new, faster laptop sat at the side of my desk, unused, for half a year. Needless to say, on top of all that, this time was also highly stressful for me and played a big role in my ultimately leaving the company.

Software development, particularly when multiple interdependent teams are involved, is a complex process. The “just ship things more quickly” view, however, seems to naively suggest that the problem is simply that workers take too long to press the “ship” button.

What would have been a better approach? It’s of course easy to armchair-philosophize my way to a supposedly better solution now. And it’s also a bit of a cop-out to make the meta comment that “you need to understand the underlying causal web that causes the company’s low velocity”. However, in cases like this one, I think one simple improvement is to communicate with more nuance: make clear that it’s not (necessarily) about just “working faster”, but rather ask everyone to keep their eyes open for causes of inefficiency, and to question any processes and rituals from the past that may have outlived their usefulness.

And it might even be the case that taking more time for certain things, such as careful planning and prioritization, ends up saving a lot of time down the line.

Diversity of a Community

Some communities struggle with a lack of diversity. The rationality and effective altruism communities are two examples, but this is certainly a pretty universal issue: whether a community revolves around physical activities, spirituality, expertise, doing good or anything else, the community’s focus as well as its general vibe naturally attract different types of people, and this almost inevitably has some systematic effect on the attributes that relate to “diversity”, such as gender, ethnicity or educational background. If these effects are very strong, you end up with a community that, for instance, consists mostly of nerdy young men.

So let’s say you have identified this as a problem, maybe because it limits not only the diversity of people but also the diversity of views, and hence makes your community vulnerable to overlooking important problems, arguments and considerations. What can you do? The simplest approach would be to shout from the rooftops “we need more diversity!”. But if you’re unlucky, this could e.g. cause people in the community to bombard “diverse” individuals with attention and to put pressure on them to stick around – which might just have the opposite effect of what you wanted. Becoming part of a community is a process that involves an interplay of many different factors, so addressing it close to the outcome is not a very promising approach.

To come up with some better ideas for how to approach this, let’s look at an example causal diagram of what may go into whether any given person will join a community:

[Figure: causal diagram of the factors leading a single person to join a new community]
One possible causal diagram of the process of a single person joining a new community. As a general heuristic, points higher up in the chain are more susceptible to “direct attacks”. E.g. it’s pretty straightforward to make a community easier to find, to adjust one’s outreach, or to ask your community members to bring friends to meetups, and there’s little risk of accidentally achieving the opposite of what you want. For elements further down in the chain, such as becoming part of a community, applying force directly may often backfire. Additionally, breaking the process down like this can also help identify current bottlenecks and issues.
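To make the bottleneck-hunting idea concrete, here is a minimal sketch in Python. All stage names and pass rates are invented for illustration (they come neither from the diagram above nor from any real community); the point is just that modeling the process as a chain of stages lets you ask where people actually drop out, instead of pushing on the final outcome.

```python
# Minimal funnel model of the "joining a community" causal chain.
# All stage names and pass rates below are made up for illustration.

stages = [
    # (stage, assumed probability of passing on to the next stage)
    ("hears about the community", 0.30),
    ("shows up to a first event", 0.40),
    ("feels welcome, has good conversations", 0.50),
    ("comes back a second time", 0.60),
    ("becomes a regular member", 0.80),
]

def funnel_report(stages, population=1000):
    """Print how many of `population` people survive each stage."""
    remaining = population
    for name, pass_rate in stages:
        remaining *= pass_rate
        print(f"{name:40s} pass rate {pass_rate:4.0%} -> {remaining:7.1f} remain")
    return remaining

members = funnel_report(stages)

# The most promising point of attack is often the stage with the lowest
# pass rate, not the final outcome itself.
name, rate = min(stages, key=lambda s: s[1])
print(f"current bottleneck: {name!r} (pass rate {rate:.0%})")
```

With these made-up numbers, the bottleneck is “hears about the community”, which would point toward working on findability and outreach rather than pushing directly on the “becomes a member” end of the chain.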

 

One better candidate for improving diversity is to look at the opposite end of the causal chain(s), where we find many promising opportunities for improvement. A somewhat obvious example is outreach: maybe the community should adjust its outreach strategy and focus more on “more diverse” audiences? This would probably increase diversity somewhat, but it likely also makes your outreach much less effective, because the base rate of people interested in your community may be much lower amongst the new audience you decided to focus on. So while this particular approach may indeed be helpful for diversity, it comes at a cost, and maybe one that you don’t actually have to pay.

Clearly there are many points of attack, and it requires a thorough understanding of the underlying processes to find an intervention that both works and has few or no disadvantages. For any given community, the concrete bottlenecks may of course vary. One approach that seems promising to me, based on my experiences in the EA & rationality communities, is to work on welcoming-ness: dial up friendliness, openness and general interest in new people, and make an effort to understand what may be off-putting to newcomers. To name one example, establish a culture of not talking over others – make sure shy new people get many chances to speak, share their views and ask questions, without having to be afraid of others carelessly interrupting them.

Interactions in a Zoom Call

I once attended a talk on Zoom with several hundred participants. The session included a few short exercises, in which people were asked to come up with answers to certain prompts and write them into the Zoom chat. After the first such exercise, the host asked “Does anyone want to share their findings with the group? Feel free to just unmute yourself”, and when nobody did after 10 seconds of silence, the host gave up and continued with the talk.

After the second such exercise, the host decided to push a bit harder and said “and it would be really great if somebody could share their answer this time, so please don’t be shy”, which still didn’t work out (...well, to be perfectly honest, I’m not sure anymore whether it worked out; at that point, after another few seconds of silence, I thought “oh, this would be a good example for this post I’m planning to write”, so I wrote that down instead of paying attention to Zoom).

In all likelihood, the reason nobody spoke up the first time was not that they were unaware the host would have liked them to. Of course this is still a factor, and pointing out that you really would like somebody to share their answer might indeed move someone in the audience just over the threshold of speaking up. But this is unlikely to be the most effective strategy. Let’s look at some of the reasons why people may be hesitant to unmute themselves in such a situation:

  1. Too shy to speak in front of a few hundred strangers
  2. Scared of the awkwardness that ensues when two people start speaking at the same time
  3. Don’t think their answer is particularly insightful or share-worthy
  4. General laziness and/or the bystander effect; in such a large call, surely someone else will be in a better position to answer

If you take these causes into account, you can easily find some other approaches that might work better:

  • Maybe ask one of the people who posted their response in the Zoom chat directly whether they’d like to quickly share it verbally; this would resolve points 2-4 above, even though there’s some risk of putting a person on the spot who really doesn’t feel comfortable speaking
  • Maybe you know some of the people in the audience personally and know they probably wouldn’t mind speaking, so you can use them as a fallback in case nobody else takes the initiative

What Causes This Pattern?

I can think of a few reasons why this happens pretty often:

  • Lack of awareness: people don’t see the causal complexity behind their observation, and genuinely think that applying force directly to the observed outcome is what needs to happen in order to change it.
  • Misaligned incentives cause people to prioritize signaling over actually changing outcomes: when you want to signal that you’re doing something to change a certain outcome, it’s likely helpful for your effort to be as close to the outcome as possible, so as to make the signal more salient.
  • While it’s somewhat of a mixture of the previous two points, there’s also some potential for miscommunication. E.g. in the example of the unproductive company, maybe the CEO, when asking the workforce to “ship things more quickly”, is perfectly aware of the complex causality and simply assumes that all the employees are clear about this as well – whereas some employees just end up getting the message “hurry up”.

Isn’t this the Same as “Treating Symptoms instead of Root Causes”?

I wondered about this for a while, but the two concepts still seem pretty distinct to me. Both “applying force to the wrong end of a causal chain” and “focusing too much on treating the symptoms of something rather than the root cause” suggest moving up the causal chain, closer to the cause of some outcome. However, there are a few crucial differences and general reasons to differentiate between the two:

  • “Treating symptoms vs root causes” is a concept many people already associate with specific areas such as medicine, politics or development aid, and they may not recognize it easily in very different contexts.
  • Furthermore, this existing concept appears to be loaded with the general understanding that “treating symptoms” is the wrong thing to do and should be avoided. But I think in many cases treating symptoms can be perfectly fine, as long as it is effective. If the symptoms are what you ultimately care about, then it’s an empirical question whether the best way to get rid of these symptoms is to address them directly, or instead attack some root cause.
  • “Treating symptoms” conveys this image of, well, actually reducing the symptoms. But applying force to the wrong end of a causal chain can deviate from that: it can have no positive effect at all, and sometimes even negatively affect the outcomes you care about. If you shout from the rooftops “Increase diversity!”, and this causes women at rationality meetups to suddenly be swarmed by men who try to accommodate them, this may actually make it less rather than more likely for them to stick around. So this is not even a case of “treating symptoms” – it’s a case of force being applied in a misguided way, causing it to push in the wrong direction.

In short, I’d say that the symptoms-vs-root-causes distinction is about (supposed[1]) shortsightedness, whereas the idea of applying force to the wrong end of the causal chain is more about ineffectiveness, and sometimes even about strategies backfiring completely.

So What Can We Do?

This post’s intention is mostly to point at this relative blind spot[2]: being aware of the concept is most likely helpful in itself, because it lets us recognize the pattern when it occurs. That alone enables us to stop wasting energy on strategies that are clearly far from optimal, and alleviates some of the frustration when our efforts don’t have the intended results.

A more specific case is that of groups or communities of people. Once you recognize some undesirable trait in a community, it’s generally pretty likely that this trait is the result of a complex system of interactions between humans. This makes it particularly important not only to gain an understanding of the causal relationships behind the observed outcomes, but also to share your insights with others in that same community, because you likely can’t solve the problem on your own.

There may be cases where the people you talk to are actually in a better position than you to understand the underlying causal web. When this is the case, it can make sense to really just ask them to strive for a certain outcome and leave the implementation details fully up to them. But even then, it can be worth reassuring them that it’s still an open question what the best strategy is, and that you’re not going to judge them based on signaling, but rather on their strategies and outcomes.

Awareness of where we apply our efforts can prevent wasted energy and frustration. Hence, I’d argue, it’s generally worth taking the time to understand the deeper causal links, to communicate our findings to others, and to point out cases of force being applied to the less tractable parts of causal chains.

 

  1. ^

    I sometimes get the feeling that people are pretty quick to dismiss valid strategies as “treating only the symptoms”. E.g. certain EA organizations working in the global poverty sector are sometimes viewed in this light, even though the evidence is pretty mixed on whether more systemic approaches are actually more effective in the long term. Additionally, I personally find it a bit distasteful to frame tragedies in our world that involve a lot of acute suffering as “merely symptoms” that aren’t worth our attention.

  2. ^

    I doubt anyone will read this and say “Oh I never thought of this, what a revelation!” – but since I started to think more about the issue, I definitely see it in the world around me much more than before, including in cases I previously wouldn’t have recognized as suboptimal.
