Successful organizations maximize the formula shown below:
Meaningful Output / Resources Spent
As an organization, you have two levers to improve: increase meaningful output while using the same amount of resources, or maintain the same meaningful output while decreasing the amount of resources spent. When companies hire people, they’re hoping that their meaningful output will increase far more than the increase in cost, and when companies conduct layoffs, they’re hoping that their meaningful output will reduce much less than their resources.
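The two levers can be made concrete with a small sketch. The numbers below are invented for illustration; "meaningful output" stands in for whatever an organization actually measures (features shipped, patients treated, revenue earned).

```python
# Hypothetical illustration of the efficiency ratio described above.
# All figures are made up; only the ratio's behavior matters.

def efficiency(meaningful_output: float, resources_spent: float) -> float:
    """Organizational efficiency = meaningful output / resources spent."""
    return meaningful_output / resources_spent

baseline = efficiency(meaningful_output=100, resources_spent=50)  # 2.0

# Lever 1: increase output while holding resources constant.
lever_1 = efficiency(meaningful_output=120, resources_spent=50)   # 2.4

# Lever 2: hold output constant while cutting resources.
lever_2 = efficiency(meaningful_output=100, resources_spent=40)   # 2.5

# A hire only pays off if output grows faster than cost does:
after_hire = efficiency(meaningful_output=130, resources_spent=60)
assert after_hire > baseline  # 130/60 ≈ 2.17 > 2.0
```

The same check, run in reverse, is the bet behind a layoff: the ratio improves only if output shrinks more slowly than resources do.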
Few things frustrate me more than bureaucratic organizations. They completely butcher the formula above. Within the mess of systems, committees, and processes, the production of meaningful output becomes impossible, and sustaining this complexity requires an immense amount of resources.
The worst part is that they’re extremely static and difficult to change. This is because:
Organizations don’t just naturally become bureaucratic as they grow; they become bureaucratic as a result of deliberate decisions made by leaders who at the time believed they were doing the right thing.
The CIA provides evidence that bureaucracy can be built through distinct actions. Here are a few guidelines from their handbook on how to disrupt activist organizations:
In the abstract, a lot of these actions seem benign!
Yet the CIA used these tactics effectively to prevent groups from achieving their aims.
I think that leadership mistakes that cause bureaucracy to develop come in two flavors:
poorly designed incentive systems
*I’m paraphrasing the work of Scott Alexander and Jason Crawford here
Institutional Review Boards (IRBs) are panels that review the ethics of medical studies. They're a relatively recent invention. Before IRBs, doctors were encouraged to independently study topics of interest, with no guidelines for how those studies should be conducted, and many ethically dubious studies took place. For example, during World War II, researcher James Shannon made tremendous strides in malaria research (for which he was eventually awarded the Presidential Medal for Merit), but his research involved injecting mentally ill patients with malaria.
Eventually, in 1974, after egregious studies like the Tuskegee Syphilis Experiment, IRBs were instituted to ensure studies followed certain ethical standards, like getting informed consent from patients. This worked well until 1998, when a patient died during an asthma study at Johns Hopkins, after which all studies at Johns Hopkins and several other research institutions were shut down. Thousands of studies were ruined, and millions of dollars were wasted.
Nowadays, IRB oversight has become a straitjacket. Scott details his experience:
He summarizes his thoughts on the bureaucratic regulation here:
Initially created to protect patients from being taken advantage of and support doctors in study design, the IRB has now become a monolith, strangling researchers and imposing a significant cost on the medical industry.
How did we get here?
Jason describes the process like this:
It’s a case of poorly designed incentives. The IRB receives no reward for enabling or supporting research, and no recognition when that research directly saves or improves many lives. Instead, it gets all the blame for any harm that comes to patients. This is especially damaging in an era when everyone is lawyering up: mistakes are extremely dangerous for the IRB and the medical industry, so the IRB has an incentive to prevent any mistake from happening, even at the cost of good research.
While the IRB is an extreme case, I think every organization has seen processes and systems installed after a mistake to prevent it from ever happening again. It’s practically standard corporate vernacular to say “This wouldn’t have happened if a better system was in place.” Everyone wants a new system or process.
Tying this back to the formula at the beginning: typically, the new systems and processes consume significant additional resources while only marginally improving the meaningful output the organization produces. In the case of the IRB, it adds billions of dollars and innumerable hours in cost, and harms research velocity, in exchange for saving a small number of lives. I agree with Scott: this is a poorly optimized equation.
Instead, I think a better approach is to have a different mentality around mistakes. I really like the concept of statistical process control (SPC). The goal of SPC is to improve quality by reducing variance. It understands certain outcomes as simple variance, and the goal of the few processes in place is to reduce variance while improving key organizational metrics.
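The core SPC move can be sketched in a few lines: establish control limits from a period you believe was in control, then flag only the points that fall outside them. This is a minimal sketch of the classic Shewhart rule (mean ± 3 standard deviations); the defect counts are invented for illustration.

```python
# Minimal sketch of the SPC mindset: most fluctuation is ordinary
# variance, and only points outside the control limits deserve a
# special-cause investigation. Data and limits are hypothetical.
from statistics import mean, stdev

def control_limits(baseline):
    """Shewhart limits (mean ± 3 sigma) from an in-control baseline."""
    center = mean(baseline)
    sigma = stdev(baseline)
    return center - 3 * sigma, center + 3 * sigma

def flag_special_causes(baseline, new_points):
    """Return new points outside the limits -- 'special causes' worth
    investigating. Everything inside the limits is treated as normal
    variance, not a mistake demanding a new review process."""
    lower, upper = control_limits(baseline)
    return [x for x in new_points if x < lower or x > upper]

# e.g. weekly defect counts (invented numbers)
baseline_weeks = [4, 5, 3, 6, 4, 5, 4]
recent_weeks = [5, 22, 3]
print(flag_special_causes(baseline_weeks, recent_weeks))  # [22]
```

The point of the sketch is the asymmetry: the weeks with 3 or 5 defects trigger nothing, while only the genuine outlier prompts a reaction. A bureaucratic organization, by contrast, treats every bad week as grounds for a new process.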
I think the focus on metrics and outcomes helps fight back against the tendency to add reviews and approval processes everywhere, only adding them in situations where they directly and meaningfully improve outputs (i.e. we’re doing this because it helps us achieve our purpose, instead of we’re doing this because we’re afraid of making mistakes). I know it’s a subtle distinction, but I think it’s a meaningful one. It requires teams to be really, really intentional about what their goals are and ruthlessly stick to them. It takes effort, but I think it’s worth it.
loose coupling of systems with desired outcomes
Everyone has been in meetings where the purpose is unclear, the meeting leader waxes poetic, and you leave feeling dumber and less motivated than when you started. This is especially true of recurring meetings; they are rarely an effective use of time. This is an example of a system that’s not aligned with a desired outcome. If you run a meeting that returns no clear value, you’ve invested organizational resources without increasing meaningful output. That’s the opposite of what you should be doing. Cardinal sin!
Here are a couple more examples:
Sometimes, the answer to fixing a poorly designed system is to just tweak the existing system. In the example of meetings, perhaps the meeting is necessary to get everyone on the same page, but the lack of a prepared agenda and post-meeting recap is preventing that meeting from being useful. Those are easy changes that flip the equation and get the activity back on track to being something meaningful.
Other times, the answer is to simply remove it with the understanding that either the problem that the system was meant to solve no longer exists, or that the problem exists but the system is doing nothing to improve it.
I think there are 4 components to effectively designed systems:
The key here is that these systems have to be malleable. Just because it worked for a short period doesn’t mean it will work forever! Perhaps the problem no longer exists, or perhaps the organizational goals have changed. Any change in the surrounding context requires you to reassess the dependent systems and tweak them accordingly. If you don’t do this, you’ll end up stuck in an organization where people do things all the time for unintelligible reasons in ways that are not tied at all to meaningful output. This isn’t a rare occurrence at all.
None of this is easy. To do this right, you have to deeply understand your organization and its purpose. You have to be deliberate with your actions and intentions, frequently revisiting things and tweaking them as the surrounding context changes. You have to be constantly aware of the state of your organization and the habits and systems that are being built.
To make it harder, it is not sufficient to do something that is a “best practice”. You must marry the right action with the right context because if it’s not coupled well, you’re on the way to creating a bureaucratic maze.
Reasons for doing things matter. Purpose and intentions matter. The problems you’re solving for matter. You must be critically aware of all of these factors while building an organization, or else you’ll be flying blind into a storm of your own creation.
You can read more of my writing here.