Controlling is especially toxic when applied to the self-model of another person. Non-Violent Communication's ability to defuse conflict comes mostly from requiring people to talk in a way that makes it difficult to make control-statements about the other's self-model or unobservable variables.
You do need to have enough trust that the other agent won't use the fact that you'll hear them out and model them in more detail to disrupt you in ways you're not willing to risk, but assuming good faith on the meta layer is often (though not always) safe.
I disagree with the framing that "disrupt" is something you do to someone else, rather than something you allow people to do to you. Proper humility and honest epistemics are protective against this kind of disruption (e.g. "You say that I'm fat, and I'm very insecure about this so it scares me to think of the implications, but I also know that I'm not able to figure out what they are with any reliability, so I'm not going to jump to any conclusions that might be harmfully false. Maybe that means my crush won't like me, but I don't actually know, so I'm still gonna ask."), and people you don't take seriously can't disrupt you, because you just don't take them seriously ("You're fat!", "lol"). Getting disrupted is a consequence of your belief that the person is sharing information worth attending to, and of your mismanagement of that information. Any system that tries to pin responsibility for one's own belief system on someone else is attempting a deliberately brittle strategy which is going to conflict hard with reality, in ways that lie outside the self-imposed limitations of their FOV.
That's not to say that, with a sufficiently advanced model of a person, you can't deliberately disrupt them, but the considerations go the other way. The better a model you have of someone, the easier it is to recognize when they're trying to mislead you. The better a model you have of them, the more you can punish them for stepping out of line. When you see them with sufficient clarity, and get them to sign off that you've passed their ITT, and can find the contradiction so clearly that you can ask with genuine curiosity "How do you square this? Doesn't that look kinda bad?", then you can disrupt the shit out of them (which is why feeling seen and feeling intimidated can go hand in hand). And if they don't open to you, and don't model you, they can't touch you.
It's not trust in the other that is required, but trust in one's own epistemics. If you can trust yourself, and don't trust the other guy to not try some shenanigans, that's when it's most important to drop the attempts to control and look them in the eye with intent to see. "I can't trust that other guy [it's his failure not mine]" doesn't actually work as an excuse, so when opening is hard we know we have some work to do.
That said, I otherwise agree that this is all true and important (strong upvote).
How's this for a crux: How much of the variance in whether person A disrupts person B is explained by each variable?
Variance is a statistical property of populations, and what I'm saying isn't about statistical properties of populations.
Say for example we have a monoculture Uniformistan, where everyone is virtually identical. The culture has very strong norms against creating disruption, so everyone tries really hard to avoid saying anything that will disrupt another person. People find this community very pleasant to be in, since no one ever says anything offensive, no one ever tells them they smell bad, etc. Well, pleasant on this front, at least.
Then we import Susie from a different culture where telling the truth is valued and disruptions are seen as necessary parts of life and what matters is recovery. She tells you that you smell bad -- as gently and tactfully as she can, but she tells you. She tells everyone this, since y'all stopped showering once your fears of negative social feedback were quelled by culture. This is why you all think "Man, everyone else stinks", and the culture isn't so pleasant on that front.
Now all the variance is contained in "Are you talking with Susie-The-Disruptor?" and none of it in "Are you taking responsibility for making sure you can handle the truth?" -- because no one does that. The more people start to take responsibility for their own epistemics, the less of the variance is explained by ~~Susie-The-Disruptor~~ Susie-The-Truth-Teller. In Susie's home town everyone is a truth teller, and few people stink, but there is varying skill in handling the more difficult information that comes after you've handled personal hygiene, so the split of variance is very different.
The variance depends on the population, yet the dynamics are the same everywhere. The point I'm making holds in Uniformistan, even as your whole culture assures you that Susie-The-Disruptor is the problem because conflict follows her, and that you're perfect in every way because you're just like them.
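The Uniformistan story can be run as a toy simulation. To be clear, the 1% Susie rate, the 50% handling-skill rate, and the `disrupted` rule below are illustrative assumptions I'm making up for the sketch, not anything claimed in the thread; the point is just that one fixed disruption rule yields opposite variance decompositions in the two towns.

```python
import random

random.seed(0)

def disrupted(truth_teller, handles_truth):
    # Toy rule: you only get disrupted when someone tells a hard truth
    # AND you haven't taken responsibility for handling it.
    return truth_teller and not handles_truth

def r_squared(xs, ys):
    # Squared Pearson correlation: the fraction of variance in ys
    # "explained" by xs. Returns 0.0 when either input is constant.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    if sxx == 0 or syy == 0:
        return 0.0
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy * sxy / (sxx * syy)

# Uniformistan: nobody practices handling hard truths; Susie (~1% of
# conversations) is the lone truth-teller.
uniformistan = [(random.random() < 0.01, False) for _ in range(10_000)]
# Susie's home town: everyone tells the truth; handling skill varies.
home_town = [(True, random.random() < 0.5) for _ in range(10_000)]

for name, town in [("Uniformistan", uniformistan), ("Home town", home_town)]:
    ys = [disrupted(t, h) for t, h in town]
    print(name,
          "| truth-teller explains:", round(r_squared([t for t, _ in town], ys), 2),
          "| own handling explains:", round(r_squared([h for _, h in town], ys), 2))
```

In Uniformistan, all of the variance loads on "are you talking with a truth-teller" and none on handling skill, because handling skill is constant at zero; in the home town the decomposition flips completely, even though the `disrupted` rule never changed.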
In the terms of the post, the culture in Uniformistan attempts to Control conflicts, rather than Opening to the conflicts as necessary to actually resolve them. It's a control spiral going meta (for which the next move is "YOU CAN'T CENSOR FREE SPEECH!").
That cartoon of the maze could be made into a nice little collaborative mini-game.
Not sure what that would be useful for, but it's just something that came to mind.
Nice pattern of using an AI block inside of a collapsible section, and using it to clarify/exemplify what you mean in a slightly different voice.
tl;dr: with multiple agents, control attempts tend to create conflict, because control attempts shut down communications channels, which leads to feedback loops in the form of intensifying tug-of-war over variables. intentionally relaxing control to better understand the other agents can break the cycle, and forms the basis of many therapeutic and mediation techniques.
[epistemic status: mostly quite confident this is real and a common source of large amounts of suffering+blind spot, but i'm making a fair few claims i'm not giving the full justification and reasoning trace for here, please check this against your experience and try it out rather than expecting me to prove this]
Control in multiplayer settings
When multiple agents try to control[1] the same variable to different set points, they don't just waste resources in zero-sum competition, they also tend to close up their information-sharing surfaces in a way that blinds them both to a wider space of possibilities.
Each agent's attempts to adjust reality to their goal-models land as painful prediction error for another who has different preferences, leading to "weaken the other agent" as an instrumentally convergent goal. When incoming information might be an attack vector, closing communication channels becomes instrumentally convergent,[2] but those very channels were the ones needed to accurately model each other and notice better ways forward!
If neither agent steps outside the narrow frame of optimizing that variable, conflict can consume arbitrary cognitive resources while it continues, with the cycle of conflict ending only when one agent is subjugated and becomes, in at least the relevant domain, effectively a managed subagent. This is often true even when there are solutions which both agents would have been very happy with, but there was not enough shared context about each other's preference landscapes to locate them.
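That tug-of-war can be sketched in a few lines. This is a minimal toy model with arbitrary gains and set points (nothing from the post): each agent applies proportional control toward its own target, the variable parks where the pushes cancel, and control effort keeps draining for as long as neither steps out of the frame.

```python
# Toy control cycle: two agents apply proportional control to one shared
# variable, toward incompatible set points. Gains and targets are
# arbitrary illustrative choices.
x = 0.0
set_points = {"A": 1.0, "B": -1.0}
gain = 0.5
effort = {"A": 0.0, "B": 0.0}

for step in range(100):
    for agent, target in set_points.items():
        push = gain * (target - x)   # each agent corrects toward its own target
        x += push
        effort[agent] += abs(push)   # resources spent this step

# The variable settles between the set points, so both agents keep seeing
# error; per-step effort never decays to zero, and total effort grows
# without bound for as long as the conflict runs. Neither target is reached.
print(round(x, 2), {k: round(v, 1) for k, v in effort.items()})
```

The only exits from this loop are the ones the post names: one agent is subjugated (its gain driven to zero), or the agents step outside the single-variable frame entirely.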
Opening - An escape from control cycles
Whenever you notice your cognitive slack is being eaten by conflict and that damn variable won't stay where you put it, and you have enough slack to handle some potentially disruptive incoming data or trust that the other agent won't defect on a resolution process,[3] consider opening instead.
Any agent can step up to de-escalate a control cycle. Simply moving into curiosity, genuinely trying to understand why the other might want something different (aka passing their Ideological Turing Test), is often enough. With both your own and the other's perspective contained in one brain, it's often straightforward to see third ways that are strongly positive-sum.
Sometimes a more formal process is helpful, like in mediation where both parties have a place to speak, be heard, and hear that they have been heard. It can also be helpful to hold just the true end-state in mind, putting aside your current best plan for achieving that outcome.
Examples at different scales
Interpersonal: Classic household tension over dirty dishes in the sink.
Control routes resources into forcing compliance, not into understanding what's actually happening for the other person.
Opening, by contrast, shifts resources to understanding what's actually going on for both of you. This isn't merely "being nicer"; it's a different information-processing strategy that enables discovering solutions neither party saw initially.
Intrapersonal: When we notice an unwanted emotion, the default is often direct control:
One subsystem overrides another; the suppressed one amplifies its signals; resources deplete in the struggle. This is why control often intensifies the states it tries to suppress.
Opening, by contrast, creates space for information exchange across internal boundaries. Techniques like Focusing and Internal Family Systems work precisely by replacing control with curiosity toward internal resistance.
Societal: Consider US gun policy polarization. Urban populations directly experience community gun violence, quick police response, limited legitimate use cases. Rural populations directly experience guns as practical tools, long police response times, responsible gun culture. Each side's model includes mechanisms the other hasn't experienced.
After shootings, urban groups try to control "gun availability"; rural groups see tools essential to their lives being threatened and amplify resistance; urban groups read that resistance as confirming the danger. Each cycle increases certainty in each group's model while reducing ability to hear the other's actual concerns: the same pattern as the interpersonal and intrapersonal examples, at societal scale.
"But the variable is in the wrong place!"
If you try to optimize over an agent which cares about things you're not aware of or mindful of, you will by default set things they care about to states which are extreme and undesirable to them. Or, as Stuart Russell puts it:
"A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable."
The harder you optimize, the more extreme this will be. A better-informed version of yourself would likely not approve of those outcomes either, especially if you value that system being effective, healthy, and cognitively flexible, or share goals with it.
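Here's a toy illustration of that claim. The Gaussian model and the additive proxy score are assumptions made up for this sketch: when the score you optimize partly rewards trampling a variable you aren't tracking, stronger selection pressure drives that hidden variable to more extreme values.

```python
import random

random.seed(0)

def mean_hidden_after_search(n, trials=200):
    # Each option has a genuine-quality term and a hidden variable that
    # some other agent cares about (and would prefer near zero); the
    # visible score conflates the two. Picking the best of n options by
    # visible score alone therefore also selects for extreme values of
    # the hidden variable.
    total = 0.0
    for _ in range(trials):
        options = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
        _, hidden = max(options, key=lambda o: o[0] + o[1])
        total += hidden
    return total / trials

# More optimization pressure (a larger search) drags the hidden variable
# further from the zero the other agent would prefer.
print(round(mean_hidden_after_search(4), 2), round(mean_hidden_after_search(1000), 2))
```

The extra displacement of the hidden variable is exactly the "extreme and undesirable state" the other agent then shows up to resist.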
Agents which seem to be resisting you are caring for something. Remaining open to information about what they are caring for is necessary to avoid both conflict and catastrophic failures of myopic optimization and control spirals.
This doesn't speak against trying to shift variables, just against blindly forcing through resistance from other agents you want to be healthy. Variables are often in the wrong place! It's just that other agents often have relevant information you don't. Resistance is a signal the agent you're meeting has something to offer your models, if you make space to receive it.
The Evolution of Integration
Local incentives seem to convergently drive control spirals across many scales of agency, from the intrapersonal dynamics that Internal Family Systems works with, through personal relationships, all the way up to superagents on the scale of social movements and political parties.
On the bright side, the cost imposed by these control spirals creates pressure for the emergence of meta-systems which bring the warring subsystems together under a framework that encompasses both to help integrate that conflict.[4]
Slightly more precisely: When different agentic processes select among possible futures they are modelling, but select for futures with different values for some property they both care about. In the agency as time-travel frame, there's a tension between the tugs towards different futures.
If two computer hackers are trying to break into each other's devices, they'll both use firewalls to restrict incoming information!
You do need to have enough trust that the other agent won't use the fact that you'll hear them out and model them in more detail to disrupt you in ways you're not willing to risk, but assuming good faith on the meta layer is often (though not always) safe.
such as legal systems, principled frameworks for mediation or communication, therapeutic techniques, meditation practices, the SSC culture war comment threads, etc