Frequently, I'll be having an argument with someone. And I'll think "Grr! They are doing Obnoxious Behavior X!" or "Arg, they aren't doing Obviously Good Behavior Y!".
Then I complain at them.
And... sometimes, they make the exact same complaint about me.
And then I think about it, and it turns out to be true.
Another portion of the time, they don't complain back at me, but the argument goes in circles and doesn't resolve, and we both feel frustrated for a while. And later, independently, I realize "Oh, I was also failing to do Good Thing Y, or doing Bad Thing X."
Often, "Good Thing X" and "Bad Thing Y" amount to some kind of "not listening", or "not doing enough interpretive labor." It seems to me that I'm explaining something reasonable, and they're not understanding it because of some obvious bias, which should be apparent to them.
But, in order for them to notice that, from inside the situation, they'd have to run the check of:
- TRIGGER: Notice that the other person isn't convinced by my argument
- ACTION: Hmm, check if I might be mistaken in some way. If I were deeply confused about this, how would I know?
And, typically, I haven't actually been running that same check for myself. Or I've done it, but in a kinda superficial way.
Often, adversarial conversational moves beget more adversarial conversational moves. If someone is talking over me, I'm likely to respond by talking over them. If someone seems to be ignoring my arguments, I'm more likely to ignore theirs. This means that by the time a complaint rises to conscious thought, there's a decent chance there's been some kind of escalation cycle where I started doing the same thing, even if they started it. (And, maybe, I started it?)
This has led me to a general habit:
- TRIGGER: Notice that I'm about to complain to someone about a thing they're doing
- ACTION: Check if I'm doing that thing too.
I've mentioned this concept before in passing, but it seemed important enough to warrant its own top-level post.
I used "flat earthers" as an exaggerated example to highlight the dynamics the way a caricature might highlight the shape of a chin, but the dynamics remain and can be important even and especially in relationships which you'd like to be close simply because there's more reason to get things closer to "right".
The reason I brought up "arrogance"/"humility" is that the failure modes you brought up, "not listening" and "having an obvious bias without reflecting on it and getting rid of it", are failures of arrogance. A bit more humility makes you more likely to listen and to question whether your reasoning is sound. As you mention, though, there is another dimension to worry about: the axis you might label "emotional safety" or "security" (i.e. the thing that drives guarded/defensive behavior when it's not there in sufficient amounts).
When you get defensive behavior (perhaps in the form of "not listening" or whatever), cooperative and productive conversation requires that you back up and get the "emotional safety" requirements fulfilled before continuing. Your proposed response assumes that the "safety" alarm is caused by an overreach on what I'd call the "respect" dimension. If you simply back down and consider that you might be the one in the wrong, this will often satisfy the "safety" requirement, because expecting more relative respect can be threatening. It is also epistemically beneficial for you if, and only if, it was a genuine overreach.
My point isn't "who cares about emotional safety, let them filter themselves out if they can't handle the truth [as I see it]", but rather that these are two separate dimensions, and while they are coupled, they really do need to be regulated independently for best results. Any time you try to control two dimensions with one lever, you end up with a one-dimensional curve that you can't regulate at all, and which is therefore free to wander without correction.
While people do tend to mirror your cognitive algorithm so long as it is visible to them, it isn't always immediately visible, so you can get into situations where you *have been* very careful to make sure you're not the one making a mistake, and, because that care hasn't been perceived, you still get "not listening" and the like anyway. In these kinds of situations it's important to back up and make that care visible, but that doesn't necessarily mean questioning yourself again. Often this means listening to them explain their view, which ends up looking almost the same, but I think the distinctions are important because of the other possibilities they help to highlight.
The shared cognitive algorithm I'd rather end up in is one where I put my objections aside and listen when people have something they feel confident in, and where, when I have something I'm confident in, they'll do the same. Things run a lot more smoothly and efficiently when mutual confidence is allowed, rather than treated as something that has to be avoided at all costs, so it's nice to have a shared algorithm that can gracefully handle these kinds of situations.