A funny thing happens with woo sometimes, in the rationality community. There's a frame that says: this is a mix of figurative stuff and dumb stuff, let's try to figure out what the figurative stuff is pointing at and salvage it. Let's call this "salvage epistemology". Unambiguous examples include the rationality community's engagement with religions, cold-reading professions like psychics, bodywork, and chaos magic. Ambiguous examples include intensive meditation, Circling, and many uses of psychedelics.
The salvage epistemology frame got locally popular in parts of the rationality community for a while. And this is a basically fine thing to do, in a context where you have hyper-analytical programmers who are not at risk of buying into the crazy, but who do need a lens that will weaken their perceptual filters around social dynamics, body language, and muscle tension.
But a bad thing happens when you have a group that is culturally adjacent to the hyper-analytical programmers, but whose members aren't that sort of person themselves. They can't, or shouldn't, take for granted that they're not at risk of falling into the crazy. For them, salvage epistemology disarms an important piece of their immune system.
I think salvage epistemology is infohazardous to a subset of people, and we should use it less, disclaim it more, and be careful to notice when it's leading people in over their heads.
I think the amount of work that clause does is part of what makes the question worth answering...or at least makes the question worth asking.
I'm not a fan of inserting this type of phrasing into an argument. I think it'd be better to either argue that the claim is true or argue that it is false. To me, this type of claim feels like an applause light. Of course, it's also possibly literally accurate...maybe most claims of the type we're talking about are erroneous and clung to because of the makes-us-feel-superior issue, but I don't think that literally accurate aspect makes the argument more useful or less of an applause light.
In other words, I don't have access to an argument that says both of these cannot exist:
In either case Group A comes across badly, but in case 2, Group A is right.
If we cannot gather any more information or make any more arguments, it seems likely that case #1 will usually be the reality we're looking at. However, we can gather more information and make more arguments. Since that is so, I don't think it's useful to assume bad motives or errors on the part of Group A.
I don't really know. The reason for my root question was to suss out whether you had more information and arguments, or were just going by the heuristics that make you default to my case #1. Maybe you have a good argument that case #2 cannot exist. (I've never heard a good argument for that.)
eta: I'm not completely satisfied with this comment at this time, as I don't think it completely gets across the point I'm trying to make. That being said, I assign a <50% chance that I'll finish rewriting it in some manner, so I'm going to leave it as is and hope I'm being overly negative in my assessment of it, or at least that someone will be able to extract some meaning from it.