I've looked into this question a little, but not very far. The following are some trailheads I have on the list to investigate when I get around to it. My current estimation is that all of these are, at best, tangential to the problem that I (and it sounds like you) are interested in: getting to the truth of epistemic disagreements. My impression is that there are lots of things in the world about resolving disputes, but not many of them are interested in resolving disputes in order to get the right answer. But I haven't looked very hard.
Nevertheless...
Most of the things that I know about, and that seem to be in the vein of what you want, have come from our community. As you say, there's CFAR's Double Crux. Paul wrote this piece as a precursor to an AI alignment idea. Anna Salamon has been thinking about some things in this space lately. I use a variety of homegrown methods. Arbital was a large-scale attempt to solve this problem. I think the basic idea of AI safety via debate is relevant, if only for theoretical reasons (Double Crux makes use of the same principle of isolating the single most relevant branch in a huge tree of possible conversations, but Double Crux and AI safety via debate use different functions for evaluating which branch is "most relevant").
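To make the shared principle concrete: given a tree of possible conversation branches, you repeatedly descend into the branch a relevance function scores highest, so the two frameworks differ only in their choice of that function. Here's a minimal sketch of that idea; the names, tree structure, and toy relevance function are all illustrative assumptions, not either method's actual specification.

```python
# Illustrative sketch only: conversations as a tree of possible branches,
# where we follow the child that a supplied relevance function scores
# highest. Double Crux and AI safety via debate would plug in different
# relevance functions; everything here is a hypothetical simplification.

from dataclasses import dataclass, field

@dataclass
class Branch:
    claim: str
    children: list["Branch"] = field(default_factory=list)

def most_relevant_path(root: Branch, relevance) -> list[str]:
    """Descend the tree, at each node following the child with the
    highest relevance score, returning the chosen line of discussion."""
    path = [root.claim]
    node = root
    while node.children:
        node = max(node.children, key=lambda b: relevance(b.claim))
        path.append(node.claim)
    return path

tree = Branch("We disagree about policy X", [
    Branch("Disagreement about its costs", [
        Branch("Disagreement about a cost estimate")]),
    Branch("Disagreement about goals"),
])

# Toy relevance function: prefer the more specific (longer) claim.
print(most_relevant_path(tree, relevance=len))
```

The point is that the tree of possible conversations is enormous, and the whole value of either framework lies in the relevance function that prunes it down to a single productive thread.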
I happen to have written about another framework for disagreement resolution today; this one in particular is very much in the same family as Double Crux.
Have you come across anything that gives concrete methods for articulating unstated premises?
One of the things certain people with superpowers seem to do, in the Feynman-esque tradition of keeping a list of unusual methods and unusual problems, is have a core loop composed of a pretty flexible representation that they try to port everything into. Then the operations they have for this representation act as a checklist, and they can look for missing or overdetermined edges between vertices or what have you (in this case a graph; I don't know how peo...
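One guess at what that "flexible representation plus checklist" loop might look like, which also bears on the earlier question about articulating unstated premises: claims as vertices, "supports" relations as edges, and a checklist pass that flags premises which do work in the argument but are themselves unsupported. All class and method names here are illustrative assumptions, not any particular practitioner's method.

```python
# Hypothetical argument graph: claims are vertices, supports-edges run
# from premises to conclusions. The "checklist" operation looks for
# load-bearing claims with no support of their own -- i.e. candidate
# unstated assumptions worth making explicit.

from collections import defaultdict

class ArgumentGraph:
    def __init__(self):
        self.claims = set()
        self.supports = defaultdict(set)  # conclusion -> set of premises

    def add_support(self, premise: str, conclusion: str) -> None:
        self.claims.update({premise, conclusion})
        self.supports[conclusion].add(premise)

    def unstated_assumptions(self) -> set:
        """Claims that support something but are themselves unsupported:
        the missing edges a checklist pass would surface."""
        premises = {p for ps in self.supports.values() for p in ps}
        return {p for p in premises if not self.supports[p]}

g = ArgumentGraph()
g.add_support("AI timelines are short", "We should fund safety work now")
g.add_support("Current trends extrapolate", "AI timelines are short")
print(g.unstated_assumptions())  # -> {'Current trends extrapolate'}
```

With two people, each could populate the graph with their own support edges and then run the same checklist, so the disagreement localizes to specific missing or contested edges rather than staying at the level of conclusions.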
That last link seems to go to the Wikipedia article on argument mapping, and not whatever you wrote about today.
Instead of searching for things that are about disagreements, I'd look for things that are about creating technical diagrams or other large-scale representations of problems, and then figure out which aspects work well with two people.
This and other publications within intelligence analysis were useful for me. There might be some stuff written up somewhere on the different aggregation methods that were tried for superforecaster teams (the material I found here was pretty vague; it seemed like they tried a lot of things and nothing was a grand slam versus the others). Judgmental bootstrapping and deliberate practice also have overlap.
I've seen many books and schools of thought that seem to be about conflict resolution, such as Crucial Conversations and Non-Violent Communication. In those settings there are multiple parties that want different things, with strong emotional undertones/overtones, and these books advise you on how to navigate those conflicts, find some sort of common ground, and get people what they want.
So far the Double Crux framework is the only thing I've seen with the explicit goal of resolving disagreements, especially disagreements about technical topics. Can anyone recommend other books or bodies of work that have this explicit goal?