A lot depends on whether this is a high-bandwidth discussion/debate, or an anonymous post/read of public statements (or, on message boards, somewhere in between). In the interactive case, Alice and Bob could focus on cruxes and specific points of agreement/disagreement. In the public/semi-public case, it's rare that either side puts that much effort in.
I'll also note that a lot of the topics on which such disagreements persist are massively multidimensional, making degree of closeness hard to quantify, so "agreement" is very hard to define. No two humans (and likely no two distinct real agents) have identical priors, so Aumann's Agreement Theorem doesn't apply - they don't HAVE to agree.
And finally, it's not clear how important the disagreements are, compared to the dimensions where the distance is small (near-agreement). Intellectuals focus on the disagreement, both because it's the interesting part, and because that's where some amount of status comes from. A whole lot of these disagreements end up having zero practical impact. Though, of course, some DO matter, and which dimensions are important to agree on is a whole separate domain of disagreement...
I'm talking specifically about discussions on LW. Of course in reality Alice ignores Bob's comment 90% of the time, and that's a problem in its own right. It would be ideal if people who have distinct information would choose to exchange that information.
I picked a specific and reasonably grounded topic, "x-risk", or "the probability that we all die in the next 10 years", which is one number, so not hard to compare, unless you want to break it down by cause of death. In contrived philosophical discussions, it can certainly be hard to determine who agrees on what, but I have a hunch that this is the least of the problems in those discussions.
A lot of things have zero practical impact, and that's also a problem in its own right. It seems to me that we're barely ever having "is working on this problem going to have practical impact?" discussions.
Bob spends 5 minutes thinking about x-risk. He's seen a few arguments about it, so he makes an internal model of the problem, accepts some of the arguments, amends some, comes up with counterarguments to others, comes up with arguments of his own. This belief-state has an enormous number of degrees of freedom. At the same time, these beliefs already generate an opinion on a vast number of possible x-risk arguments.
Alice spends 5 hours detailing her beliefs about x-risk in a post. Bob reads it, point by point. He's seen some of those arguments already; he does not update. Some of the arguments are new to him, but they do not surprise him, and his current beliefs are enough to reject them; he does not update. The post offers new refutations of some of his accepted beliefs, but he can immediately come up with counter-refutations; he does not update. The post offers refutations of arguments he has already rejected, and of new arguments he'd never even consider reasonable; Bob is falling asleep; he does not update. Etc.
Bob leaves a comment with one of his counterarguments to one of Alice's points. They spend hours going back and forth over the next few days. They're explaining their slightly different understanding of the arguments and assumptions, slightly different usage of terms, exchanging long sequences of arguments and counterarguments. Eventually Bob agrees that Alice is right, he updates! But he only updates on that one point in his first comment. He still has many other independent arguments, so the total update to his beliefs about x-risk is tiny.
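The arithmetic behind that tiny update can be sketched with a toy model. Assume (purely for illustration; the numbers and the independence assumption are made up) that Bob's belief lives on a log-odds scale and each of his independent arguments contributes additively. Conceding one argument out of ten barely moves the final probability:

```python
import math

def prob(log_odds):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical: Bob holds 10 independent arguments, each nudging
# him -0.5 log-odds toward "low x-risk".
arguments = [-0.5] * 10
prior = 0.0  # 1:1 odds before any arguments

before = prob(prior + sum(arguments))
# Alice convinces Bob to fully drop exactly one argument.
after = prob(prior + sum(arguments[:-1]))

print(before, after)  # both stay down near 1%
```

Days of debate moved Bob's probability from roughly 0.7% to roughly 1.1%: a real update on the log-odds scale, nearly invisible on the probability scale, and the other nine arguments are still standing.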
Many other people read the same post and find it extremely convincing. How did this happen?
And what about Bob? Alice and Bob are both rational, reasonable people. And he had some interest in x-risk to begin with. But their discussion was a miserable waste of time. How did this happen?