It's proven that rational agents with matching priors, whose posteriors are common knowledge, cannot agree to disagree: https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem . This doesn't even require an exchange of evidence; the mere fact that each trusts the other to be rational is enough to move their beliefs into alignment.
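To make the mechanism concrete, here is a toy sketch of the back-and-forth version of the result (the "we can't disagree forever" process studied by Geanakoplos and Polemarchakis): two agents share a uniform prior over a small set of possible worlds, each knows only which cell of their own partition the true world fell in, and they repeatedly announce their posteriors for an event A. The worlds, partitions, and event below are invented purely for illustration.

```python
# Toy sketch of the "we can't disagree forever" back-and-forth (Geanakoplos &
# Polemarchakis) behind Aumann-style agreement: two agents share a uniform
# common prior over a small set of worlds, each only knows which cell of their
# own partition the true world lies in, and they repeatedly announce posteriors
# for an event A. No raw evidence changes hands, yet the announcements alone
# force agreement. All worlds, partitions, and the event are made up.
from fractions import Fraction

WORLDS = set(range(9))
A = {0, 4, 8}            # the event the agents argue about
TRUE_WORLD = 0

# Each agent's private information: which cell of their partition occurred.
partition_1 = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]
partition_2 = [{0, 1, 2, 3}, {4, 5, 6, 7}, {8}]


def cell_of(partition, world):
    """The cell of `partition` containing `world`."""
    return next(cell for cell in partition if world in cell)


def posterior(event, cell):
    """P(event | cell) under a uniform common prior: just a count ratio."""
    return Fraction(len(event & cell), len(cell))


def refine(partition, other, event):
    """Split each cell by what the other agent just announced at each world."""
    refined = []
    for cell in partition:
        groups = {}
        for w in cell:
            announced = posterior(event, cell_of(other, w))
            groups.setdefault(announced, set()).add(w)
        refined.extend(groups.values())
    return refined


for round_no in range(1, 10):   # a handful of rounds is plenty for this example
    q1 = posterior(A, cell_of(partition_1, TRUE_WORLD))
    q2 = posterior(A, cell_of(partition_2, TRUE_WORLD))
    print(f"round {round_no}: agent 1 says {q1}, agent 2 says {q2}")
    if q1 == q2:
        break
    # Simultaneous updates: each agent refines on the other's announcement.
    partition_1, partition_2 = (refine(partition_1, partition_2, A),
                                refine(partition_2, partition_1, A))
```

With these made-up numbers the agents report 1/3 versus 1/4 for a few rounds and then both settle on 1/3, without either ever handing over the evidence (their partition cell) directly.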
It's well-known that this doesn't apply to humans, who are not fully rational and don't have matching priors (or, really, any consistent priors). I think this kills your thesis as well.
In that case, both parties would have the same information which will then be processed the same way. Just by these factors, there shouldn't be.
Priors are still a problem. But that's not the biggest problem.
However, disagreements do still exist, and we'd like to believe we're rational, so the problem must be in the exchange of information.
WHAT? The problem may include difficulty in exchange of information, but you can't blindly and silently jump from "we'd like to believe we're rational" to "we are, in fact, rational". The biggest problem is that humans are NOT rational, and anyone who thinks otherwise is deluded.
As we progress as a species we expand our languages to communicate more complexity
Perhaps true, but this doesn't lead to cost-free instantaneous perfect information exchange (aka brain-state merging of two individuals). Leaving aside the time and complexity issues, there are adversarial drives that make self-interested agents (like humans) PREFER imperfect communication.
I agree. My main point is not that we're rational yet we disagree. It's that even as we strive to be rational in the future, we can still disagree due to imperfections in language. Perfect communication doesn't entail complete revelation of brain states: even with perfect communication, humans can still be selective about what to communicate, so self-interest wouldn't be a major problem.
I have no clue how to determine which of the following contribute how much to any given disagreement:
My intuition is that #5 is pretty far down the list, ESPECIALLY if you separate out "imperfect understanding of the speaker's context and their reasons for their communication choices" into a new bullet point.
This leads me to believe that language does evolve, but it won't make much of a dent in the conflict and disagreement we see among people.
Suppose rationality is a set of principles that people agree on for processing information and arriving at conclusions. Then, given cost-free information exchange, should rational disagreements still exist? In that case, both parties would have the same information which will then be processed the same way. Just by these factors, there shouldn't be.
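To spell out that step with a minimal sketch (the hypothesis, prior, and likelihoods below are made-up numbers): if two parties share the same prior, see the same evidence, and apply the same update rule, Bayes' rule mechanically gives them the same posterior, so any remaining disagreement has to come from somewhere else.

```python
# Minimal illustration of "same prior + same evidence + same rule => same
# conclusion". The hypothesis H and all numbers are made up.

def bayes_posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) for a binary hypothesis H after observing evidence E."""
    joint_h = prior_h * p_e_given_h
    joint_not_h = (1 - prior_h) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Two parties with identical priors, identical evidence, identical procedure:
alice = bayes_posterior(prior_h=0.2, p_e_given_h=0.9, p_e_given_not_h=0.3)
bob = bayes_posterior(prior_h=0.2, p_e_given_h=0.9, p_e_given_not_h=0.3)

assert alice == bob          # identical inputs cannot yield different outputs
print(alice)                 # ~0.43 with these made-up numbers
```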
However, disagreements do still exist, and we'd like to believe we're rational, so the problem must be in the exchange of information. Previous posts have mentioned how sometimes there is too much background information to be exchanged fully. Here I'd like to point to a more general culprit: language.
Not all knowledge can be expressed through language, and not all language expresses knowledge. Even language in the broadest sense, including the specialized symbols of mathematics, nth-order logic, and other communicable disciplines, still cannot convey a significant portion of our knowledge, such as intuition and creativity. A substantial number of studies have shown that intuition is more accurate than deliberate thinking in certain areas and much worse in others, yet we have not come up with a way to systematically decide when to rely on intuition and when on rational judgement.
And I'd say this is the obstacle in most rationalist disagreements: it's not that they would definitively agree if they could discuss freely for as long as they liked; it's that each party holds knowledge that is incommunicable yet has considerably swayed their judgement. As we progress as a species we expand our languages to communicate more complexity, so this issue should gradually fade away, unless the scale of complexity of knowledge is infinite.