even when all parties are acting in good faith, they know that they won't be able to reconcile certain disagreements, and it may seem to make sense, from some perspectives, to simply impose their own way in those disputed regions.
Aumann's agreement theorem, which is discussed in the paper 'Are Disagreements Honest?' by Hanson and Cowen, suggests that perfectly rational agents (updating via Bayes' theorem) should not disagree in this fashion, even if their life experiences were different, provided that their opinions on all topics are common knowledge and they have common priors. This is often framed as saying that such agents cannot 'agree to disagree'.
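Roughly, in symbols (my own paraphrase of the standard statement, not a quote from the paper): for an event $A$, a shared prior $P$, and private information $\mathcal{I}_1$ and $\mathcal{I}_2$,

$$
q_1 = P(A \mid \mathcal{I}_1), \quad q_2 = P(A \mid \mathcal{I}_2), \quad \text{$q_1$, $q_2$ common knowledge} \;\Longrightarrow\; q_1 = q_2.
$$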
I'm a bit hazy on the details, but broadly, two agents with common priors but different evidence (i.e. different life experiences or expertise) can share their knowledge, mutually update on what the other knows, and eventually converge on an agreed probability distribution.
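As a concrete toy illustration of that back-and-forth, here is a sketch of the alternating-announcement process of Geanakoplos and Polemarchakis ("We can't disagree forever"), which is one standard way the convergence is made precise. The states, prior, event, and partitions below are all invented for the example:

```python
from fractions import Fraction

# Toy sketch (made-up states, event, and partitions) of the
# Geanakoplos-Polemarchakis announcement process behind Aumann's
# agreement theorem: agents with a common prior repeatedly announce
# their posteriors for an event and update on each other's announcements.

STATES = [1, 2, 3, 4, 5, 6]
PRIOR = {w: Fraction(1, 6) for w in STATES}   # common prior (uniform)
EVENT = {1, 4}                                # the proposition in dispute
TRUE_STATE = 1

# Each agent's private evidence is a partition of the state space:
# they know which block the true state falls in, nothing more.
PART_A = [{1, 2}, {3, 4}, {5, 6}]
PART_B = [{1, 2, 3}, {4, 5, 6}]

def cell(partition, state):
    """The block of `partition` that contains `state`."""
    return next(block for block in partition if state in block)

def posterior(info):
    """P(EVENT | info) under the common prior."""
    total = sum(PRIOR[w] for w in info)
    return sum(PRIOR[w] for w in info if w in EVENT) / total

def refine(listener, speaker):
    """After hearing the speaker's posterior, the listener keeps, within
    each of their own blocks, only those states that would have led the
    speaker to announce that same number."""
    refined = []
    for block in listener:
        groups = {}
        for w in block:
            groups.setdefault(posterior(cell(speaker, w)), set()).add(w)
        refined.extend(groups.values())
    return refined

rounds = 0
while True:
    q_a = posterior(cell(PART_A, TRUE_STATE))
    q_b = posterior(cell(PART_B, TRUE_STATE))
    print(f"round {rounds}: A says {q_a}, B says {q_b}")
    if q_a == q_b:
        break                                # agreement reached
    PART_B = refine(PART_B, PART_A)          # B updates on A's announcement
    PART_A = refine(PART_A, PART_B)          # A updates on B's announcement
    rounds += 1
```

In this run the agents start out at 1/2 versus 1/3 and end up agreeing on 1/2 after a couple of rounds of announcements, even though neither ever learns the true state exactly.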
Of course, humans are not perfectly rational, so this rarely happens (this is discussed in the Hanson/Cowen paper). There are some results which seem to suggest that you can relax some of Aumann's assumptions to more realistic ones and still get similar results. Scott Aaronson showed that Aumann's theorem holds (to a high degree) even when the agents' priors don't agree perfectly and they can exchange only limited amounts of information.
Maybe the agents who are alive in the future will not be perfectly rational, but I guess we can hope that they might be rational enough to converge close enough to agreement that they don't fight over important issues.
More detailed comment than mine, so strong upvote. However, there's one important error in the comment:
Of course, humans are not perfectly rational, so this rarely happens
Actually, it constantly happens. For instance, yesterday I had a call with my dad where I told him about my vacation in Norway, where the Bergen train had been cancelled due to the floods. He believed me, which is an immediate example of Aumann's agreement theorem applying.
Furthermore, there were a bunch of things that I had to do to handle the cancellations, which also relied on Aumann...
I'm a bit hazy on the details, but broadly, two agents with common priors but different evidence (i.e. different life experiences or expertise) can share their knowledge, mutually update on what the other knows, and eventually converge on an agreed probability distribution.
But whether I believe the information you give me depends on my belief in your credibility, and vice versa. So it's entirely possible to exchange information and still end up with different posteriors.
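A minimal sketch of that point with made-up numbers (the honesty model is my own toy assumption, not something from the thread): two listeners who hear the same report but assign the speaker different credibility end up with different posteriors.

```python
# Toy model: an honest speaker reports the truth; a dishonest one claims
# "true" regardless of the facts. The listener's posterior that the claim
# is true therefore depends on how honest they believe the speaker to be.
def posterior_after_report(prior_true, p_honest):
    p_report_if_true = 1.0              # honest or not, the speaker says "true"
    p_report_if_false = 1.0 - p_honest  # only a dishonest speaker says "true"
    evidence = p_report_if_true * prior_true + p_report_if_false * (1 - prior_true)
    return p_report_if_true * prior_true / evidence

print(posterior_after_report(0.5, 0.9))   # trusting listener  -> ~0.91
print(posterior_after_report(0.5, 0.5))   # sceptical listener -> ~0.67
```

Under Aumann's assumptions, the listeners' estimates of the speaker's credibility would themselves be subject to the same argument, so the gap can't persist once everything is common knowledge; without those assumptions, swapping the report alone doesn't force agreement.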
You seem to be looking away from the aspect of the question where usefully specialized agencies cannot synchronize domain knowledge. That gap keeps reasserting itself because specialization is valuable: there's an incentive to deepen knowledge differences over time, and also to bring differently specialized agents closer together (though of course they need to be mutually legible in some ways to benefit from it). This is the most interesting and challenging part of the question, so that was kind of galling.
But the Aaronson paper is interesting; it's possible it addresses this. Thanks for that.
I have thought about similar things, with just humans as the subject. I'm hoping that the overlap is great enough that some of these ideas may be useful.
Firstly, what is meant by conflict? Hostility? More intelligent agents seem to fight less as the scope they consider increases. Stupid people have a very small scope, which might even be limited to themselves and the moment in question. Smarter people start thinking about the future and about the environment they're in (as an extension of themselves). Smarter people still seem to dislike tribalism and other conflict between small groups; their scope is larger, often being a superset of at least both groups, which leads to the belief that conflict between these groups is mistaken and of purely negative utility. (But perhaps any agent will wish to increase the coherence inside the scope of whatever system they wish to improve.)
Secondly, by thinking about games, I have come to the idea of "sportsmanship", which is essentially good-faith conflict. You could say it's competition with mutual benefit, though perhaps that's not the right way to put it, since the terms seem contradictory. Anyway, I've personally come to enjoy interacting with people who are different from me, who think differently and even value things differently than I do. This can sometimes lead to good-faith conflict akin to what's seen in games. Imagine a criminal who has just been caught saying "Got me! Impressive, how did you manage that?" and the policeman replying, perhaps, "It wasn't easy, you're a sneaky one! You see, I noticed that ...".
I've personally observed a decrease in this "sportsmanship", but it correlates with something like intelligence or wisdom. I don't know if it's a function of intelligence or of humanity (or morality), though.
That said, my own "Live and let live", or even "Play whatever role you want and I will do the same", kind of thinking might be a result of an exotic utility function. My self-actualization has gotten me much higher on the hierarchy of needs than the greed and egoism you tend to see in anyone who fears for their own survival or well-being. Perhaps this anti-Moloch way of living is a result of wanting to experience life rather than attempting to control or alter it, which is another interesting idea.
Most investigations into this question end up proving something called Aumann's agreement theorem, which roughly speaking states that if the different agents correctly trust each other, then they will end up agreeing with each other. Maybe there are some types of knowledge differences which prevent this once one deviates from ideal Bayesianism, but if so, it is not known what they are.
Nope! If we're optimizing the merger, we'd see that problem coming and install whatever transhumanism is necessary to avert this.
It's not necessarily going to be seen as a problem; it would be seen as an unavoidable inefficiency.
Note, I don't expect the fight to play out. It's a question about what sorts of tensions the conflict resolution processes reflect. This is explained in the question body.
Conjecture: There is no way to simplify the analysis of the situation, or the negotiation process, by paraphrasing an irreconcilable epistemic conflict as a values conflict (there is no useful equivalence between an error theory and a conflict theory). I expect this to be so because the conflict is a result of irreducible complexity in the knowledge sets (and the parties' inability to hold the same knowledge). So applying another transform to the difference between the knowledge sets won't give you a clearer image of the disputed borders; you just won't be able to apply the transform.
(Note: if true, this would be a useful thing to say to many conflict theorists: by exaggerating differences in material interests, you make your proposals less informed and so less legitimate.)
Been pondering; will conflict always exist? A major subquestion: Suppose we all merge utility functions and form an interstellar community devoted to optimizing the merger. It'll probably make sense for us to specialize in different parts of the work, which means accumulating specialist domain knowledge and becoming mutually illegible.
When people have very different domain knowledge, they also fall out of agreement about what the borders of their domains are. (EG: A decision theorist insists that they know things about the trajectory of AI that ML researchers don't. The ML researchers don't believe them and don't heed their advice.) In these situations, even when all parties are acting in good faith, they know that they won't be able to reconcile certain disagreements, and it may seem to make sense, from some perspectives, to simply impose their own way in those disputed regions.
Would there be any difference between the dispute resolution methods that would be used here, and the dispute resolution methods that would be used between agents with different core values? (war, peace deals, and most saliently,)
Would the parties in the conflict use war proxies that take physical advantages in different domains into account? (EG: Would the decision theorist block ML research in disputed domains where their knowledge of decision theory would give them a force advantage?)