Been pondering: will conflict always exist? A major subquestion: suppose we all merge utility functions and form an interstellar community devoted to optimizing the merger. It'll probably make sense for us to specialize in different parts of the work, which means accumulating specialist domain knowledge and becoming mutually illegible.
When people have very different domain knowledge, they also fall out of agreement about where the borders of their domains lie. (EG: A decision theorist insists that they know things about the trajectory of AI that ML researchers don't. The ML researchers don't believe them and don't heed their advice.) In these situations, even when all parties are acting in good faith, they know that they won't be able to reconcile certain disagreements, and it may seem to make sense, from some perspectives, to simply impose their own way in those disputed regions.
Would there be any difference between the dispute resolution methods used here and those used between agents with different core values (war, peace deals, and, most saliently, war proxies)?
Would the parties in the conflict use war proxies that take physical advantages in different domains into account? (EG: Would the decision theorist block ML research in disputed domains where their knowledge of decision theory gives them a force advantage?)
I have thought about similar things, with just humans as the subject. I'm hoping that the overlap is great enough that some of these ideas may be useful.
Firstly, what is meant by conflict? Hostility? More intelligent agents seem to fight less as the scope they consider increases. Stupid people have a very small scope, which might be limited to themselves and the moment in question. Smarter people start thinking about the future and about the environment they're in (as an extension of themselves). Smarter people still seem to dislike tribalism and other conflict between small groups; their scope is larger, often a superset of both groups, which leads to the belief that conflict between these groups is mistaken and of purely negative utility. (Perhaps any agent will wish to increase the coherence within the scope of whatever system it wants to improve.)
Secondly, by thinking about games, I have come to the idea of "sportsmanship", which is essentially good-faith conflict. You could say it's competition with mutual benefit. Perhaps this is not the right way to put it, since the terms seem contradictory. Anyway, I've personally come to enjoy interacting with people who are different from me, who think differently and even value differently than I do. This can sometimes lead to good-faith conflict akin to what's seen in games. Imagine a criminal who has just been caught, saying to the policeman, "Got me! Impressive, how did you manage?" And the policeman replying, perhaps, "It wasn't easy, you're a sneaky one! You see, I noticed that ...".
I've personally observed a decline in this "sportsmanship", but its presence correlates with something like intelligence or wisdom. I don't know whether it's a function of intelligence or of humanity (or morality), though.
That said, my own "live and let live", or even "play whatever role you want and I will do the same", kind of thinking might be the result of an exotic utility function. My self-actualization has gotten me much higher on the hierarchy of needs than the greed and egoism you tend to see in anyone who fears for their own survival or well-being. Perhaps this anti-Moloch way of living is a result of wanting to experience life rather than attempting to control or alter it, which is another interesting idea.