In my experience, constant-sum games are considered to provide "maximally unaligned" incentives, and common-payoff games are considered to provide "maximally aligned" incentives. How do we quantitatively interpolate between these two extremes? That is, given an arbitrary payoff table representing a two-player normal-form game (like Prisoner's Dilemma), what extra information do we need in order to produce a real number quantifying agent alignment?
If this question is ill-posed, why is it ill-posed? And if it's not, we should probably understand how to quantify such a basic aspect of multi-agent interactions, if we want to reason about complicated multi-agent situations whose outcomes determine the value of humanity's future. (I started considering this question with Jacob Stavrianos over the last few months, while supervising his SERI project.)
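For concreteness, here is one conventional payoff table for the Prisoner's Dilemma (entries are (A's payoff, B's payoff); the specific numbers are just the usual textbook choice, and the same numbers are reused in the sketches further down):

|                  | B cooperates | B defects |
|------------------|--------------|-----------|
| **A cooperates** | (3, 3)       | (0, 5)    |
| **A defects**    | (5, 0)       | (1, 1)    |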
Thoughts:
- Assume the alignment function has range [0, 1] or [-1, 1].
- Constant-sum games should have minimal alignment value, and common-payoff games should have maximal alignment value.
- The function probably has to consider a strategy profile (since different parts of a normal-form game can have different incentives; see e.g. equilibrium selection).
- The function should probably be directed: it measures player A's alignment with player B, which can differ from B's alignment with A. For example, in a prisoner's dilemma, player A might always cooperate and player B might always defect. Then it seems reasonable to say that A is aligned with B (in some sense), while B is not aligned with A (B pursues their own payoff without regard for A's).
- So the function need not be symmetric in the players.
- The function should be invariant to applying a separate positive affine transformation to each player's payoffs; it shouldn't matter whether you add 3 to player 1's payoffs, or multiply the payoffs by a half. (One candidate with this invariance is sketched after this list.)
- The function may or may not rely only on the players' orderings over outcome lotteries, ignoring the cardinal payoff values. I haven't thought much about this point, but it seems important. EDIT: I no longer think this point is important, but rather confused.
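To make the constraints above concrete, here is a minimal sketch (my own illustration, not something proposed in the post) of one candidate: the correlation between the two players' payoffs under the outcome distribution induced by a mixed-strategy profile. Pearson correlation is invariant to separate positive affine transformations of each player's payoffs, equals -1 on constant-sum games and +1 on common-payoff games (assuming the payoffs actually vary on the profile's support). Unlike the directed notion suggested above, though, it is symmetric between the players; the function name and interface are hypothetical.

```python
from math import sqrt

def alignment(payoffs_a, payoffs_b, strat_a, strat_b):
    """Correlation of the two players' payoffs under a mixed-strategy profile.

    payoffs_a, payoffs_b: payoff matrices (lists of lists) indexed by [row action][column action].
    strat_a, strat_b: probability distributions over the row / column actions.
    Returns a value in [-1, 1], or None when some player's payoff is constant
    on the support of the profile (the correlation is then undefined).
    """
    # Joint probability and payoffs of each outcome under independent mixing.
    outcomes = [
        (strat_a[i] * strat_b[j], payoffs_a[i][j], payoffs_b[i][j])
        for i in range(len(strat_a))
        for j in range(len(strat_b))
    ]
    mean_a = sum(p * ua for p, ua, _ in outcomes)
    mean_b = sum(p * ub for p, _, ub in outcomes)
    cov = sum(p * (ua - mean_a) * (ub - mean_b) for p, ua, ub in outcomes)
    var_a = sum(p * (ua - mean_a) ** 2 for p, ua, _ in outcomes)
    var_b = sum(p * (ub - mean_b) ** 2 for p, _, ub in outcomes)
    if var_a == 0 or var_b == 0:
        return None  # degenerate case: no payoff variation for some player
    return cov / sqrt(var_a * var_b)
```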
If I were interested in thinking about this more right now, I would:
- Do some thought experiments to pin down the intuitive concept. Consider simple games where my "alignment" concept returns a clear verdict, and use these to derive functional constraints (like whether it's symmetric in the players, the range of the function, or the extreme cases); a few such simple games are worked through in the sketch below.
- See if I can get enough functional constraints to pin down a reasonable family of candidate solutions, or at least pin down the type signature.
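As a hypothetical version of that thought-experiment step, here is how the candidate sketched above behaves on three simple games under uniform mixing (again, the payoff numbers are just conventional choices):

```python
uniform = [0.5, 0.5]

# Prisoner's Dilemma (row player A's payoffs, then column player B's).
pd_a = [[3, 0], [5, 1]]
pd_b = [[3, 5], [0, 1]]

# Matching Pennies: a constant-sum game.
mp_a = [[1, -1], [-1, 1]]
mp_b = [[-1, 1], [1, -1]]

# A pure coordination (common-payoff) game.
cp_a = [[2, 0], [0, 1]]
cp_b = [[2, 0], [0, 1]]

print(alignment(pd_a, pd_b, uniform, uniform))  # ~ -0.69: between the extremes
print(alignment(mp_a, mp_b, uniform, uniform))  # -1.0: minimally aligned
print(alignment(cp_a, cp_b, uniform, uniform))  # 1.0: maximally aligned
```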
There's a difference between "the agent sometimes makes mistakes in getting what it wants" and "the agent does the literal opposite of what it wants"; in the latter case you have to wonder what the word "wants" even means any more.
My understanding is that you want to include cases like "it's a fixed-sum game, but agent B decides to be maximally aligned / cooperative and do whatever maximizes A's utility", and in that case I start to question what exactly B's utility function meant in the first place.
I'm told that *Minimal Rationality* addresses this sort of position, where you allow the agent to make mistakes, but don't allow it to be e.g. literally pessimal, since at that point you have lost the meaning of the word "preference".
(I kind of also want to take the more radical position that, when talking about abstract agents, the only meaning of preferences is "revealed preferences"; in the special case of humans we also see this totally different thing, "stated preferences", which operates at some totally different layer of abstraction, and where talking about "making mistakes in achieving your preferences" makes sense in a way that it does not for revealed preferences. But I don't think you need to take this position to object to the way it sounds like you're using the term here.)