In my experience, constant-sum games are considered to provide "maximally unaligned" incentives, and common-payoff games are considered to provide "maximally aligned" incentives. How do we quantitatively interpolate between these two extremes? That is, given an arbitrary payoff table representing a two-player normal-form game (like Prisoner's Dilemma), what extra information do we need in order to produce a real number quantifying agent alignment?
If this question is ill-posed, why is it ill-posed? And if it's not, we should probably understand how to quantify such a basic aspect of multi-agent interactions, if we want to reason about complicated multi-agent situations whose outcomes determine the value of humanity's future. (I started considering this question with Jacob Stavrianos over the last few months, while supervising his SERI project.)
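To make the two extremes concrete, here is a minimal Python sketch (the matrices and variable names are my own illustrative choices, not anything from the question itself): a constant-sum game, a common-payoff game, and a Prisoner's Dilemma sitting somewhere between them. Each array gives one player's payoff for every pair of pure actions.

```python
import numpy as np

# Prisoner's Dilemma (actions: Cooperate, Defect) -- incentives partly aligned.
pd_A = np.array([[3, 0],
                 [5, 1]])   # row player's payoffs
pd_B = pd_A.T               # symmetric game: column player's payoffs are the transpose

# Matching Pennies -- constant-sum (here zero-sum), the "maximally unaligned" extreme.
mp_A = np.array([[ 1, -1],
                 [-1,  1]])
mp_B = -mp_A

# Pure coordination game -- common-payoff, the "maximally aligned" extreme.
cp_A = np.array([[2, 0],
                 [0, 1]])
cp_B = cp_A.copy()
```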
Thoughts:
- Assume the alignment function has range [0, 1] or [-1, 1].
- Constant-sum games should have minimal alignment value, and common-payoff games should have maximal alignment value.
- The function probably has to consider a strategy profile (since different parts of a normal-form game can have different incentives; see e.g. equilibrium selection).
- The function should probably measure player A's alignment with player B, rather than a single symmetric quantity; for example, in a Prisoner's Dilemma, player A might always cooperate while player B always defects. Then it seems reasonable to say that A is aligned with B (in some sense), while B is not aligned with A (B pursues their own payoff without regard for A's).
- So the function need not be symmetric over players.
- The function should be invariant to applying a separate positive affine transformation to each player's payoffs; it shouldn't matter whether you add 3 to player 1's payoffs, or multiply the payoffs by a half.
- The function may or may not rely only on the players' orderings over outcome lotteries, ignoring the cardinal payoff values. I haven't thought much about this point, but it seems important. EDIT: I no longer think this point is important, but rather confused.
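As an illustration of what a function satisfying several of these constraints might look like, here is one simple candidate (my own sketch, with my own function and parameter names, not a proposal from the post): the correlation between the two players' payoffs under the outcome distribution induced by a mixed strategy profile. It has range [-1, 1], hits -1 on constant-sum games and +1 on common-payoff games, and is invariant to separate positive affine rescalings of each player's payoffs. Note, however, that it is symmetric in the players, so it does not capture the asymmetric "A is aligned with B but not vice versa" case above.

```python
import numpy as np

def correlation_alignment(u_A, u_B, p_row, q_col):
    """Correlation of the two players' payoffs under the outcome distribution
    induced by the mixed strategy profile (p_row, q_col).
    An illustrative candidate only: it is symmetric in the players."""
    probs = np.outer(p_row, q_col).ravel()        # probability of each outcome
    a, b = u_A.ravel().astype(float), u_B.ravel().astype(float)
    mean_a, mean_b = probs @ a, probs @ b
    cov = probs @ ((a - mean_a) * (b - mean_b))
    var_a = probs @ ((a - mean_a) ** 2)
    var_b = probs @ ((b - mean_b) ** 2)
    if var_a == 0 or var_b == 0:
        return float("nan")   # degenerate: a player is indifferent over the reachable outcomes
    return cov / np.sqrt(var_a * var_b)
```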
If I were interested in thinking about this more right now, I would:
- Do some thought experiments to pin down the intuitive concept. Consider simple games where my "alignment" concept returns a clear verdict, and use these to derive functional constraints (like symmetry in players, or the range of the function, or the extreme cases).
- See if I can get enough functional constraints to pin down a reasonable family of candidate solutions, or at least pin down the type signature.
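Continuing the sketch above, the thought-experiment step can be made concrete by encoding the intended verdicts as checks on a candidate function (here the illustrative correlation_alignment and the example matrices defined earlier; any serious candidate would be substituted in their place):

```python
# Reuses np, the payoff matrices, and correlation_alignment from the sketches above.
uniform = np.array([0.5, 0.5])   # a fully mixed strategy profile, to avoid degenerate cases

# Extreme cases pin down the ends of the range.
assert np.isclose(correlation_alignment(mp_A, mp_B, uniform, uniform), -1.0)  # constant-sum
assert np.isclose(correlation_alignment(cp_A, cp_B, uniform, uniform),  1.0)  # common-payoff

# An intermediate game should land strictly between the extremes.
pd_value = correlation_alignment(pd_A, pd_B, uniform, uniform)
assert -1.0 < pd_value < 1.0

# Invariance under a separate positive affine transformation of each player's payoffs.
assert np.isclose(correlation_alignment(2 * pd_A + 3, 0.5 * pd_B, uniform, uniform), pd_value)
```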
----
Sorry, I think I wasn't clear about what I don't understand. What is a "strategy profile (like stag/stag)"? So far as I can tell, the usual meaning of "strategy profile" is the same as that of "strategy", and a strategy in a one-shot game of stag hunt looks like "stag" or "hare", or maybe "70% stag, 30% hare"; I don't understand what "stag/stag" means here.
----
It is absolutely standard in game theory to equate payoffs with utilities. That doesn't mean that you have to do the same, of course, but I'm sure that's why Dagon said what he did, and it's why, when I was enumerating possible interpretations, that was the first one I mentioned.
(The next several paragraphs are just giving some evidence for this; I had a look on my shelves and described what I found. Most detail is given for the one book that's specifically about formalized 2-player game theory.)
"Two-Person Game Theory" by Rapoport, which happens to be the only book dedicated to this topic I have on my shelves, says this at the start of chapter 2 (titled "Utilities"):
Unfortunately, Rapoport is using the word "payoffs" to mean two different things here. I think it's entirely clear from context, though, that his actual meaning is: you may begin by specifying monetary payoffs, but what we care about for game theory is payoffs as utilities. Here's more from a little later in the chapter:
A bit later:
and:
As I say, that's the only book of formal game theory on my shelves. Schelling's Strategy of Conflict has a little to say about such games, though not in much detail, and it looks to me as if he assumes payoffs are utilities. The following sentence is informative, though it presupposes rather than states: "But what configuration of value systems for the two participants -- of the 'payoffs', in the language of game theory -- makes a deterrent threat credible?" (This is from the chapter entitled "International Strategy"; in my copy it's on page 13.)
Rapoport's "Strategy and Conscience" isn't a book of formal game theory, but it does discuss the topic, and it explicitly says: payoffs are utilities.
One chapter in Schelling's "Choice and Consequence" is concerned with this sort of game theory; he says that the numbers you put in the matrix are either arbitrary things whose relative ordering is the only thing that matters, or numbers that behave like utilities in the sense that the players are trying to maximize their expectations.
The Wikipedia article on game theory says: "The payoffs of the game are generally taken to represent the utility of individual players." (This is in the section about the use of game theory in economics and business. It does also mention applications in evolutionary biology, where the payoffs are fitnesses -- which seem to me very closely analogous to utilities, in that what the evolutionary process stochastically maximizes is something like expected fitness.)
Again, I don't claim that you have to equate payoffs with utilities; you can apply the formalism of game theory in any way you please! But I don't think there's any question that this is the usual way in which payoffs in a game matrix are understood.
----
It feels odd to me to focus on response functions, since as a matter of fact you never actually know the other player's strategy. (Aside from special cases where your opponent is sufficiently deterministic and sufficiently simple that you can "read their source code" and make reliable predictions from it. There's a bit of an LW tradition of thinking in those terms, but I think that, with the possible exception of reasoning along the lines of "X is an exact copy of me and will therefore make the same decisions as I do", it's basically never going to be relevant to real decision-making agents, because in the usual case the other player is about as complicated as you are and you don't have enough brainpower to understand even your own brain completely.)
If you are not considering payoffs to be utilities, then you need to note that knowing the other player's payoffs -- which is a crucial part of playing this sort of game -- doesn't tell you anything until you also know how those payoffs correspond to utilities, or to whatever else the other player uses to guide their decision-making.
(If you aren't treating them as utilities but are assuming that higher is better, then for many purposes that's enough. But, again, only if the other player does actually act like someone who prefers higher payoffs to lower ones.)
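A small worked example of why this matters (my own illustration, with a made-up concave utility function): two players who both prefer higher payoffs can still rank the same lottery differently once the payoffs are passed through different utility functions, so ordinal payoff information alone won't predict their choices among mixed outcomes.

```python
import numpy as np

# A sure payoff of 4 versus a 50/50 gamble between 0 and 10.
probs  = np.array([0.5, 0.5])
safe   = np.array([4.0, 4.0])
gamble = np.array([0.0, 10.0])

utilities = {
    "risk-neutral (u(x) = x)":      lambda x: x,
    "risk-averse (u(x) = sqrt(x))": lambda x: np.sqrt(x),
}

for name, u in utilities.items():
    eu_safe, eu_gamble = probs @ u(safe), probs @ u(gamble)
    choice = "gamble" if eu_gamble > eu_safe else "safe option"
    print(f"{name}: expected utilities {eu_safe:.2f} vs {eu_gamble:.2f} -> picks the {choice}")
# risk-neutral: 4.00 vs 5.00 -> picks the gamble
# risk-averse:  2.00 vs 1.58 -> picks the safe option
```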
My feeling is that you will get most insight by adopting (what I claim to be) the standard perspective where payoffs are utilities; then, if you want to try to measure alignment, the payoff matrix is the input for your calculation. Obviously this won't work if one or both players behave in a way not describable by any utility function, but my suspicion is that in such cases you shouldn't necessarily expect there to be any sort of meaningful measure of how aligned the players are.