The upshot of higher-order game theory — the nash equilibrium between optimisers is itself an optimiser!
Isn't this pretty trivial though? I guess it's still probably convenient for the math.
The observation is trivial mathematically, but it motivates the characterisation of an optimiser as something with the type-signature (X→R)→P(X).
You might instead be motivated to characterise optimisers by...
However, were you to characterise optimisers in any of the ways above, then the nash equilibrium between optimisers would not itself be an optimiser, and therefore we would lose compositionality. The compositionality is conceptually helpful because it means that your definitions/theorems reduce to the single-agent case.
Wait, I mean a quantifier in (X→R)→P(R).
If we characterise an agent with a quantifier q:(X→R)→P(R), then we're saying which payoffs the agent might achieve given each task. Namely, r∈q(u) if and only if it's possible that the agent achieves payoff r when faced with a task u:X→R.
But this definition doesn't play well with nash equilibria.
Thank you, this is incredibly interesting! Did you ever write up more on the subject? I'm excited to see how it relates to mesa-optimisation in particular.
In the finite case, where , then
Typo: I think you mean ?
I would have liked a footnote saying .
I'm confused about the last example, EDT-satisficer vs. CDT-maximizer. It says that we assume the agents choose the same options. Accordingly, it says to "filter out the option profiles such that ". But then, in the concrete case of the Prisoner's dilemma, it says the solution is instead of .
Exercise 7:
In the Prisoner's dilemma, the answer doesn't change compared to the one in the worked-out example, because in that example the satisficer was already anchoring to the best EDT solution . So the Nash equilibrium is . More generally, the first agent condition is replaced by .
4. The ψ-best-response function of g is the function B:X→P(X) defined by B(x)=∏ni=1ψi(g∘Ui(x)), where Ui is the ith deviation map
Here , right?
Yes, ∏ is the cartesian product of a family of sets. Sorry if this wasn't clear, it's standard maths notation. I don't know what the other commenter is saying.
Got it, I misunderstood the semantics of what ∏ was supposed to capture. I thought the elements needed to be mutual best-responses. Thank you for the clarification, I've updated my implementation accordingly!
I interpreted it the standard way too initially, but then I had a hunch there was... I dunno, something fishy, and then indeed it turned out @StrivingForLegibility understood it in a completely different way, so somehow it wasn't clear! Magic.
Edit: Cleo Nardo has confirmed that they intended ∏ to mean the cartesian product of sets, the ordinary thing for that symbol to mean in that context. I misunderstood the semantics of what ∏ was intended to represent. I've updated my implementation to use the intended cartesian product when calculating the best response function; the rest of this comment is my initial (wrong) interpretation of ∏.
I needed to go back to one of the papers cited in Part 1 to understand what that ∏ was doing in that expression. I found the answer in A Generalization of Nash's Theorem with Higher-Order Functionals. I'm going to do my best to paraphrase Hedges' notation into Cleo's notation, to avoid confusion.
TLDR: ∏ is picking out the set of option-profiles that are simultaneously best-responses by all players to that option-profile x. It does this by considering all of the option-profiles that can result by each player best-responding, then takes the intersection of those sets.
On page 6, Hedges defines the best response correspondence
Where
Hedges builds up the idea of Nash Equilibria using quantifiers rather than optimizers (like (X→R)→P(R) rather than (X→R)→P(X)), but I believe the approaches are equivalent. Unpacking from the inside out:
That makes ) a -task. Since , we know that .
This is where I had to go looking through papers. What sort of product takes a set of best-responses from each player, relative to a given option-profile, and returns a set of option-profiles that are simultaneously regarded by each player as a best-response? I thought about just taking the Cartesian product of the sets, but that wouldn't get us only the mutual best-responses.
Let's call the way that each player maps option-profiles to best-responses . These are exactly the sets we want to take the product of:
Hedges introduces notation on page 3 to handle the operation of taking an option-profile, varying one player's option, and leaving the rest the same. Paraphrasing, Hedges defines by
You can read as "give me a new copy of , where the th entry has been set to the value ." Hedges uses this to define the deviation maps equivalently to the way Cleo did.
The correspondences take as input an option profile, and return the set of option-profiles which are player i's optimal unilateral deviations from that option profile. To construct from , we want to map to the option-profiles which deviate from in those exact ways.
We can then use Hedges' to get the best-response correspondence! We can unpack this to get a definition of using objects that Cleo defined, using that deviation notation from Hedges:
Thank you Cleo for writing this article! This was my first introduction to Higher-Order Game Theory, and I wrote up an implementation in TypeScript to help me understand how all of the pieces fit together!
I'm weirded out by this. To look at everything together, I write the original expression, and your expression rewritten using the OP's notation:
Original:
Yours:
(I'm using the notation that a function applied to a set is the image of that set.)
So the big pi symbol stands for
So it's not a standalone operator: it's context-dependent because it pops out an implicit . The OP otherwise gives the impression of a more functional mindset, so I suspect the OP may mean something different from your guess.
Another problem with your interpretation: it yields the empty set unless all agents consider doing nothing an option. The only possible non-empty output is {x}. Reason: each set you are intersecting contains tuples with all elements equal to the ones in x, but for one. So the intersection will necessarily only contain tuples with all elements equal to those in x.
Edit: Cleo Nardo has confirmed that they intended ∏ to mean the cartesian product of sets, the ordinary thing for that symbol to mean in that context. I misunderstood the semantics of what ∏ was intended to represent. I've updated my implementation to use the intended cartesian product when calculating the best response function; the rest of this comment is based on my initial (wrong) interpretation of ∏.
I write the original expression, and your expression rewritten using the OP's notation:
Original:
Yours:
(I'm using the notation that a function applied to a set is the image of that set.)
This is a totally clear and valid rewriting using that notation! My background is in programming and I spent a couple minutes trying to figure out how mathematicians write "apply this function to this set."
I believe the way that B is being used is to find Nash equilibria, using Cleo's definition 6.5:
Like before, the ψ-nash equilibria of g is the set of option-profiles x such that x∈B(x).
These are going to be option-profiles where "not deviating" is considered optimal by every player simultaneously. I agree with your conclusion that this leads B(x) to take on values that are either ∅ or {x}. When B(x)=∅, this indicates that x is not a Nash equilibrium. When B(x)={x}, we know that x is a Nash equilibrium.
Oh I see now, B just needs to work to pinpoint Nash equilibria, I did not make that connection.
But anyway, the reason I'm suspicious of your interpretation is not that your math is not correct, but that it makes the OP notation so unnatural. The unnatural things are:
So I guess I will stay in doubt until the OP confirms "yep I meant that".
B(x)≠∅ isn't equivalent to x being Nash.
Suppose Alice and Bob are playing prisoner's dilemma. Then the best-response function is nonempty at every option-profile. But only one option-profile is nash.
x∈B(x) is equivalent to x being Nash.
Written during the SERI MATS program under the joint mentorship of John Wentworth, Nicholas Kees, and Janus.
Motivation
I am surrounded, incessantly, by other people, possessing a range of different capabilities, striving for goals both common and opposing — and, what is more, they are incessantly surrounded by me. Fangs pierce the cartesian boundary. Outsideness leaks in. Insideness leaks out. It is the source of all joy and misery in my life.
Anyway. It's the interaction of n≥2 agents that characterises game theory.[1] Classical game theory says that when two utility-maximisers interact in a shared environment, they will choose options in nash equilibrium, where a profile of options is in nash equilibrium if no player can increase their utility by unilaterally changing their option.
What does higher-order game theory (which dispenses with utility functions and maximisation) say about the interaction between agents? Would a satisficer cooperate with a quantiliser in prisoner's dilemma?
Let's find out.
Preliminaries
All you need to know is Definition 3 from [Part 1], and some basic classical game theory.
Let's review the textbook definition of classical simultaneous games.
Note that the fact that the option-profiles are in nash equilibrium follows logically from the assumption that each player's choice is utility-maximising for the task they face, plus the assumption that, if the option-profile is x∈X, then the ith player faces the task πi∘g∘Ui(x):Xi→R.
It doesn't require the assumption that the players have "common knowledge" of each other's rationality, or that the players have any kinds of beliefs whatsoever. That being said, an economist might explain the optimality of the choices by assuming that each player is rational and knows all the relevant facts (i.e. the rules of the game, the rationality of their peers, and the beliefs of their peers). However, an evolutionary biologist might appeal to natural selection to explain optimality. An ML engineer might appeal to SGD, etc.
Higher-Order Nash Equilibrium
By inspection, we can see that at no point does the definition of the classical nash equilibrium appeal to any special properties of R or argmaxX. This suggests that we can effortlessly generalise the definition of nash equilibrium to any family of optimisers, rather than only utility-maximisers.
The intuition is this —
This definition applies even to unusual cases —
We obtain the classical simultaneous game when R=Rn and ψi=argmaxXi(πi∘−)∈JP(Xi,Rn), and in this case the higher-order nash equilibrium coincides with the classical nash equilibrium.
The higher-order nash equilibrium allows us to answer somewhat esoteric questions —
Many well-studied variations of the classical nash equilibrium can be seen as the higher-order nash equilibrium between non-classical optimisers. For example, epsilon-equilibrium is precisely the higher-order nash equilibrium between argmax-ϵ-slackXi(πi∘−).[4]
Optimisation is nash-sum closed
There's a nice upshot of higher-order game theory — the nash equilibrium between optimisers is itself an optimiser!
Suppose that two agents are modelled by ψA∈JP(A,R) and ψB∈JP(B,R). We can combine them into a single function which assigns, to each game g:A×B→R, the set of {ψA,ψB}-nash equilibria of the game g. This function has type-signature (A×B→R)→P(A×B), so it corresponds to a single optimiser with option space (A×B) and payoff space R.
This gives us a binary operator on optimisers, ⊕:JP(A,R)×JP(B,R)→JP(A×B,R), given by (a,b)∈(ψA⊕ψB)(g) ⟺ a∈ψA(g∘U1(a,b)) and b∈ψB(g∘U2(a,b)).
This operator is both associative and commutative, justifying the name "nash sum".
For example, the nash sum (argmaxA(π1∘−)⊕argmaxB(π2∘−)):(A×B→R×R)→P(A×B) will calculate the classical nash equilibria of a generic payoff matrix g:A×B→R×R.
The operator can be extended to any family of optimisers, i.e. ⨁i∈I:∏i∈IJP(Xi,R)→JP(∏i∈IXi,R), where x∈(⨁i∈Iψi)(g) ⟺ x∈∏i∈Iψi(g∘Ui(x)).
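For finite option sets, the nash sum can be computed directly. Here is a minimal TypeScript sketch under my own naming conventions (`Optimiser`, `argmaxCoord`, `nashSum`, and the coordination-game example are illustrative choices, not from the post), restricted to two players sharing a two-element option set and payoff space R×R:

```typescript
// Options and payoffs for a finite two-player game with payoff space R×R.
type Option = "H" | "T";
type Payoff = [number, number];
const options: Option[] = ["H", "T"];

// An optimiser in JP(X, R): it maps a task (X -> R) to a subset of X.
type Optimiser = (task: (o: Option) => Payoff) => Option[];

// argmax of the i-th payoff coordinate, i.e. argmax(π_i ∘ −).
const argmaxCoord = (i: 0 | 1): Optimiser => (task) => {
  const best = Math.max(...options.map((o) => task(o)[i]));
  return options.filter((o) => task(o)[i] === best);
};

// Nash sum ψ_A ⊕ ψ_B: keep the profiles (a, b) where each player's option
// is optimal for the task induced by their unilateral deviation map.
function nashSum(
  psiA: Optimiser,
  psiB: Optimiser,
  g: (a: Option, b: Option) => Payoff,
): [Option, Option][] {
  const equilibria: [Option, Option][] = [];
  for (const a of options)
    for (const b of options)
      if (psiA((x) => g(x, b)).includes(a) && psiB((y) => g(a, y)).includes(b))
        equilibria.push([a, b]);
  return equilibria;
}

// A pure coordination game: both players score exactly when they match.
const coordination = (a: Option, b: Option): Payoff =>
  a === b ? [1, 1] : [0, 0];

// Two classical nash equilibria: (H, H) and (T, T).
console.log(nashSum(argmaxCoord(0), argmaxCoord(1), coordination));
```

Note that `nashSum` returns a set of option-profiles, matching the type-signature (A×B→R)→P(A×B) of a single optimiser on the product option space.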
Terminology:
Totality is not nash-sum closed
Just because two optimisers ψa∈JP(A,R) and ψb∈JP(B,R) are total does not imply that their nash sum (ψa⊕ψb)∈JP(A×B,R) is total.[5] In other words, we may have a model of an agent which works perfectly in all solitary situations, but breaks down when many such agents interact in a shared environment.
The textbook example is Matching Pennies. Each player must choose either heads or tails — player 1 wins if the coins match and player 2 wins if they don't. If both players are utility-maximisers then there are no nash equilibria, even though utility-maximisers are total optimisers.
Formally speaking, A={H,T}, B={H,T}, and u:A×B→R×R,(x,y)↦(δxy,1−δxy).[6] Then (argmaxA(π1∘−)⊕argmaxB(π2∘−))(u)=∅, although argmaxA(π1∘−)(w)≠∅ and argmaxB(π2∘−)(w)≠∅ for all w:{H,T}→R×R.
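The failure of totality can be checked mechanically. A self-contained TypeScript sketch of Matching Pennies (helper names are mine):

```typescript
// Matching Pennies: each optimiser is total, but their nash sum is not.
type Coin = "H" | "T";
type Payoff = [number, number];
const coins: Coin[] = ["H", "T"];
type Optimiser = (task: (c: Coin) => Payoff) => Coin[];

// argmax of the i-th payoff coordinate.
const argmaxCoord = (i: 0 | 1): Optimiser => (task) => {
  const best = Math.max(...coins.map((c) => task(c)[i]));
  return coins.filter((c) => task(c)[i] === best);
};

// u(x, y) = (δ_xy, 1 − δ_xy): player 1 wins on a match, player 2 on a mismatch.
const u = (x: Coin, y: Coin): Payoff => (x === y ? [1, 0] : [0, 1]);

// Every solitary task has a maximiser, so both optimisers are total ...
const p1Total = coins.every((y) => argmaxCoord(0)((x) => u(x, y)).length > 0);
const p2Total = coins.every((x) => argmaxCoord(1)((y) => u(x, y)).length > 0);

// ... yet no option-profile survives both unilateral-deviation checks.
const equilibria: [Coin, Coin][] = [];
for (const x of coins)
  for (const y of coins)
    if (argmaxCoord(0)((a) => u(a, y)).includes(x) &&
        argmaxCoord(1)((b) => u(x, b)).includes(y))
      equilibria.push([x, y]);

console.log(p1Total, p2Total, equilibria); // true true []
```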
This shows why we couldn't have defined optimisers as functions with type-signature (X→R)→P∗(X) where P∗(X) denotes the set of non-empty subsets of X. In order to ensure nash-sum closure of optimisers, we needed to include non-total optimisers.
Utility-maximisation is not nash-sum closed.
It is well-known that the nash sum of n utility-maximisers isn't a utility-maximiser for any n≥2. For many games, none of the nash equilibria are even pareto-optimal!
The textbook case is the prisoner's dilemma. Let B={C,D} and consider two players argmaxB(π1∘−) and argmaxB(π2∘−) which respectively value the first and second coordinate of the payoff. Their nash sum is the optimiser in JP(B2,R2) which calculates the classical nash equilibria of 2-by-2 payoff matrices u:B2→R2. In particular, when u:B2→R2 is the payoff matrix defined below, then (argmaxB(π1∘−)⊕argmaxB(π2∘−))(u)={(D,D)}. However, both players prefer the payoff u(C,C)=(2,2) to u(D,D)=(1,1).
Consequentialism is not nash-sum closed.
In fact, the nash sum of two utility-maximisers isn't even consequentialist![7]
Suppose Angelica is babysitting her cousin Tommy, and each child has been given a cookie by their grandmother. If Angelica is awake while Tommy sleeps, then she can steal both cookies for herself. If Angelica sleeps while Tommy is awake, then he'll cause mayhem, getting both children in trouble. If Angelica and Tommy both stay asleep, or if they both wake up, then they'll each keep one cookie.
The payoffs u:{WA,SA}×{WT,ST}→R×R are summarised in the matrix below.
When Angelica and Tommy are both awake, this is nash — she doesn't want to sleep because then he'll cause mayhem, and he doesn't want to sleep because then she'll steal his cookie. In contrast, when they're both sleeping, this isn't nash — she can wake up and steal his cookie. But the same outcome will result whether Angelica and Tommy are both sleeping or both awake. So the nash sum of Angelica and Tommy isn't a consequentialist optimiser![8]
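This can be verified mechanically. A TypeScript sketch, with one caveat: the story pins down every payoff except mayhem, so the mayhem payoff (0,0) below is my assumption:

```typescript
// The cookie game: Angelica and Tommy each choose Wake or Sleep.
type Move = "W" | "S";
type Payoff = [number, number];
const moves: Move[] = ["W", "S"];

// (W,W) and (S,S): one cookie each. (W,S): Angelica steals both cookies.
// (S,W): mayhem — assumed here to cost both children their cookies.
const u = (angelica: Move, tommy: Move): Payoff => {
  if (angelica === "W" && tommy === "S") return [2, 0];
  if (angelica === "S" && tommy === "W") return [0, 0]; // assumed mayhem payoff
  return [1, 1];
};

// argmax of the i-th payoff coordinate.
const argmaxCoord = (i: 0 | 1) => (task: (m: Move) => Payoff): Move[] => {
  const best = Math.max(...moves.map((m) => task(m)[i]));
  return moves.filter((m) => task(m)[i] === best);
};

// Nash sum of the two children: keep the mutually-optimal profiles.
const equilibria: [Move, Move][] = [];
for (const a of moves)
  for (const t of moves)
    if (argmaxCoord(0)((x) => u(x, t)).includes(a) &&
        argmaxCoord(1)((y) => u(a, y)).includes(t))
      equilibria.push([a, t]);

// (W,W) is nash but (S,S) is not, even though u(W,W) = u(S,S) = (1,1).
// The nash sum is sensitive to more than the payoff: not consequentialist.
console.log(equilibria);
```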
This shows why we couldn't have defined optimisers as functions with type-signature (X→R)→P(R), which bakes-in the assumption that all optimisers are consequentialist. In order to ensure the nash-sum closure of optimisers, we needed to include nonconsequentialist optimisers.[9]
Beyond CDT?
Warning: This section is a speculative extension of the existing literature, it's merely a proof-of-concept of how higher-order game theory would extend to non-CDT agents.
Let u:{C,D}×{C,D}→R×R be the game defined by the payoff matrix below. We may readily check that (argmax{C,D}⊕argmax{C,D})(u)={(D,D)}, which corresponds to the well-known result that two utility-maximisers will defect in the prisoners' dilemma.
Now, some decision theorists think that it's not generally true that two rational utility-maximisers will defect in the prisoners' dilemma — e.g. if they're both perfect replicas of one another then they'll both cooperate! The reasoning is this — cooperation from the first prisoner would count as evidence (to the first player) of high utility, and defection from the first prisoner would count as evidence of low utility. This corresponds to the observation that π1∘u(C,C)>π1∘u(D,D). So the first player should cooperate. Ditto the second player.
So what went wrong?
It all comes down to an ambiguity with the task u:X→R. What relationship is the task u supposed to capture exactly? The formalism itself is agnostic.
So let's explain our answer (argmax{C,D}⊕argmax{C,D})(u)={(D,D)}, which agrees with CDT. We can trace the problem to the deviation maps Ui:X→(Xi→X) in the definition of ⊕.
The map Ui(x) is the causal dependence of the overall option-profile on the ith player's choice. Hence, when the game u:X→R is the causal dependence of the overall payoff on the option-profile, it follows that the task u∘Ui(x):Xi→R is the causal dependence of the payoff on the ith player's choice. This is why we got the CDT answer — causation composed with causation is causation.
According to EDT, however, the task is supposed to capture the evidential dependence of the payoff on the player's choice. And when the game u:X→R is the evidential dependence of the overall payoff on the option-profile, it isn't true that the task u∘Ui(x):Xi→R is the evidential dependence of the payoff on the ith player's choice. Evidence composed with causation isn't evidence.
To achieve the EDT answer, we need a function UEi(x):Xi→X which captures the evidential dependence of the option-profile on the ith player's choice. In the case that all the players are replicas, UEi(x):Xi→X,α↦(α,…,α). If we replace Ui:X→(Xi→X) in the definition of ⨁ with UEi:X→(Xi→X), then we get a different binary operator ⨁E which yields the EDT prediction, namely that (argmaxA⊕EargmaxB)(u)={(C,C)}.
It seems that we have another free parameter in our model! We can then replace the family of deviation maps Ui:X→(Xi→X) with an arbitrary family of non-causal deviation maps Di:X→(Xi→X), and thereby encode information about each player's idiosyncratic decision theory. If the ith player is CDT then Di=Ui, and if the ith player is EDT then Di=UEi.
Suppose we have a payoff space R, a family of sets {Xi}i∈I with X=∏i∈IXi, a family of optimisers {ψi}i∈I where ψi∈JP(Xi,R), and a family of decision theories {Di}i∈I where Di:X→(Xi→X), and a game is a map g:X→R. Then we can define the non-CDT best-response function of g to be the function B:X→P(X), x↦∏i∈Iψi(g∘Di(x)), and the non-CDT nash equilibria of g to be the set {x∈X:x∈B(x)}. Or something like that.
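The deviation maps really are a free parameter that one can swap out in code. A TypeScript sketch of the prisoner's dilemma under CDT and EDT deviation maps; the off-diagonal payoffs (0,3) and (3,0) are my assumptions (the text fixes only (2,2) and (1,1)), and the helper names are mine:

```typescript
// Non-CDT equilibria: swap the causal deviation maps U_i for other maps D_i.
type Option = "C" | "D";
type Profile = [Option, Option];
type Payoff = [number, number];
const options: Option[] = ["C", "D"];

// A deviation map D_i(x): X_i -> X.
type Deviation = (x: Profile) => (o: Option) => Profile;

// CDT: vary one coordinate, hold the other fixed.
const cdt1: Deviation = ([, b]) => (a) => [a, b];
const cdt2: Deviation = ([a]) => (b) => [a, b];
// EDT with perfect replicas: my choice is evidence of everyone's choice.
const edt: Deviation = () => (o) => [o, o];

// argmax of the i-th payoff coordinate.
const argmaxCoord = (i: 0 | 1) => (task: (o: Option) => Payoff): Option[] => {
  const best = Math.max(...options.map((o) => task(o)[i]));
  return options.filter((o) => task(o)[i] === best);
};

// Equilibria x with x ∈ B(x), where B is built from the deviation maps.
function equilibria(g: (x: Profile) => Payoff, d1: Deviation, d2: Deviation) {
  const out: Profile[] = [];
  for (const a of options)
    for (const b of options) {
      const x: Profile = [a, b];
      if (argmaxCoord(0)((o) => g(d1(x)(o))).includes(a) &&
          argmaxCoord(1)((o) => g(d2(x)(o))).includes(b))
        out.push(x);
    }
  return out;
}

// Prisoner's dilemma; the off-diagonal payoffs are assumed values.
const pd = ([a, b]: Profile): Payoff =>
  a === b ? (a === "C" ? [2, 2] : [1, 1]) : a === "C" ? [0, 3] : [3, 0];

console.log(equilibria(pd, cdt1, cdt2)); // CDT players defect: (D, D)
console.log(equilibria(pd, edt, edt));   // EDT replicas cooperate: (C, C)
```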
With this machinery in place, we can answer even sillier questions —
Recap
Pretty cool!
Next time...
By tomorrow, I will hopefully have posted about mesa-optimisation, sequential choice, and non-possibilistic models of agents.
The study of 0-player games is called automata theory (in the discrete case) and dynamics (in the continuous case), while the study of 1-player games is called optimisation.
Recall that JP(X,R) denotes the set of functions of type (X→R)→P(X).
That is, if u:X→R and ψ∈JP(X,R), then ψ(u)⊆X.
See Aumann's 1964 paper Markets with a Continuum of Traders for a game with uncountably many players.
Recall that argmax-ϵ-slack∈JP(X,R) is the optimiser which maximises the function u up to some fixed slack ϵ>0.
That is, x∈argmax-ϵ-slack(u)⟺∀x′∈X.u(x′)≤u(x)+ϵ.
Recall that an optimiser ψ∈JP(X,R) is total if ψ(u)≠∅ for all u:X→R.
δxy is the Kronecker delta function: δxy=1 if x=y, and 0 otherwise.
Recall that an optimiser ψ∈JP(X,R) is consequentialist if ψ=λu:X→R.{x∈X|u(x)∈q(u)} for some q:(X→R)→P(R).
In other words, for any task u:X→R, if u(x)=u(x′) then x∈ψ(u)⟺x′∈ψ(u).
This condition says that, once we know the agent's task, then the optimality of a particular choice is determined by its payoff.
Maybe there's something deep here about group-level deontology arising from individual-level consequentialism.
Warning: There is a remark that one regularly encounters in metaethics: moral consequentialism is tautological because we can consider the choice of the decision-maker as a consequence of the decision itself, ensuring that any decision-rule is consequentialist.
This metaethical remark corresponds to the following fact in higher-order decision theory: if a nonconsequentialist optimiser ψ∈JP(X,R) faces the task u:X→R then they will choose the same options as the consequentialist optimiser ψ(πR∘−)∈JP(X,X×R) facing the task u′:X→X×R,x↦(x,u(x)). It follows that whether an agent is a consequentialist heavily depends on which physical consequences of the decision we are tracking.
Recall that a satisficer might choose any option x∈X which dominates some fixed option s∈X, i.e. x∈satisfices(u)⟺u(s)≤u(x). The option s is called the anchor point.