Aydin Mohseni is a philosopher of science and Bayesian epistemologist specializing in evolutionary game theory and complexity theory. His research focuses on questions in metascience and AI safety.
Currently, he is an assistant professor in the Department of Philosophy and a core member of the Institute for Complex Social Dynamics at Carnegie Mellon University.
Some years ago, he was a Peace Corps volunteer in Morocco, stationed near Azrou.
His Erdős number is 3 along the path: Brian Skyrms → Persi Diaconis → Paul Erdős.
That’s exactly right. Results showing that low-rationality agents don’t always converge to a Nash equilibrium (NE) do not provide a compelling argument against the thesis that high-rationality agents do or should converge to NE. As you suggest, to address this question, one should directly model high-rationality agents and analyze their behavior.
We’d love to write another post on the high-rationality road at some point and would greatly appreciate your input!
Aumann & Brandenburger (1995), “Epistemic Conditions for Nash Equilibrium,” and Stalnaker (1996), “Knowledge, Belief, and Counterfactual Reasoning in Games,” provide good analyses of the conditions for NE play in strategic games of complete and perfect information.
For games of incomplete information, Kalai and Lehrer (1993), “Rational Learning Leads to Nash Equilibrium,” demonstrate that when rational agents are uncertain about one another’s types, but their priors are mutually absolutely continuous, Bayesian learning guarantees in-the-limit convergence to Nash play in repeated games. These results establish a generous range of conditions—mutual knowledge of rationality and mutual absolute continuity of priors—that ensure convergence to a Nash equilibrium.
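A toy simulation can illustrate the flavor of this convergence result. To be clear, this is a simple fictitious-play-style sketch, not the Kalai–Lehrer construction: two myopic Bayesian players repeatedly play a 2×2 pure coordination game, each modeling the other as playing i.i.d. with an unknown probability, holding a full-support Beta prior over that probability, and best-responding to the posterior mean. The full-support priors stand in, very loosely, for the mutual absolute continuity condition.

```python
# Toy sketch: myopic Bayesian learners in a 2x2 pure-coordination
# game (payoff 1 on a match, 0 otherwise). Each player holds a
# Beta(alpha, beta) prior over the probability that the opponent
# plays action "A" and best-responds to the posterior mean.

def best_response(alpha, beta):
    """Best-respond to the posterior mean Pr(opponent plays A)."""
    p_a = alpha / (alpha + beta)
    return "A" if p_a >= 0.5 else "B"

def play(rounds=50, priors=((2, 1), (2, 1))):
    (a1, b1), (a2, b2) = priors  # Beta parameters for each player
    history = []
    for _ in range(rounds):
        act1, act2 = best_response(a1, b1), best_response(a2, b2)
        history.append((act1, act2))
        # Each player updates their Beta posterior on the other's action.
        a1, b1 = (a1 + 1, b1) if act2 == "A" else (a1, b1 + 1)
        a2, b2 = (a2 + 1, b2) if act1 == "A" else (a2, b2 + 1)
    return history

history = play()
# With these priors, play locks into the pure Nash equilibrium (A, A)
# from the first round on.
```

With both priors tilted toward A, each player best-responds with A, each observation reinforces the belief, and play settles immediately into a pure Nash equilibrium.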
However, there are subtle limitations to this result. Foster & Young (2001), “On the Impossibility of Predicting the Behavior of Rational Agents,” show that in near-zero-sum games with imperfect information, agents cannot learn to predict one another’s actions and, as a result, do not converge to Nash play. In such games, mutual absolute continuity of priors cannot be satisfied.
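The same style of learner gives a feel for why zero-sum-like games are different (again, only a loose illustration in the spirit of the Foster–Young point, not their construction): in matching pennies, deterministic best responses to Bayesian point predictions chase each other and never settle into a fixed action profile.

```python
# Toy sketch: the same myopic Bayesian learners, now in matching
# pennies. Player 1 wants to match player 2's action; player 2
# wants to mismatch player 1's. Each holds a Beta prior over the
# probability that the opponent plays "H" and best-responds to a
# point prediction from the posterior mean.

def predict_h(alpha, beta):
    """Posterior-mean probability that the opponent plays H."""
    return alpha / (alpha + beta)

def play_pennies(rounds=500, priors=((2, 1), (2, 1))):
    (a1, b1), (a2, b2) = priors  # Beta parameters for each player
    history = []
    for _ in range(rounds):
        # Player 1 plays its prediction; player 2 plays its opposite.
        act1 = "H" if predict_h(a1, b1) >= 0.5 else "T"
        act2 = "T" if predict_h(a2, b2) >= 0.5 else "H"
        history.append((act1, act2))
        a1, b1 = (a1 + 1, b1) if act2 == "H" else (a1, b1 + 1)
        a2, b2 = (a2 + 1, b2) if act1 == "H" else (a2, b2 + 1)
    return history

history = play_pennies()
# Play cycles through action profiles rather than converging to one.
```

Each player's attempt to exploit its prediction of the other keeps changing what the other will do next, so realized play cycles rather than converging to a fixed profile, in keeping with the failure-of-prediction phenomenon above.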