Today we have a number of approaches to coordination---we sign contracts, create firms with multiple shareholders, vote in democracies, and so forth. I think the starting point for multiple AIs interacting with multiple humans is:
(And regardless of what happens on the object level, AIs and humans will continue improving our methods for cooperation/governance/oversight.)
When I think about this topic I'm mostly interested in ways that this "default" falls short (or may be unrealistic/impractical).
AI may also facilitate new forms of cooperation; those might be needed to cope with new challenges or the new pace introduced by AI, or may result in an improvement over the status quo. Some versions of this:
Whether or not (or for however long) the "default" is adequate to avoid existential catastrophe, it seems useful to treat AI as an opportunity to improve our coordination. In some sense "most" of that work will presumably be done by AI systems, but doing the work ourselves may unlock those benefits much earlier. That may be critically important if the transition to AI creates a ton of chaos before we have AI systems that are much better than us at designing new cooperative arrangements. (This is fairly similar to the situation with alignment, where we could also wait to delegate the problem to AI, but that may be too late to avoid trouble.)
I think both sets of bullets (multi-multi (eco?)systems either replicating cooperation-etc-as-we-know-it or making new forms of cooperation etc.) are important; I think I'll call them prosaic cooperation and nonprosaic cooperation, respectively, going forward. When I say "cooperation etc." I mean cooperation, coordination, competition, negotiation, and compromise.
You've provided crisp scenarios, so thanks for that!
In some sense "most" of that work will presumably be done by AI systems, but doing the work ourselves may unlock those benefits much earlier.
But ...
I haven't read Critch in depth, so I can't guarantee I'm pointing towards the same concept he is. Consider this a bit of an impromptu intuition dump; it might be trivial. No claims to originality for any of these thoughts, and epistemic status: ¯\_(ツ)_/¯
The way I currently think about it is that multi-multi is the "full hard problem", and single-single is a particularly "easy" (still not easy) special case.
In a way we're making some simplifying assumptions in the single-single case: that we have one (pseudo-Cartesian) "agent" with some kind of definite (or at least boundedly complicated) values that can be expressed. This means we kind of have "just" the usual problems of a) expressing/extracting/understanding the values, insofar as that is possible (outer alignment), and b) making sure the agent actually fulfills those values (inner alignment).
Multiple principals then relax this assumption: we don't have a "single" utility function but several, which introduces another "necessary ingredient": some kind of social-choice-theoretic "synthesis function" that can take in all the individual functions and spit out a "super utility function" representing some morally acceptable amalgamation of them (whatever that means). The single-principal case is a simpler special case in which the synthesis function is the equivalent of the identity function, but that no longer works if you have multiple inputs.
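To make the "synthesis function" idea slightly more concrete, here is a minimal sketch in Python. Everything in it is illustrative and made up by me rather than taken from Critch or anyone else: the names are arbitrary, and the weighted sum is just one of many possible aggregation rules that social choice theory studies (and picking a morally acceptable rule is exactly the hard part).

```python
from typing import Callable, Optional, Sequence

# A utility function maps an outcome (whatever that is) to a real-valued score.
UtilityFn = Callable[[object], float]

def synthesize(utilities: Sequence[UtilityFn],
               weights: Optional[Sequence[float]] = None) -> UtilityFn:
    """Amalgamate several principals' utility functions into one "super
    utility function". A weighted sum is only one possible rule; which rule
    counts as morally acceptable is the open question, not this code."""
    if weights is None:
        weights = [1.0] * len(utilities)

    def combined(outcome: object) -> float:
        return sum(w * u(outcome) for w, u in zip(weights, utilities))

    return combined

# With a single principal there is nothing to aggregate, so synthesis
# degenerates to (a scaled copy of) the identity on that one function.
alice_utility: UtilityFn = lambda outcome: float(len(str(outcome)))
single_case = synthesize([alice_utility])
assert single_case("some outcome") == alice_utility("some outcome")
```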
In a very simplistic sense, multi is "harder" because we are introducing an additional "degree of freedom". So you might argue we have outer alignment, inner alignment and "even-more-outerer alignment" or "multi-outer alignment" (which would be the synthesis problem), and you probably have to make hard (potentially irreconcilable) moral choices for at least the latter (probably for all).
In multi-multi, if the agents serve (or have different levels of alignment towards) different subsets of principals, this adds the further difficulty of game theory between the different agents and how they should coordinate. We can call that the "multi-inner alignment problem" or something: the question of how to get the collection of competing agents to be "inner aligned" and not blow everything up or get stuck in defect-defect spirals or whatever. (This reminds me a lot of what CLR works on.)
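For the defect-defect worry, the standard toy example is the prisoner's dilemma; a small sketch with made-up payoff numbers (this is just the textbook game, not anything specific to CLR's research agenda):

```python
# Two agents, each serving a different principal, choose Cooperate ("C") or
# Defect ("D"). Payoffs are (row agent, column agent); the numbers are
# arbitrary but have the usual prisoner's-dilemma structure.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def row_best_response(column_move: str) -> str:
    """The move that maximizes the row agent's own payoff, holding the
    column agent's move fixed."""
    return max(("C", "D"), key=lambda move: payoffs[(move, column_move)][0])

# Defecting is the row agent's best response to either move (and symmetrically
# for the column agent), so (D, D) is the unique equilibrium even though both
# principals would prefer (C, C). "Multi-inner alignment" would be whatever
# lets the agents avoid landing there.
assert row_best_response("C") == "D"
assert row_best_response("D") == "D"
```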
Tbh I'm not sure whether single-multi would be harder than, or different from, single-single just "applied multiple times". Maybe if the agents have different ideas of what the principal wants they could compete, though that seems like a failure of outer alignment; or maybe it would be better cast as a kind of failure of "multi-inner alignment".
So in summary, I think solutions (insofar as such a thing even exists in an objective fashion, which it may or may not) to the multi-multi problem are a superset of solutions to multi-single, single-multi, and single-single. Vaguely: outer alignment = normativity/value learning, inner alignment = principal-agent problem, multi-outer alignment = social choice, multi-inner alignment = game theory, and you need to solve all four to solve multi-multi. If you make certain simplifying assumptions, which correspond to introducing "singles", you can ignore one or more of these (e.g. a single agent doesn't need game theory, a single principal doesn't need social choice).
Or something. Maybe the metaphor is too much of a stretch and I'm seeing spurious patterns.
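If it helps, here's the claim written out as a 2x2 grid in code form (purely illustrative; the labels are my own ad hoc names, not anyone's official terminology):

```python
def required_subproblems(n_principals: int, n_agents: int) -> set[str]:
    """Which of the four sub-problems (per the rough mapping above) you
    can't assume away, given how many principals and agents there are."""
    problems = {"outer alignment (normativity / value learning)",
                "inner alignment (principal-agent problem)"}
    if n_principals > 1:
        problems.add("multi-outer alignment (social choice)")
    if n_agents > 1:
        problems.add("multi-inner alignment (game theory)")
    return problems

# single-single needs two sub-problems solved; multi-multi needs all four.
assert len(required_subproblems(1, 1)) == 2
assert len(required_subproblems(5, 5)) == 4
```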
I wrote out the 2x2 grid you suggested in MS Paint.
I'm not sure I'm catching how multi-inner alignment is game theory. That said, "GT is the mesa- of SCT" is an interesting, reasonable (to me) claim that is sort of blowing my mind as I contemplate it.
Critch's comments support an opinion I've held since I started thinking seriously about alignment: that the language we use to describe it is too simple, and ignores the fact that "human" interests (the target of alignment) are not the monolith they're usually presented as.
For your specific question about multi-multi, I only have limited access to the memeplex, so I'll just share my thoughts. Multi-multi delegation involves:
1. Compromise / resolution of conflicts of interest between delegators.
2. Mutual trust in delegators regarding communication of interests to delegatees.
3. Equitable control between delegators. This could be lumped in with conflicts of interest, but deserves special attention.
4. Capacity for communication and cooperation between delegatees.
... and some other aspects I haven't thought of. As far as I can see, though, the most important issues here would be addressed by consideration of single-multi and multi-single; multi-multi-specific problems will only be relevant when there are obstacles to communication between either delegators or delegatees (a conceivable future problem, but not a problem as long as the complexity of systems actually being constructed is limited).
Thanks! Trust, compromise, and communication are all items in Dafoe et al. 2020, if you're interested in exploring. I agree that primitive forms of these issues are present in multi-single and single-multi, but it's not clear to me whether we should think of solving these primitive forms and then solving some sort of extension to multi-multi, or of attacking problems that are unique to multi-multi directly. It's just not clear to me which of those better reflects the nature of what's going on.
I think multi-multi is really hard to think about. One of the first steps I'm taking to get less confused about it is to scrutinize claims or opinions that I've encountered in the wild.
Context
Critch & Krueger 2020 primarily discuss delegation, which is described as "when some humans want something done, those humans can delegate responsibility for the task to one or more AI systems." (p19). Delegation is in fact the composition of three subproblems; comprehension ("the human ability to understand how an AI system works and what it will do"), instruction ("the human ability to convey instructions to an AI system regarding what it should do"), and control ("the human ability to retain or regain control of a situation involving an AI system, especially in cases where the human is unable to successfully comprehend or instruct the AI system via the normal means intended by the system’s designers"). The four flavors of delegation are single-(human principal)/single-AI-(system), single-(human principal)/multi-AI-(systems), multi-(human principals)/single-AI-(system), and multi-(human principals)/multi-AI-(systems).
Naturally, the traditional "alignment problem" is roughly single-single delegation, especially single-single control. Aspects of single-multi can be made sense of in light of Dafoe et al. 2020, and aspects of multi-single in light of Baum 2020, but it's difficult to find even minimal footholds in multi-multi.
Here are some notes from Critch about what makes multi-multi problematic:
Claims, opinions
I have two; I'd like your help in expanding this list.
If you have an opinion of your own that you don't think qualifies as "in the memeplex", I hope you share it anyway! I'm also really happy if you pontificate in the answers about intuitions you have or bottlenecks you see. In general, fitting your answer into my project is my problem, not yours. This is also an invitation to DM me what confuses you about multi-multi, why you think it might be hard or easy, etc.