trying my best
I think logarithmic dependence on epsilon will be difficult, especially for problem one. To demonstrate this, consider the problem of approximating the maximum eigenvalue of a sparse (let's say m = O(n)) PSD matrix with trace at most n. The obvious algorithms to try aren't fast enough:
Intuitively, power iteration-style methods exploit sparsity by relying exclusively on matrix-vector products, which yields slow convergence: the iteration count grows polynomially in 1/epsilon rather than logarithmically.
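As a concrete illustration, here's a minimal power-iteration sketch (the function name, the numpy/scipy setup, and the rough log(n)/epsilon iteration budget are my own choices, not part of the problem statement). Each iteration is a single O(m) sparse matvec, and the number of iterations needed for an epsilon-accurate estimate of the top eigenvalue scales polynomially in 1/epsilon:

```python
import numpy as np
import scipy.sparse as sp

def power_iteration_lambda_max(A, eps=1e-3, seed=0):
    """Estimate lambda_max of a sparse PSD matrix A using only matvecs.

    Each iteration costs O(m) (m = number of nonzeros); the number of
    iterations needed scales like poly(1/eps), not log(1/eps).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    est = 0.0
    # Gap-free heuristic budget: roughly log(n)/eps iterations.
    for _ in range(int(np.ceil(np.log(max(n, 2)) / eps))):
        w = A @ v                # O(m) sparse matrix-vector product
        est = float(v @ w)       # Rayleigh quotient estimate of lambda_max
        norm_w = np.linalg.norm(w)
        if norm_w == 0:
            break
        v = w / norm_w
    return est

# Illustrative usage on a random sparse PSD matrix:
# B = sp.random(1000, 1000, density=0.003, format="csr")
# A = B @ B.T
# print(power_iteration_lambda_max(A, eps=1e-2))
```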
To argue that problem one is difficult, I claim that one natural candidate completion is the all-zero completion, resulting in a sparse matrix. This suggests (but does not prove!) that the problem of determining existence of PSD completions requires determining whether the resulting sparse completion is PSD - equivalently, whether its minimum eigenvalue is nonnegative. The reduction to the max eigenvalue problem then follows by taking spectral shifts ($A \mapsto cI - A$ for $c \geq \lambda_{\max}(A)$).
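Spelling the shift out (standard linear algebra, nothing specific to the post):

```latex
% For any symmetric A and any real c, the eigenvalues of cI - A are c - lambda_i(A), so
\lambda_{\max}(cI - A) = c - \lambda_{\min}(A)
\qquad\Longrightarrow\qquad
\lambda_{\min}(A) \ge 0 \iff \lambda_{\max}(cI - A) \le c.
% Choosing c >= lambda_max(A) (e.g. a Gershgorin bound) also makes cI - A PSD,
% and cI - A differs from -A only on the diagonal, so it stays sparse.
```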
It's unclear to me whether problem two has a similar dependence, but given the similarity of the problems I'm skeptical that logarithmic dependence on epsilon is possible. If anyone has ideas for a similar reduction, I'd love to discuss!
(comment based on work co-authored w @Somsubhro Bagchi)
note: Som and I have discussed a similar objection with @paulfchristiano, who seemed OK with solutions whose convergence is polynomial in 1/epsilon but which remain O(mn) in the "typical case" (e.g. for problem 1, with enough precision to determine (non)existence of PSD completions). After constraining all diagonal elements of A to be 1, we're somewhat optimistic that such weaker convergence is attainable.
better: the problem can be phrased as a semidefinite programming feasibility problem with m linear constraints; I have no idea whether there are (randomized?) algorithms that run fast enough
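A minimal sketch of that formulation, using cvxpy with a generic SDP solver (the function name and the dict-of-observed-entries input format are hypothetical, and a generic solver is of course nowhere near fast enough - this is just to pin down the feasibility problem):

```python
import cvxpy as cp

def psd_completion_feasible(n, observed):
    """Decide feasibility of a PSD completion as an SDP.

    `observed` is a dict {(i, j): value} of known entries (hypothetical
    input format). Returns True iff the solver finds a PSD matrix X
    matching the observed entries.
    """
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0]  # PSD constraint
    for (i, j), v in observed.items():
        constraints.append(X[i, j] == v)   # one linear constraint per known entry
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# Tiny example: complete [[1, ?], [?, 1]] -- any off-diagonal value in [-1, 1] works.
# print(psd_completion_feasible(2, {(0, 0): 1.0, (1, 1): 1.0}))
```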
(possibly trivial) observation: the generalization replacing "entries" with "linear constraints" seems almost as easy? I'd strongly suspect impossibility of the former implies impossibility of the latter (thus showing equivalence)
re: (1), you might be able to phrase the problem as convex optimization in the unknown parameters, which is probably \tilde{O}(n^2)
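One reading of this (my interpretation, not necessarily the commenter's): treat the unfilled entries as variables x and maximize lambda_min(A(x)), which is concave in x, so the problem is convex; a PSD completion exists iff the optimum is nonnegative. The sketch below uses a generic derivative-free scipy optimizer purely to make the formulation concrete - it makes no claim to the \tilde{O}(n^2) running time, and the names and input formats are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def max_lambda_min_completion(base, unknown):
    """Maximize lambda_min(A) over the unknown entries of a symmetric matrix.

    `base`: symmetric ndarray with known entries filled in, zeros elsewhere.
    `unknown`: list of (i, j) index pairs (i <= j) for the free entries.
    Since lambda_min is concave in the entries, this is a convex problem;
    a PSD completion exists iff the optimum is >= 0 (up to solver error).
    """
    def assemble(x):
        A = base.copy()
        for (i, j), v in zip(unknown, x):
            A[i, j] = A[j, i] = v
        return A

    # lambda_min is concave but nonsmooth; Powell is used here only as a
    # sketch (a subgradient or smoothing method would be the principled choice).
    objective = lambda x: -np.linalg.eigvalsh(assemble(x))[0]
    res = minimize(objective, np.zeros(len(unknown)), method="Powell")
    return -res.fun

# Example: complete [[1, ?], [?, 1]]; the optimum is lambda_min = 1 at x = 0.
# print(max_lambda_min_completion(np.eye(2), [(0, 1)]))
```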
Upon reflection, I now suspect that the Impact is analogous to Shapley Value. In particular, the post could be reformulated using Shapley values and would attain similar results. I'm not sure whether Impact-scarcity of Shapley values holds, but the examples from the post suggest that it does.
(thanks to TurnTrout for pointing this out!)
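For reference (the analogy itself is my conjecture above, not something established in the post), the standard Shapley value of player i in a cooperative game (N, v) is

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}\,
  \bigl( v(S \cup \{i\}) - v(S) \bigr)
% i.e. player i's marginal contribution, averaged over all orderings of the players.
```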
Answering questions one-by-one:
We conjecture this? So far we've only handled limiting cases (proven for constant-sum, strongly suspected for common-payoff), but we're still working on formulating a more general claim.
Thank you so much for the comments! I'm pretty new to the platform (and to EA research in general), so feedback is useful for getting a broader perspective on our work.
To add to TurnTrout's comments about power-scarcity and the CCC, I'd say that the broader vision of the multi-agent formulation is to establish a general notion of power-scarcity as a function of "similarity" between players' reward functions (I mention this in the post's final notes). In this paradigm, the constant-sum case is one limiting case of "general power-scarcity", which I see as the "big idea". As a simple example, general power-scarcity would provide a direct motivation for fearing robustly instrumental goals, since we'd have reason to believe an AI with goals orthogonal(ish) to human goals would be incentivized to compete with humanity for Power.
We're planning to continue investigating multi-agent Power and power-scarcity, so hopefully we'll have a more fleshed-out notion of general power-scarcity in the months to come.
Also, re: "as players' strategies improve, their collective Power tends to decrease", I think your intuition is correct? Upon reflection, the effect can be explained reasonably well by "improving your actions has no effect on your Power, but a negative effect on opponents' Power".
This comment proposes a strategy for (2) that I'll call Active Matrix Completion (AMC), which allows querying elements of AA^T given some cost/budget. In our case, each query takes O(n) time, so we can afford O(m) total queries.
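To make the query model concrete: entry (i, j) of AA^T is the inner product of rows i and j of A, so with A stored row-wise each query costs O(nnz(row i) + nnz(row j)) = O(n). A minimal sketch (function name mine):

```python
import scipy.sparse as sp

def query_AAT(A_csr, i, j):
    """One AMC query: return (A A^T)[i, j] for a CSR matrix A.

    This is the inner product of rows i and j of A, costing
    O(nnz(row i) + nnz(row j)) = O(n) per query.
    """
    row_i = A_csr.getrow(i)
    row_j = A_csr.getrow(j)
    return float(row_i.multiply(row_j).sum())

# With an O(m) query budget, we can afford O(m) such lookups in O(mn) total time.
```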
Some notes:
PROPOSITION: given a graph G = (V, E) with n nodes and m edges, can we efficiently (in O(mn) time) find a small chordal completion of G (one with at most Cm edges, for a universal constant C)?
The proposition seems "too good to be true", especially since minimum chordal completion (minimum fill-in) is a hard computational problem (NP-complete, with no great approximations known), so I'm most interested in easy counterexamples. For instance, a sparse graph all of whose chordal completions are large would refute the proposition.
So, uh, any graph theory experts around?
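Not an answer, but a cheap way to poke at candidate counterexamples: greedy minimum-degree elimination produces some chordal completion of any graph (with no guarantee of Cm fill edges or O(mn) time, so it doesn't settle the proposition either way). A sketch, assuming a dict-of-neighbour-sets graph format:

```python
def min_degree_fill(adj):
    """Greedy minimum-degree elimination; returns the set of fill edges.

    `adj` maps each vertex to its set of neighbours (undirected graph).
    The original edges plus the returned fill edges form a chordal graph
    (any elimination ordering yields a chordal completion); the min-degree
    heuristic just tries to keep the fill small.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    fill = set()
    remaining = set(adj)
    while remaining:
        v = min(remaining, key=lambda u: len(adj[u]))  # min-degree vertex
        nbrs = list(adj[v])
        # Turn v's neighbourhood into a clique; new edges are fill edges.
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                x, y = nbrs[a], nbrs[b]
                if y not in adj[x]:
                    adj[x].add(y)
                    adj[y].add(x)
                    fill.add((min(x, y), max(x, y)))
        # Eliminate v.
        for u in nbrs:
            adj[u].discard(v)
        remaining.remove(v)
    return fill

# Example: a 5-cycle needs 2 fill edges to become chordal.
# C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
# print(len(min_degree_fill(C5)))  # -> 2
```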