Earlier this year ARC posted a prize for two matrix completion problems. We received a number of submissions we considered useful, but no complete solutions. We are closing the contest and awarding the following partial prizes:

  • $500 to Elad Hazan for solving a related problem and pointing us to this paper.
  • $500 to Som Bagchi and Jacob Stavrianos for their analysis in this comment.
  • $500 to Shalev Ben-David for a reduction to computing the gamma 2 norm.

Our main update from running this prize is that these problems are hard and there’s probably not a simple solution we are overlooking. My current guess is that it’s possible to achieve a polynomial dependence on the precision $\varepsilon$, but not the logarithmic dependence we desired; even this weaker result seems like it will be challenging.

Thanks to everyone who took time to think about this problem.

What this means for ARC

In this section I’ll try to briefly describe the relationship between these problems and heuristic estimators. I’ll use the context and notation from this talk. I don’t expect this discussion to be detailed enough to be meaningful to anyone who doesn’t already have a lot of context on ARC’s work, and I think most readers should wait to engage until we publish a more extensive research update next year.

One of ARC’s main activities this year has been refining our goals for heuristic estimators by finding algorithms, finding evidence for hardness, and clarifying what properties are actually needed for our desired alignment applications. This contest was part of that process.

In early 2023 ARC hoped to find an estimator $G$ such that for any matrix $M$ and any argument $\pi$, the heuristic estimate $G(v^\top M v \mid \pi)$ would be a non-negative quadratic function of $v$. The two problems we proposed are very closely related to achieving this goal in the special case where $\pi$ computes a sparse set of entries of $M$. We now expect that it will be algorithmically difficult to ensure that $G(v^\top M v \mid \pi)$ is a non-negative quadratic; as a result, we don't expect this property to be satisfied by the kind of natural heuristic estimator we're looking for.
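
As a toy illustration of this special case (entirely hypothetical and of my own devising, not ARC's estimator): given a sparse set of known entries, one could fill the unknown entries with zeros and project onto the PSD cone, which forces the resulting quadratic form to be non-negative in every direction by construction, though it ignores the structure of the argument $\pi$:

```python
import numpy as np

def toy_psd_estimate(known_entries, n):
    """Toy sketch (not ARC's estimator): fill the unknown entries of a
    symmetric n x n matrix with 0, then project onto the PSD cone by
    clipping negative eigenvalues, so that v -> v @ M_hat @ v is a
    non-negative quadratic function of v by construction."""
    M = np.zeros((n, n))
    for (i, j), val in known_entries.items():
        M[i, j] = M[j, i] = val
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.clip(w, 0.0, None)) @ Q.T  # PSD by construction

# Sparse known entries; |M[0,1]| > 1 means no completion agreeing with them is PSD.
known = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): 2.0}
M_hat = toy_psd_estimate(known, n=3)
print(np.linalg.eigvalsh(M_hat).min() >= -1e-9)  # True: non-negative quadratic form
```

Note that the projection necessarily moves the known entries, which is exactly the kind of tension that makes the real problem hard.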

We made a related update based on another result: Eric Neyman proved that, unless standard complexity-theoretic assumptions fail, there is no fast estimator that satisfies our other desiderata together with the property that $G(f(x) \mid \pi) \geq 0$ whenever $\pi$ proves that $f(x) \geq 0$ for all $x$. Instead, the best we can hope for is that $G(f(x) \mid \pi) \geq 0$ whenever $\pi$ is a proof that $f(x) \geq 0$ for a particular value of $x$.
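
Writing $G(\cdot \mid \pi)$ for a heuristic estimator and $\pi$ for a formal proof (notation reconstructed from context), the contrast between the two properties can be stated as:

```latex
% Ruled out for fast estimators:
\pi \text{ proves } \big(\forall x:\ f(x) \ge 0\big)
  \;\Longrightarrow\; G(f(x) \mid \pi) \ge 0
% Still plausibly attainable:
\pi \text{ proves } f(x_0) \ge 0
  \;\Longrightarrow\; G(f(x_0) \mid \pi) \ge 0
```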

We now expect to make a similar relaxation for these matrix completion problems. Rather than requiring that $G(v^\top M v \mid \pi)$ is non-negative for all vectors $v$, we can instead require that $G(v^\top M v \mid \pi, v)$ is non-negative whenever $\pi$ proves that $v^\top M v \geq 0$ for the particular vector $v$. We don't expect $G(v^\top M v \mid \pi, v)$ to be a quadratic function of $v$ because of the appearance of $v$ on the right-hand side of the conditioning bar.

We still expect $G(v^\top M v \mid \pi)$ to be a quadratic function of $v$ (this follows from linearity) and therefore to correspond to some completion $\tilde{M}$ of $M$. However, we no longer expect $\tilde{M}$ to be PSD. Instead, all we can say is that we don't yet know any direction $v$ such that $v^\top \tilde{M} v < 0$. The completion $\tilde{M}$ will change each time we consider a particular direction $v$, after which it will be guaranteed that $v^\top \tilde{M} v \geq 0$.
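
This "completion that responds to challenges" dynamic can be sketched as follows (a hypothetical toy update of my own, not ARC's procedure; among other things, a real scheme would also have to keep the known entries of $M$ fixed): whenever a direction $v$ with $v^\top \tilde{M} v < 0$ is exhibited, apply a rank-one correction that restores non-negativity in that direction.

```python
import numpy as np

def respond_to_direction(M_tilde, v):
    """Toy sketch: if the current completion fails non-negativity in
    direction v, add a rank-one correction -q * v v^T / (v.v)^2 where
    q = v @ M_tilde @ v, so that afterwards v @ M_tilde @ v == 0 >= 0.
    (A real scheme would also need to preserve the known entries of M.)"""
    q = v @ M_tilde @ v
    if q < 0:
        M_tilde = M_tilde - q * np.outer(v, v) / (v @ v) ** 2
    return M_tilde

M = np.array([[1.0, 2.0], [2.0, 1.0]])  # symmetric completion, but not PSD
v = np.array([1.0, -1.0])
print(v @ M @ v)  # -2.0: v witnesses the failure of PSD-ness
M2 = respond_to_direction(M, v)
print(v @ M2 @ v)  # 0.0: non-negativity is now guaranteed in direction v
```

The correction changes $\tilde{M}$ globally, so a later challenge direction may again find a negative value, matching the picture above in which the completion is revised each time a direction is considered.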
