I thought it'd be especially interesting to get critiques/discussion from the LW crowd, because the claims here seem antithetical to a lot of the beliefs people here have, mostly around just how capable and cognizant transformers are/can be.
The authors argue that transformers are guaranteed to suffer from compounding errors when performing any computation that requires long reasoning chains.
From the abstract: "In an attempt to demystify Transformers, we investigate the limits of these models across three representative compositional tasks—multi-digit multiplication, logic grid puzzles, and a classic dynamic programming problem. These tasks require breaking problems down into sub-steps and synthesizing these steps into a precise answer. We formulate compositional tasks as computation graphs to systematically quantify the level of complexity, and break down reasoning steps into intermediate sub-procedures. Our empirical findings suggest that Transformers solve compositional tasks by reducing multi-step compositional reasoning into linearized subgraph matching, without necessarily developing systematic problem solving skills."
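To make the compounding-error claim concrete, here's a toy back-of-the-envelope sketch (my own illustration, not the paper's formal result), assuming each reasoning step succeeds independently with some fixed probability:

```python
# Toy illustration (my assumption, not the paper's exact model): if a model
# executes each reasoning step correctly with independent probability p,
# the chance an n-step chain is correct end-to-end is p**n, which decays
# exponentially in the number of steps.

def chain_accuracy(per_step_accuracy: float, num_steps: int) -> float:
    """Probability that every step in an n-step chain is correct."""
    return per_step_accuracy ** num_steps

for p in (0.99, 0.999):
    for n in (10, 100, 1000):
        print(f"per-step accuracy {p}, {n} steps -> {chain_accuracy(p, n):.3f}")
```

Even at 99% per-step accuracy, a 100-step chain comes out fully correct only about a third of the time, which is the flavor of exponential degradation the authors point to.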
To a non-trivial extent, it vindicates the LLM skeptics of recent fame, like Gary Marcus and Yann LeCun, and it suggests the path for LLM capabilities is much more constrained than we used to believe.
This is both good and bad:
The biggest good thing about this, combined with the Twitter talk on LLMs, is that it makes timelines quite a bit longer. In particular, Daniel Kokotajlo's model becomes very difficult to sustain without truly ludicrous progress or a switch to other types of AI.
The biggest potentially bad thing is that algorithmic progress, and to a lesser extent a change of paradigms, becomes more important. This complicates AI governance: any adversarial pressure on LLMs is yet another force pushing AI progress, and while I don't subscribe to the standard views on what will happen as a result of that, it does make governance harder.