chinchilla's wild implications
(Colab notebook here.)

This post is about language model scaling laws, specifically the laws derived in the DeepMind paper that introduced Chinchilla.[1]

The paper came out a few months ago, and has been discussed a lot, but some of its implications deserve more explicit notice in my opinion. In particular:

* Data, not size, is the currently active constraint on language modeling performance. Current returns to additional data are immense, and current returns to additional model size are minuscule; indeed, most recent landmark models are wastefully big.
  * If we can leverage enough data, there is no reason to train ~500B param models, much less 1T or larger models.
  * If we have to train models at these large sizes, it will mean we have encountered a barrier to exploitation of data scaling, which would be a great loss relative to what would otherwise be possible.
* The literature is extremely unclear on how much text data is actually available for training. We may be "running out" of general-domain data, but the literature is too vague to know one way or the other.
* The entire available quantity of data in highly specialized domains like code is woefully tiny, compared to the gains that would be possible if much more such data were available.

Some things to note at the outset:

* This post assumes you have some familiarity with LM scaling laws.
* As in the paper[2], I'll assume here that models never see repeated data in training.
  * This simplifies things: we don't need to draw a distinction between data size and step count, or between train loss and test loss.
* I focus on the parametric scaling law from the paper's "Approach 3," because it provides useful intuition (see the code sketch after this list).
  * Keep in mind, though, that Approach 3 yielded somewhat different results from Approaches 1 and 2 (which agreed with one another, and were used to determine Chinchilla's model and data size).
  * So you should take the exact numbers below with a grain of salt. They may be off.
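To make the "Approach 3" law concrete, here is a minimal Python sketch of it. The constants are the fitted values reported in the paper; the function name and the example call are just my own illustration:

```python
# Parametric loss law from the Chinchilla paper's "Approach 3":
#     L(N, D) = E + A / N^alpha + B / D^beta
# N = parameter count, D = training tokens (each seen exactly once).
E = 1.69        # "irreducible" loss: the entropy of natural text itself
A = 406.4       # fitted coefficient for the model-size term
B = 410.7       # fitted coefficient for the data term
ALPHA = 0.34    # fitted exponent for model size
BETA = 0.28     # fitted exponent for data

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted loss for a model with n_params trained on n_tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Chinchilla itself (70B params, 1.4T tokens) comes out around 1.94.
print(chinchilla_loss(70e9, 1.4e12))
```

Note how the two reducible terms compare: at Chinchilla's own scale, the data term B / D^β contributes roughly twice as much loss as the model-size term A / N^α, which is the quantitative sense in which data, not size, is the active constraint.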