Training a LoRA has a negligible cost compared to pre-training a full model, because it updates only 1.5% to 7% of the parameters (per https://ar5iv.labs.arxiv.org/html/2502.16894#A6.SS1) and trains on thousands to millions of tokens instead of trillions.
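For intuition, here's a quick back-of-the-envelope sketch (my own illustration with made-up dimensions, not numbers from the linked paper) of the trainable-parameter fraction a rank-r LoRA adds on a single weight matrix:

```python
# A LoRA adapter on a weight matrix W of shape (d_out, d_in) trains two small
# matrices A (r x d_in) and B (d_out x r) instead of W itself, so the trainable
# fraction for that matrix is r * (d_in + d_out) / (d_in * d_out).

def lora_fraction(d_in: int, d_out: int, r: int) -> float:
    """Fraction of a single matrix's parameters that a rank-r LoRA trains."""
    return r * (d_in + d_out) / (d_in * d_out)

# Hypothetical 4096x4096 projection with rank 64:
print(lora_fraction(4096, 4096, 64))  # 0.03125, i.e. ~3% -- within the 1.5-7% range
```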
Serving different LoRAs for the same base model in large batches is also very much possible with current technology (even if not without some challenges), and OpenAI offers its finetuned models for just 1.5-2x the cost of the originals: https://docs.titanml.co/conceptual-guides/gpu_mem_mangement/batched_lora_inference
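To make the "different LoRAs in one batch" point concrete, here is a minimal toy sketch (assumed shapes, not the implementation from the linked TitanML guide): the dense matmul against the shared base weight is done once for the whole batch, and each request only adds its own low-rank correction.

```python
import torch

d, r, n_adapters, batch = 512, 8, 3, 4
W = torch.randn(d, d)                     # shared base weight
A = torch.randn(n_adapters, r, d) * 0.01  # per-adapter LoRA A matrices
B = torch.randn(n_adapters, d, r) * 0.01  # per-adapter LoRA B matrices

x = torch.randn(batch, d)                 # one token per request, for simplicity
adapter_id = torch.tensor([0, 2, 1, 0])   # which LoRA each request uses

base = x @ W.T                            # one shared dense matmul for the whole batch
# per-request low-rank delta: B_i @ (A_i @ x), gathered by adapter id
delta = torch.einsum('bdr,brk,bk->bd', B[adapter_id], A[adapter_id], x)
y = base + delta
```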
You probably don't need continual learning for a tech-support use case. I suspect you might need it for a task so long that the whole reasoning chain doesn't fit into your model's effective context length (which is shorter than the advertised one). On such tasks inference is going to be comparatively costly simply because of the test-time scaling required, and users could be incentivized with discounts or limited free use if they agree to let their dialogs be used to improve the model.
What makes you (and the author) think ML practitioners won't start finetuning/RL'ing on partial reasoning traces during the reasoning itself, if that becomes necessary? Nothing in current LLM architectures technically prevents it, and IIRC Gwern has said he expects that to happen eventually.
hire a bunch of random bright-ish people and get them to spin up LLM-wrapper startups in-house (so that you own 100% stake in them).
I doubt it's really feasible. These startups would require significant infusions of capital, so the AI companies' CEOs and CFOs would have a say in how they develop. But tech CEOs and CFOs have no idea how development in other industries works and why it is slow, so they would mismanage such startups.
P. S. Oh, and I also realized the other day: whether you are an AI agent or just a human, imagine the temptation to organize a Theranos-type fraud if the details of your activity are mostly secret and you only report to tech bros who believe in the power of AGI/ASI!
Google could still sell those if there's so much demand
Sell to whom, competing cloud providers? That makes no sense; Lamborghini doesn't sell its best engines to Ferrari, or vice versa!
Also, this whole discussion misses that inference is much easier than training, both hardware- and software-wise, while it was expected long ago that at some point the market for the former would become comparable to, and then larger than, the market for the latter.
Is it possible Meta just trained on bad data while Google and DeepSeek trained on good data? See my two comments here: https://www.lesswrong.com/posts/Wnv739iQjkBrLbZnr/meta-releases-llama-4-herd-of-models?commentId=KkvDqZAuTwR7PCybB
I'm afraid you might have missed the core thesis of my comment, so let me reword: I'm arguing that one should not extrapolate that paper's findings to what Meta is training now.
The Llama 4 model card says the herd was trained on "[a] mix of publicly available, licensed data and information from Meta’s products and services. This includes publicly shared posts from Instagram and Facebook and people’s interactions with Meta AI": https://github.com/meta-llama/llama-models/blob/main/models/llama4/MODEL_CARD.md. To borrow a term from information theory, these posts probably have a much lower factual density than the curated web text in C4. There's no public information on how fast the loss goes down even on the first epoch of this kind of data, let alone on several.
I generated a slightly more structured write-up of my argument and edited it manually; hope it will be useful.
Let's break down the extrapolation challenge:
Conclusion: directly applying the quantitative findings (e.g., "up to 4 epochs is fine", RD* ≈ 15) to the Llama 4 Behemoth scale and potential data mix is highly speculative.
The "Data Wall" Concern: even if Meta could have repeated data based on the 2023 paper's principles, they either chose not to (perhaps due to internal experiments showing it wasn't effective at their scale/data mix) or they are hitting a wall where even 30T unique tokens isn't enough for the performance leap expected from a 2T parameter compute-optimal model, and repeating isn't closing the gap effectively enough.
P. S.
Also, check out https://www.reddit.com/r/LocalLLaMA; they are very disappointed with how bad the released models turned out to be (yeah, I know that's not directly indicative of Behemoth's performance).
Muennighoff et al. (2023) studied data-constrained scaling on C4 up to 178B tokens, while Meta presumably included all the public Facebook and Instagram posts and comments. Even ignoring the two-OOM difference and the architectural dissimilarity (e.g., some experts might overfit earlier than the research on dense models suggests; perhaps routing should take that into account), common sense strongly suggests that training twice on, say, a Wikipedia paragraph is much more useful than training twice on posts by Instagram models, and especially on the comments under those (which are often as alike as two peas in a pod).
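For reference, the quantitative result I keep citing is the paper's effective-data fit; I'm reproducing it from memory, so treat the exact form as approximate:

$$D' = U_D + U_D R_D^* \left(1 - e^{-R_D / R_D^*}\right),$$

where $U_D$ is the number of unique tokens, $R_D$ is the number of repetitions beyond the first epoch, and the fitted $R_D^* \approx 15$ encodes how quickly repeated tokens lose value. That fit was made on C4-scale curated text, which is exactly why I don't trust it for Instagram comments.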
Since physics separated from natural philosophy in Newton's time, it has almost always[1] progressed when new experimental data uncovered deficiencies in the then-current understanding of the universe. During the Cold War an unprecedentedly large amount of money was invested in experimental physics, and by the late 20th century all the reasonably low-hanging fruit had been picked (in the meantime, experiments have become absurdly expensive and difficult). I have also written on the topic at https://www.lesswrong.com/posts/CCnycGceT4HyDKDzK/a-history-of-the-future-2025-2040?commentId=KtusJZLAFDt4PW65R and in the thread below; check it out.
As for string theory in particular, it represents just one significant school of thought, very popular in the US, but other theories share the same problem of lacking experimental data to test against.
Also, the body of knowledge in physics has become so large that local progress made here and there is no longer really visible in the grand scheme of things, even if it's worth a Nobel Prize (whereas during the Second Industrial Revolution a single discovery could, figuratively speaking, establish a new branch of science).
Two notable exceptions that, IMHO, kind of support the rule are Maxwell's equations and general relativity.
I don't think pure mathematics makes a good parallel. There are still discoveries made by single mathematicians or very small research groups, but that hasn't really been the case in physics since about the mid-20th century, when the US and USSR invested lots of money in modern large-scale research done by huge groups.
I think regionalisms are better approached systematically, as there is a ton of scientific literature on this and even a Wikipedia article with an overview: https://en.wikipedia.org/wiki/American_English_regional_vocabulary (same for accents: https://en.wikipedia.org/wiki/North_American_English_regional_phonology, but that might require a fundamental study of English phonology).