Papers about LLM scaling law projections make no attempt to estimate the base rate of technological progress.

In particular, they make no attempt to estimate the base rate of false alarms in new science and technology. If you study any hard science (as opposed to computer science), you realise how high this rate is. False alarms are the norm, and older scientists grow cynical partly because of how many false alarms from newer scientists they have to debunk. By "false alarm" I mean someone claiming a new technology will be world-changing, only for it to later turn out that the result was faked, or suffered from experimental error, or doesn't work outside lab conditions, or doesn't scale, or is too expensive, or fails for some other reason.
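To see why the base rate matters, here is a minimal Bayes'-rule sketch in Python (all the numbers are invented purely for illustration, not estimates of anything): when false alarms produce bold claims about as readily as real breakthroughs do, hearing a bold claim barely moves the probability that the technology is real.

```python
# Minimal Bayes'-rule sketch of why the false-alarm base rate matters.
# Every number here is invented purely for illustration.

prior_real = 0.05          # assumed base rate: 5% of "world-changing" claims pan out
p_claim_if_real = 0.9      # a real breakthrough almost always produces a bold claim
p_claim_if_false = 0.9     # ...but so does a false alarm

posterior_real = (p_claim_if_real * prior_real) / (
    p_claim_if_real * prior_real + p_claim_if_false * (1 - prior_real)
)
print(f"P(real | bold claim) = {posterior_real:.2f}")  # 0.05: the claim barely moves the needle
```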

Anyone remember LK99? That's the norm for people inside the space; it only blew up because it's not the norm for normies on Twitter.

2 comments:

AI isn't really a new technology though, right? Do you have evidence of alarmism around AI in the past?

And do you have anecdotes of intelligent/rational people being alarmist about a technology where the alarm turned out to be false?

I think these pieces of evidence/anecdotes would strengthen your argument.

What is your estimated timeline for humanity's extinction if it continues on its current path?

What information are you using as the foundation of your beliefs about the progress of science & technology?

I definitely agree that specific examples would make the argument much stronger. At the very least, they would let me understand what kind of "false alarms" we are talking about here: mere tech hype (such as cold fusion), or specifically humanity-destroying events (such as nuclear war)?

I don't think we have had many things that genuinely threatened to destroy humanity. Maybe it's just my ignorance speaking, but nuclear war is the only example that comes to my mind. (Global warming, although possibly a great disaster, is not by itself an extinction threat to all of humanity.) And mere tech hypes that never threatened to destroy humanity don't seem like a relevant reference class for AI danger.

Perhaps more importantly, with things like LK99 or cold fusion, the only source of hype was people enthusiastically writing papers. With AI, the situation is more like: anyone can use (for free, if only a few times a day) a technology that would have been considered sci-fi five years ago. The controversy is about how far and how fast it will go, but there is no doubt that it is already here... and even if, somehow magically, the state of AI never improved beyond where it is today, we would still have a few more years of social impact as more people learned to use it and found new ways to use it.

EDIT: By "sci-fi" I mean: imagine creating a robotic head that uses speech recognition and synthesis to communicate with humans, uploading the latest LLM into it, and sending it via a time machine five or ten years into the past. Or rather, sending thousands of such robotic heads. People would be totally scared (and not just because of the time travel). And finding out that the robotic heads often hallucinate would only calm them down a little.