The primary purpose of this blog post is to create a public record. Some technical insights are deliberately omitted.
I think we're in an algorithmic AI overhang.
- Today's neural networks (including GPT) are so data-inefficient that they will never strictly outcompete the human brain across all domains (excluding robotics, which is hardware-limited), no matter how much data we feed them or how large we scale them.
- The human brain uses a radically different core learning algorithm that scales much better as a function of its training data size.
- The human brain's core learning algorithm could be written down in a handful of scientific papers comparable in length and complexity to Einstein's Annus Mirabilis papers.
- Once the mathematics behind the human brain's learning algorithms is made public, those algorithms will be running at scale on silicon computers within 10 years.
- Within 10 years of these algorithms running at scale, they will be cheap enough for a venture-backed startup to run them at a level outstripping the smartest human alive, assuming civilization lasts that long. (A world war could destroy the world's semiconductor fabricators.)
I've been thinking about this idea for a long time. What finally pushed me to publish were two fire alarms in sequence. First, a well-respected industry leader in AI stated in a private conversation that he believed we were algorithmically limited. Second, Steven Byrnes wrote this post. The basilisk is out of Pandora's Box.
Part 2 here
Maybe worth pointing out that "hardware overhang" is a pretty old (>10 years) and well-known term that afaik was not coined by Steven Byrnes. So your title must be confusing to quite a lot of people.