In [Prediction] We are in an Algorithmic Overhang, I made technical predictions without much explanation. In this post I explain my reasoning. This prediction is contingent on there not being a WWIII or equivalent disaster disrupting semiconductor fabrication.
I wouldn't be surprised if an AI takes over the world in my lifetime. The idea makes me uncomfortable. I question my own sanity. At first I think "no way could the world change that quickly". Then I remember that technology is advancing exponentially. The world is changing faster than it ever has before, and the pace is accelerating.
Superintelligence is possible. The laws of physics demand it. If superintelligence is possible then it is inevitable. Why haven't we built one yet? There are four[1] candidate limitations:
- Data. We lack sufficient training data.
- Hardware. We lack the ability to push atoms around.
- Software. The core algorithms are too complicated for human beings to code.
- Theoretical. We're missing one or more major technical insights.
We're not limited by data
There is more data available on the Internet than in the genetic code plus the lifetime experience of a single human being.
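As a rough back-of-envelope illustration (every number below is a ballpark assumption; the sensory-bandwidth and Internet-size figures in particular are only order-of-magnitude guesses):

```python
# Rough order-of-magnitude comparison. All constants are ballpark
# assumptions, not measurements.
GENOME_BITS = 3.1e9 * 2                   # ~3.1 billion base pairs at 2 bits each (~0.8 GB)
SENSORY_BITS_PER_SEC = 1e7                # generous guess at raw human sensory bandwidth
LIFETIME_SECONDS = 30 * 365 * 24 * 3600   # ~30 years of experience
LIFETIME_BITS = SENSORY_BITS_PER_SEC * LIFETIME_SECONDS  # ~1e16 bits (~1 petabyte)

INTERNET_BITS = 60e21 * 8                 # tens of zettabytes of stored data worldwide (assumed)

human_total = GENOME_BITS + LIFETIME_BITS
print(f"genome + lifetime experience ≈ {human_total:.1e} bits")
print(f"data on the Internet         ≈ {INTERNET_BITS:.1e} bits")
print(f"ratio                        ≈ {INTERNET_BITS / human_total:.0e}")
```

Even with generous assumptions on the human side, the Internet comes out several orders of magnitude larger.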
We're not (yet) limited by hardware
This is controversial, but I believe throwing more hardware at existing algorithms won't bring them to human level.
I don't think we're limited by our ability to write software
I suspect that the core learning algorithm of human beings could be written up in a handful of scientific papers comparable in length and complexity to Einstein's Annus Mirabilis papers. I can't prove this. It's just gut instinct. If I'm wrong and the core learning algorithms of human beings are too complicated to write in a handful of scientific papers, then superintelligence will not be built by 2121.
Porting a mathematical algorithm to a digital computer is straightforward. Individual inputs like snake detector circuits can be learned by existing machine learning algorithms and fed into the core learning algorithm.
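A minimal sketch of what I mean, in PyTorch (the detector and the downstream learner here are toy stand-ins, not a claim about actual cortical circuitry): a separately trained feature detector is frozen, and its outputs become inputs to a distinct core learner.

```python
import torch
import torch.nn as nn

# Toy stand-in for a learned "snake detector": a small conv net trained
# elsewhere (here left randomly initialised purely for illustration).
detector = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in detector.parameters():
    p.requires_grad = False  # treat the detector as a fixed input module

# Placeholder for the "core learning algorithm": a trainable head that
# consumes the detector's outputs.
core = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
optimiser = torch.optim.Adam(core.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 32, 32)    # dummy image batch
labels = torch.randint(0, 2, (4,))    # dummy labels

optimiser.zero_grad()
features = detector(images)           # detector output feeds the core learner
loss = loss_fn(core(features), labels)
loss.backward()
optimiser.step()
```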
We are definitely limited theoretically
We don't know how mammalian brains work.
I don't think there's a big difference in fundamental architecture between human brains and, say, mouse brains. Humans do have specialized brain regions for language, like Broca's area, but I expect language comprehension would be easy to solve if we had an artificial mouse brain running on a computer.
Figuring out how mammalian brains work would constitute a disruptive innovation. It would rewrite the rules of machine learning overnight. The instant such an algorithm became public, it would start a race to superintelligent AI.
What happens next depends on the algorithm. If it can be scaled efficiently on CPUs and GPUs then a small group could build the first superintelligence. If the hardware requirements are large enough, it might be possible to restrict AGI to nation-states the way private ownership of nuclear weapons is regulated. I think such a future is possible but unlikely. More precisely, I predict with >50% confidence that the algorithm will run efficiently enough on CPUs or GPUs (or whatever we have on the shelf) for a venture-backed startup to build a superintelligence on off-the-shelf hardware, even though specialized hardware would be far more efficient.
[1] A fifth explanation is that we're good at pushing atoms around, but our universal computers are too inefficient to run a superintelligence because the algorithms behind superintelligence run badly on the von Neumann architecture. This is a variant on the idea of being hardware-limited. While plausible, I don't think it's very likely, because universal computers are universal: ANNs may not (always) run efficiently on them, but ANNs do run on them. ↩︎
“… do BCI's mean brainwashing for the good of the company? I think most people wouldn't want to work for such a company.”
I think this is a mistake lots of people make when considering potentially dystopian technology: assuming that dangerous developments can only happen if they’re imposed on people by some outside force. Most people in the US carry tracking devices with them wherever they go, not because of government mandate, but simply because phones are very useful.
Adderall use is very common in tech companies, esports gaming, and other highly competitive environments. Directly manipulating reward/motivation circuits is almost certainly far more effective than Adderall. I expect the potential employees of the sort of company I discussed would already be using BCIs to enhance their own productivity, and it’s a relatively small step to enhancing collaborative efficiency with BCIs.
The subjective experience for workers using such BCIs is probably positive. Many of the straightforward ways to increase workers’ productivity seem fairly desirable. They’d be part of an organisation they completely trust and that completely trusts them. They’d find their work incredibly fulfilling and motivating. They’d have a great relationship with their co-workers, etc.
Brain-to-brain politicking is of course possible, depending on the implementation. The difference is that there’s an RL model directly influencing the prevalence of such behaviour. I expect most unproductive forms of politicking to be removed eventually.
Finally, such concerns are very relevant to AI safety. A group of humans coordinated via BCI with unaligned AI is not much more aligned than the standard paper-clipper AI. If such systems arise before superhuman pure AI, then I expect them to represent a large part of AI risk. I’m working on a draft timeline where this is the case.