In other words, he predicted this to occur around 2023. This year, at the time this was originally published.
Nitpick: He said "during the next 30 years", not "in about 30 years." I'd guess his credence in AGI by 2023 would have been something like 75%; if it had been lower he would have said 40 years, and if higher, 20. So, still an impressive prediction (he's assigning something like 8% credence to it happening in the 2023-2033 period, whereas most experts at the time would probably have assigned less).
Thank you for the clarification! That does seem plausible - it's particularly interesting to read the perspective of someone involved in both science fiction and academia (imaginative foresight with scientific grounding).
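As an aside, the arithmetic in the nitpick above can be made explicit. Treating the commenter's guessed credences as a cumulative distribution over the arrival year T of AGI (both figures are the commenter's guesses, not numbers Vinge stated), they jointly imply roughly an 83% credence by 2033:

```latex
% Commenter's guesses, read as a CDF over the AGI arrival year T:
%   P(T <= 2023) ~ 0.75   (Vinge's "during the next thirty years")
%   P(2023 < T <= 2033) ~ 0.08
% Disjoint intervals add, so:
P(T \le 2033) = P(T \le 2023) + P(2023 < T \le 2033) \approx 0.75 + 0.08 = 0.83
```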
Crossposted from my Substack: Part One and Part Two
Taken on its own, nothing in this paragraph is particularly shocking. With the rise of discussion and debate about the increasing capability of AI, the possibility of AGI (Artificial General Intelligence) and existential risk, it reads like one of many opening paragraphs we'd see on social media.
Plot twist - this essay is from 1993.
Vernor Vinge, who is not only a prolific science fiction writer but also a retired professor of computer science and mathematics, seems a perfect candidate to write this speculative essay: a future thinker grounded in the practicalities and reasoning that come with a STEM background. I had another look through this essay - I did a stream on Twitch discussing it and am planning to make a YouTube video on the topic in the future - and there are many points that still resonate today.

This also needs to be made abundantly clear: when Vinge states that we are on the "edge of change", he clarifies this by declaring, "I believe that the creation of greater than human intelligence will occur during the next thirty years". The abstract to the paper is similarly shocking: "within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." I'm taking the end of the "human era" to mean the point at which humans are no longer actively shaping the world with the power and control we wield now - in other words, the end of the Anthropocene.

However, the most striking detail here is when it was written. 1993. In other words, he predicted this to occur around 2023. This year, at the time this was originally published.
Your mileage, of course, may vary on this. With conspiracy theories that GPT-5 has already been achieved, claims that even GPT-4 shows sparks of sentience, and the "meme" posted by Sam Altman himself (CEO of OpenAI) that AGI had been achieved internally, it seems to be a matter of opinion whether the sparks of possibility are present, whether it is actually here already but lurking in the shadows, or whether it is simply a pipe dream. However, Vinge goes on to say, "I'll be surprised if this event occurs before 2005 or after 2030." From this vintage vantage point of the early 90s, that is a scarily prescient statement.
Three Possibilities of Progress
So how does he describe this event? He lists three ways in which it could happen:
This resonates with recent public discourse. Lately, I feel like arguments tend to skew towards the first point - that developed computers will "awaken" with a spark of sentience and/or superhuman intelligence - because of the rise of LLMs with increased compute and capabilities. Neuralink's recent controversies with its animal test subjects could be one of the reasons why BCIs (Brain-Computer Interfaces) have been discussed less as a viable option - but the plan of assimilating with AI in order to guarantee a foot in the existential door is still a compelling idea, I think. Human trials are apparently underway, so we may hear a lot more about this in the near future. At the moment, LLMs and the concept of AI agents appear to have taken the spotlight.
Gotta Go Fast
He also states: "Another symptom of progress toward the Singularity: ideas themselves should spread even faster, and even the most radical will quickly become commonplace." I think anyone from 1993 would be shocked but not surprised by how social media has shaped our minds: wireheaded with dopamine, instant gratification and extreme reactions. Even though I was only 5 years old in 1992, I can imagine how much longer it took an idea to make a mass impact then, compared with today. This statement backs up that impression: "when I began writing science fiction in the middle 60s, it seemed very easy to find ideas that took decades to percolate into the cultural consciousness; now the lead time seems more like eighteen months." Cut to 2023, where in internet time days are equivalent to earth years in terms of the shifts in collective awareness and consensus (for good or ill).
So, with this in mind, how did Vinge describe the speed of the Singularity back in 1993? He uses the analogy of our evolutionary past: natural selection is the way animals adapt to problems, whereas humans can internalise the world and conduct thought experiments, or "what-ifs", that allow for much faster problem solving. Once simulations of these problems can be executed at far higher speeds still, the distance between us and this new intelligence would be similar to the distance between animals and us. We definitely see this argument today in defence of pausing AI - that our track record with the natural world could be replicated in the relationship between AI and ourselves.
I feel Vinge has our current situation spot on when he posits: "if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the 'threat' and be in deadly fear of it, progress toward the goal would continue [...] in fact, the competitive advantage — economic, military, even artistic — of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first." We see this now between countries and corporations as they work against the clock to produce more powerful LLMs and more potent agents, despite widespread public unease about this rate of acceleration. At the end of the day, no one wants to be left behind. It's not a particularly surprising prediction; although I'm a little loath to use the term, it's the well-known pattern of the arms race that we've come to expect.
[Expanding on his inclusion of artistic advantage, he mentions that automation will replace higher and higher level jobs, and that work that is truly productive will become "the domain of a steadily smaller and more elite fraction of humanity." He uses the example of comic book writers worrying about creating "spectacular events when everything visible can be produced by the technically commonplace" - which seems rather appropriate amidst the debates for and against AI generative art.]
Part Two
The second half of the essay deals with alternatives to the Singularity in case it cannot happen. Vinge reports on a workshop held by Thinking Machines Corporation as early as 1992 that investigated the question "How We Will Build a Machine that Thinks". Perhaps surprisingly, there was general agreement that "minds can exist on nonbiological substrates and that algorithms are of central importance to the existence of minds." This could be read both optimistically and pessimistically: since people have been working on this question since the early 90s, we may be closer than we think; or, given that we are yet to find a solution despite all that time, it may turn out to be a fool's errand. He cites Moravec's 1988 estimate "that we are 10 to 40 years away from hardware parity", and notes that others (such as Roger Penrose and John Searle) were far more skeptical of the whole idea. Because of this, he considers what would follow if the Singularity could not be achieved - if, say, hardware performance curves were to level off in the early 00s.
Current Limitations
As of 2023, there are still hardware limitations, such as heat dissipation and quantum effects, which have been worked around by shifting towards parallel processing and multi-core designs. There has also been a move from generalisation towards specialisation in hardware: GPUs, primarily designed for graphics processing, have been repurposed for AI and machine learning because of their efficiency at parallel processing, and TPUs (Tensor Processing Units) have been developed specifically for neural network computations. Quantum computing is still in its nascent stage, and the focus has broadened - especially towards energy efficiency and sustainability.
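To make the parallelism point concrete, here is a minimal sketch (in Python with NumPy; the sizes are arbitrary) of the operation that dominates neural network workloads. A layer is essentially one large matrix multiply made of millions of independent multiply-adds, which is exactly the shape of work GPUs and TPUs are built to execute in parallel:

```python
# Minimal sketch: why AI workloads favour parallel hardware.
# A neural network layer boils down to a large matrix multiply,
# whose multiply-adds are independent and can run concurrently.
import numpy as np

batch, d_in, d_out = 64, 1024, 1024
x = np.random.randn(batch, d_in).astype(np.float32)  # a batch of activations
w = np.random.randn(d_in, d_out).astype(np.float32)  # one layer's weights

y = x @ w  # ~67 million independent multiply-adds in a single operation
print(y.shape)  # (64, 1024)
```

A CPU works through these a handful at a time across its cores; a GPU or TPU dispatches thousands at once, which is why chips originally aimed at rendering pixels turned out to suit machine learning so well.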
He accurately describes how, if a singularity were possible at all, progress towards it would continue despite any warnings and threats that arose - like the arms race we see today towards gaining more compute and building more capable LLMs [looking back at this, I'd say agents]:
We're definitely seeing this play out, but we're also seeing cooperation and a willingness to build bridges between companies and countries in order to keep AI safe, aligned and contained. As LLMs grow within major corporations with access to huge amounts of assets, data and compute, I wonder if the world will come to be populated with distinct Minds, like those in Iain M. Banks's Culture series.
Vinge also touches upon the concept of superhuman intelligence, specifically arguing against Eric Drexler's idea that, if such entities were created, confining them physically or with rules would be sufficient. Vinge argues that confinement is "intrinsically impractical": there is little reason to expect an entity that thinks a million times faster than us to stay confined when it could find a way of escaping. (For a fleshed-out example of how this could happen, Max Tegmark's Life 3.0 contains an in-depth case study of a hypothetical superintelligence and the multiple scenarios that could unfold.) Vinge notes that a mind which merely runs faster than ours is only "weak superhumanity"; strong superhumanity would most likely involve more than simply cranking up the clock speed of a human-equivalent brain.
He then references Isaac Asimov's famous Three Laws in the context of aligning robots or AI to protect humans, saying that the "Asimov dream is a wonderful one" - and I would agree, of course, if it could be done properly. The system is simple enough to work within a fictional framework, but in the complex, nuanced world and societies we live in today, there would be many scenarios that escape its safety net.
The Singularity Gone Wild
Vinge's concept of an unrestrained Singularity is a rather harrowing one: even though the physical extinction of the human race is one possibility, it may not be the scariest. In fact, the analogy he uses has been repeated many times in the present day - that a superhuman intelligence could treat us in a similar way to how we have treated animals and nature in general: "again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet...." He then lists possible ways in which human intelligence, or equivalent automation, would still be relevant - "embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients." He also posits the idea of a Society of Mind, similar to a hivemind: a large-scale extrapolation of how neurons fire and contribute to the running of a single human brain. Each component would be as complex as a human mind, but would run a specific task for the larger entity.
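As a toy sketch of that last idea (the component names and division of labour below are invented for illustration, not taken from Vinge), imagine specialised sub-minds, each responsible for one narrow task, composed into a single larger entity:

```python
# Toy illustration of the Society of Mind idea: specialised components,
# each handling one narrow task, combined into a larger "mind".
# All component names and tasks here are hypothetical.

def vision(scene: str) -> str:
    return f"recognised: {scene}"

def language(text: str) -> str:
    return f"parsed: {text}"

def planner(goal: str) -> str:
    return f"planned steps toward: {goal}"

COMPONENTS = {"vision": vision, "language": language, "planner": planner}

def society_of_mind(inputs: dict) -> dict:
    # Each sub-mind works only on its own slice of the problem;
    # the behaviour of the whole emerges from their combination.
    return {name: fn(inputs[name]) for name, fn in COMPONENTS.items()}

print(society_of_mind({
    "vision": "a chessboard",
    "language": "play e4",
    "planner": "winning the game",
}))
```

In Vinge's version, of course, each component would itself be as complex as an entire human mind; here each is a one-line stub, but the shape of the composition is the point.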
He mentions other paths to the Singularity, explaining that even if we cannot prevent it, we have the freedom to establish its initial conditions - still an important topic now. He proposes that Intelligence Amplification (IA) would be a more straightforward route to the Singularity, and the logic behind this seems sound: "Building up from within ourselves ought to be easier than figuring out first what we really are and then building machines that are all of that."
The concept of human/computer teams is the centrepiece of this segment, pairing the intuition of humans with the efficiency of machines. Vinge gives the examples of chess and art, the latter of which we can argue is already happening with generative art: the AI generates images in response to human prompts and subsequent feedback, allowing a collaboration of sorts to produce a finished outcome. Some of his proposals have been realised, although many are still works in progress. For example, his proposal to develop interfaces that humans can use from anywhere has been addressed by the plethora of wearables and connected devices, and human/computer networks have been realised through collaboration tools like Skype and Zoom.
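That prompt-and-feedback collaboration can be sketched as a simple loop. This is a hedged illustration rather than any particular product's API: generate_image below is a hypothetical stand-in for whatever text-to-image model sits at the other end.

```python
# Sketch of the human/computer team loop in generative art:
# the machine proposes quickly, the human steers with intuition.

def generate_image(prompt: str) -> str:
    # Hypothetical placeholder: a real system would call a
    # text-to-image model here and return the rendered image.
    return f"<image rendered from: {prompt}>"

prompt = "a city skyline at dusk"
while True:
    print(generate_image(prompt))                    # machine: fast generation
    feedback = input("Refine (empty to accept): ")   # human: taste and judgement
    if not feedback:
        break
    prompt = f"{prompt}, {feedback}"                 # the team converges on intent
```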
He also proposes developments in limb prosthetics, nerve-to-silicon transducers and more advanced neural experiments, the latter including animal embryo experimentation. Although we have made significant progress on limb prosthetics that can be controlled by neural signals and BCIs (Brain-Computer Interfaces), we still have not achieved direct nerve-to-silicon interfaces. As far as the animal embryo experiments go, ethical considerations keep this an only cautiously advancing area.
Conclusion
Vinge sums up the essay with the long-lasting effects of the Singularity, not only if it were to happen, but if we were able to tailor it to our wishes. Immortality may become available, or at least a lifespan that rivals the universe's. How could our minds, still harbouring generations of emotional baggage, handle such an expanded stretch of time? How might this increased bandwidth of networking affect our communication, our self-consciousness, the concept of ego? He finishes with this paragraph:
In other words, the structures we have upheld in society could dissolve with these new stages of the evolution of intelligence. The boundaries of self, identity and world would be profoundly changed. I feel as though this is happening now - maybe not entirely in the way he imagined, but many of the points he makes (especially in terms of alignment and the race to control such superhuman intelligence) still ring true today. Although the following quote is not specifically about AI, this speech in Childhood's End by Arthur C. Clarke always speaks profoundly to me about the scenario we may find ourselves in:
Every block and in-text quotation, apart from the last, is referenced from this version of Vernor Vinge's essay: The Coming Technological Singularity: How to Survive in the Post-Human Era. https://cmm.cenart.gob.mx/delanda/textos/tech_sing.pdf [Accessed 30 Sep 2023]
Arthur C. Clarke, Childhood's End (London: Tor, 1990), pp. 216-217.