Is it possible to prevent AGI?
Up until the point where independent Artificial General Intelligence exists, it is at least theoretically possible for humanity to prevent it from happening, but that raises two questions: Should we prevent AGI? How can AGI be prevented? Similar questions can be asked about Artificial Super Intelligence: Should we prevent ASI? How can ASI be prevented? I think the "Should" questions are interesting, [1] but the rest of this post focuses on the "How can" questions.

PCs can pass Turing Tests

If you have not tried running a large language model on an ordinary personal computer in the past three months, I recommend doing so [2] (a minimal sketch of such an experiment appears at the end of this section). Almost any computer from the past decade with at least 4 GiB of RAM can run an LLM that passes a non-deception [3] Turing test. In most conversations, the local LLM will come across as at least as smart as a fifth grader, and in many cases as smart as or smarter than an average adult. Remember, this is running on your local computer (disconnect the network if you need to prove that to yourself). LLMs are not optimal for many tasks, so this is not the ceiling for what software on an ordinary PC can do.

Too many computers can do AGI

As the experiment above with an LLM on your own computer may have convinced you, ordinary personal computers behave in ways we consider intelligent, such as holding a conversation, playing a game of chess, or doing arithmetic (or at least in ways we used to think required intelligence, until we got computers to do them). There are plenty of estimates suggesting that simulating a human brain is not possible on an ordinary personal computer, but that doesn't really tell us what is required for human-equivalent intelligence. As Steve Byrnes pointed out [4], achieving AGI by simulating a human brain’s neurons is like multiplying two numbers by “do[ing] a transistor-by-transistor simulation of a pocket calculator microcontroller chip, which in turn is multiplying the numbers.” (The second sketch at the end of this section makes that overhead concrete.) Byrnes estimated that AGI
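Here is the kind of local experiment I had in mind above, as a minimal sketch. It assumes the third-party llama-cpp-python package and a small quantized GGUF model file downloaded ahead of time; the model path is a placeholder, and any comparable local-inference stack would do just as well.

```python
# Minimal local-LLM sketch. Assumes: pip install llama-cpp-python,
# plus a small quantized GGUF model saved as ./model.gguf -- both are
# illustrative choices, not the only way to run this experiment.
from llama_cpp import Llama

# A 1B-3B parameter model at 4-bit quantization fits comfortably in a
# few GiB of RAM, even on a decade-old machine with no GPU.
llm = Llama(model_path="./model.gguf", n_ctx=2048, verbose=False)

# Hold a short conversation entirely offline -- no network involved.
reply = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Explain in two sentences why the sky is blue."}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```

The exact stack matters less than the observation itself: a consumer CPU, with no GPU and no network connection, holding up its end of a conversation.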
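To make the calculator analogy concrete, here is a toy sketch (my own illustration, not Byrnes's): multiplying two small numbers directly is a single machine operation, while "simulating the hardware" that multiplies them, gate by gate, spends hundreds of simulated NAND evaluations to reach the same answer. Simulating a brain neuron by neuron to get intelligence plausibly carries the same kind of overhead relative to whatever computation intelligence actually requires.

```python
# Toy illustration of simulation overhead (my example, not Byrnes's):
# compute x * y by simulating multiplier hardware gate by gate, with
# NAND as the only primitive.

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

# Build the other gates out of NAND, as hardware effectively does.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor_(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, carry):
    """One-bit addition: returns (sum bit, carry out)."""
    s1 = xor_(a, b)
    return xor_(s1, carry), or_(and_(a, b), and_(s1, carry))

def gate_level_multiply(x: int, y: int, bits: int = 8) -> int:
    """Shift-and-add multiplication built entirely from simulated gates."""
    xb = [(x >> i) & 1 for i in range(bits)]
    acc = [0] * (2 * bits)          # accumulator for the product bits
    for j in range(bits):
        if (y >> j) & 1:            # add (x << j) into the accumulator
            carry = 0
            for i in range(bits):
                acc[i + j], carry = full_adder(acc[i + j], xb[i], carry)
            k = j + bits
            while carry:            # propagate any remaining carry
                acc[k], carry = full_adder(acc[k], 0, carry)
                k += 1
    return sum(bit << i for i, bit in enumerate(acc))

assert gate_level_multiply(37, 5) == 37 * 5   # same answer, vastly more work
```

The point of the toy is the ratio: the direct `37 * 5` is one operation, the gate-level route spends hundreds, and a neuron-by-neuron brain simulation would sit at the far end of the same spectrum.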