Up until the point where an independent Artificial General Intelligence exists, it is at least theoretically possible for humanity to prevent its creation, but that raises two questions: Should we prevent AGI? How can AGI be prevented? Similar questions can be asked for Artificial Super Intelligence: Should we prevent ASI?...
One way to prevent AGI from taking over or destroying humanity is to strictly limit the computing power used on unknown AI algorithms. My back-of-the-envelope calculations[1] show that restricting the hardware to 64 KiB of total storage is definitely sufficient to prevent an independence-gaining AGI, and...
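To give a feel for how tight a 64 KiB budget is, here is a sketch of the kind of back-of-the-envelope arithmetic involved. The parameter-packing choices (32-bit, 8-bit, 1-bit) and the ~137-billion-parameter figure reported for LaMDA are illustrative assumptions, not part of the original calculation:

```python
# Rough arithmetic: how many model parameters fit in 64 KiB of total storage?
KIB = 1024
budget_bytes = 64 * KIB  # 65,536 bytes

for bits in (32, 8, 1):
    params = budget_bytes * 8 // bits
    print(f"{bits}-bit parameters that fit in 64 KiB: {params:,}")

# Compare with a modern large language model (illustrative figure:
# LaMDA is widely reported to have ~137 billion parameters).
lamda_params = 137_000_000_000
ratio = lamda_params / budget_bytes  # vs. 8-bit (1 byte/parameter) packing
print(f"LaMDA is roughly {ratio:,.0f}x larger than an 8-bit model filling 64 KiB")
```

Even with aggressive 1-bit packing, the budget tops out at about half a million parameters, many orders of magnitude below any current large language model.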
In 1950, Enrico Fermi famously asked, "Where are the aliens?" Now, in 2024, it is becoming more of a question of "Where are the AGIs?" Essentially, we have the computing power (the 2011 Watson computer could certainly have trained LaMDA and other large language models; the world's most powerful supercomputer...
Summary: an individual Commodore 64 is almost certainly safe, and the top-10 supercomputers could almost certainly run a super-powerful AGI; but where is the safe line, and how would we get to the safe side? I started thinking about this topic when I realized that we can safely use uranium...
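One way to make the "safe line" question concrete is to look at the sheer span between the two endpoints. The figures below are rough public estimates used as assumptions for illustration (the Commodore 64's 64 KiB of RAM, and roughly 9 PB of total memory reported for the Frontier supercomputer):

```python
import math

# Rough, publicly reported figures (assumptions for illustration):
c64_ram_bytes = 64 * 1024        # Commodore 64: 64 KiB of RAM
frontier_ram_bytes = 9.2e15      # Frontier supercomputer: ~9.2 PB total memory

# The two endpoints differ by roughly eleven orders of magnitude in memory,
# so a "safe line" could in principle sit anywhere in that enormous range.
span = frontier_ram_bytes / c64_ram_bytes
print(f"Memory span between endpoints: ~10^{math.log10(span):.0f}x")
```

The point of the sketch is only that the gap is vast: a safe threshold drawn anywhere inside eleven orders of magnitude leaves enormous room for being wrong in either direction.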