Hacker-AI and Digital Ghosts – Pre-AGI
Software vulnerabilities are mainly found by chance, and hacking is labor-intensive. This invites the use of AI. It is assumed to be only a matter of time until attackers create tools with advanced "Hacker-AI" to accelerate the detection and understanding of hardware/software vulnerabilities in complex or unfamiliar technical systems. This post suggests that Hacker-AI will become a dangerous, consequential cyberwar weapon - capable of decapitating governments and establishing persistent global supremacy for its operator. Two features amplify the motivation for creating and deploying this Hacker-AI: it could spread stealthily, as an undetectable digital ghost, and it could be irremovable - i.e., the first Hacker-AI could be the only one.

Here we focus solely on significant problems related to attacks or threats. We define these threats as actual or potential harm or damage to humans and/or their property. Threats and attacks are intentional and significant for their victims, immediately or in the future. We give every significant problem a headline that captures the gist of the underlying issue so we can refer to these problems easily later. We do not assume extraordinary abilities from an AGI; the drivers for this development are technically skilled humans and organizations who seek AI tools to accomplish their goals faster and with less labor.

Problems/Vulnerabilities in Our IT Ecosystem

The following 10 problems, issues, or vulnerabilities are in no particular order. It is assumed that they all contribute to the danger of Hacker-AI; preventing Hacker-AI means confronting these issues technically.

"Software is invisible". Software consists of (compiled) instructions that run directly, or via an intermediary layer, on CPUs. It is stored in files that are invisible in strict practical or operational terms. We can only make it visible indirectly, via assumptions about other software - i.e., that their complex interplay of instructions is reliable.
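This "invisibility" can be illustrated in miniature with Python's standard dis module (a hypothetical example, not from the original post): even a trivial function exists only as opaque bytes, and rendering them readable requires trusting yet another layer of software, the disassembler.

```python
import dis


def greet():
    return "hello"


# The function's executable form is a string of opaque bytes.
# We can hold them, but we cannot "read" them directly.
raw_bytes = greet.__code__.co_code
print(type(raw_bytes), len(raw_bytes))

# Making the bytes human-readable requires another piece of software
# (the disassembler), whose own correctness we must simply trust.
dis.dis(greet)
```

The same dependency chain exists for native binaries: objdump, debuggers, and antivirus scanners are all software interpreting other software, which is exactly the gap a stealthy Hacker-AI could exploit.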