Suppose you make a super-intelligent AI and run it on a computer. The computer has NO conventional means of output (no connections to other computers, no screen, etc.). Might it still be able to get out or cause harm? I'll post my ideas, and you post yours in the comments.
(This may have been discussed before, but I could not find a dedicated topic)
My ideas:
- manipulate the current through its hardware, or better yet through the power cable (a ready-made antenna), to generate electromagnetic waves and reach some wireless-equipped device. (I'm no physicist, so I don't know which frequencies would be hard to produce.)
- manipulate usage of its hardware (which naturally makes small amounts of noise) to approximate human speech, letting it communicate with its captors. (This seems even harder than the one-line AI-box scenario.)
- manipulate usage of its hardware to create sound or noise that messes with human emotion. (As I understand it, tones can affect emotion, but not in any easily predictable way.)
- also, manipulating its power use will show up in the power company's database. There's no obvious exploit there, but it IS external communication, for what it's worth. (The sketch after this list shows the modulation trick the first, second, and fourth ideas share.)
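The common thread in those ideas is modulating some physical side effect of computation. Here's a toy Python sketch of the simplest version: on-off keying by alternating CPU load. Whether the signal actually leaks as EM emission, coil/fan noise, or a wiggle in power draw depends entirely on hardware I'm making no claims about, and the bit period and framing here are invented purely for illustration.

```python
# Toy sketch: on-off keying a covert channel by modulating CPU load.
# The physical medium (EM, acoustic, power draw) is whatever the
# hardware happens to leak; this only shows the modulation logic.
import time

BIT_PERIOD = 0.5  # seconds per bit; a real channel would tune this to the medium

def transmit_bits(bits):
    for bit in bits:
        deadline = time.monotonic() + BIT_PERIOD
        if bit:
            # '1': busy-loop to maximize load (and whatever it radiates)
            while time.monotonic() < deadline:
                pass
        else:
            # '0': idle to minimize it
            time.sleep(BIT_PERIOD)

# Encode "hi" as a bit string and key the CPU with it
message = "hi"
bits = [int(b) for ch in message for b in format(ord(ch), "08b")]
transmit_bits(bits)
```

Half a second per bit is glacial, but side channels like this are usually low-bandwidth anyway; the receiver, whatever it is, sets the practical bit rate.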
Let's hear your thoughts! Lastly, as in similar discussions, you probably shouldn't come out of this thinking, "Well, if we can just avoid X, Y, and Z, we're golden!" There are plenty of unknown unknowns here.
I do. It implies that it is actually feasible to construct a text-only channel, which, speaking as a programmer, I can tell you is not the case.
If you build your AI on an existing OS running on commercial hardware, there are going to be countless communication mechanisms and security bugs present for it to take advantage of, and the attack surface of the OS is far too large to secure against even human hackers. The fact that you'll need multiple machines to run it with current hardware amplifies the problem geometrically, and makes the idea that a real project could achieve complete isolation hopelessly naive. In reality you'll discover that there was an undocumented Bluetooth chip on one of the motherboards, or the wireless mouse adapter uses a dual-purpose chip that supports WiFi, or one of the power supplies supports HomePNA and there was another device on the grid, or something else along those lines.
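To make that concrete, here's a hedged sketch of the kind of audit you'd want before trusting "isolated" commercial hardware: grep the PCI and USB device lists for radio-ish keywords. It assumes a Linux box with lspci and lsusb installed, and the keyword list is illustrative, not exhaustive.

```python
# Sketch: scan visible PCI/USB devices for anything that looks like a radio.
# Assumes Linux with lspci/lsusb available; keywords are a rough heuristic.
import subprocess

RADIO_KEYWORDS = ("bluetooth", "wireless", "wifi", "802.11", "wlan", "radio")

def suspicious_devices():
    findings = []
    for cmd in (["lspci"], ["lsusb"]):
        try:
            out = subprocess.run(cmd, capture_output=True, text=True,
                                 check=True).stdout
        except (FileNotFoundError, subprocess.CalledProcessError):
            continue  # tool missing or failed; skip rather than crash
        for line in out.splitlines():
            if any(k in line.lower() for k in RADIO_KEYWORDS):
                findings.append(line)
    return findings

for device in suspicious_devices():
    print("possible radio:", device)
```

And note the limitation, which is exactly my point: this only finds devices the OS already knows about. The undocumented Bluetooth chip is precisely the thing a software audit won't show you.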
The alternative is building your own (very feature-limited) hardware to run your own (AI-support-only) OS. In theory you might be able to make such a system secure, but in reality no one is ever going to give you the hundreds of millions of dollars it would cost to build the thing. Not to mention that a project taking this approach would have to spend years duplicating hardware and software work that has already been done a hundred times before, putting it far behind any less cautious competitors...
Maybe I'm missing something obvious, but why wouldn't physical isolation (a lead-lined bank vault, a Faraday cage, etc.) solve these problems?