If an AGI set out to hack a computer or a human brain, how easily and how quickly could it get the target to do what it wants?

JBlack

I don't really have an answer, just an enormous amount of "it depends on what the scenario is", with a big dollop of "we don't know what we don't know" on top.

We do know that hacking computers is, as a general rule, not very hard. Sometimes people do it by accident. Hacking brains is also observably not very difficult, in the sense that quite a few people can successfully persuade other humans to do things even when those things are detrimental to the humans doing them.

There are of course limits to both, but also the possibility of greater-than-human capabilities for both.

In the limit, we can be pretty sure that a sufficiently superintelligent AI could devise microscopic machines/bacteria/whatever that could make their way into a human brain and alter which neurons fire, among other effects. It seems almost certain that enough of this would enable complete control over the person, including their thoughts and feelings. The same sort of thing could be used to hijack memory buses and so on in computer hardware.

That's a pretty scary scenario, but it does depend upon weasel phrases like "sufficiently superintelligent", and also questions about whether these things could be prevented or at least detected during manufacture and deployment.

There are even more speculative possibilities such as crafted images, sounds, or other sensory inputs that can bypass or take over various brain functions. We don't know, and may never know. It doesn't seem like the sort of thing that could be deduced from current human knowledge or first principles, so if this sort of thing can exist then it would likely require substantial experimentation by an AI before it could be designed.
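Purely as an analogy from machine learning (my addition, not a claim about brains): for artificial neural networks, such crafted inputs demonstrably exist and are called adversarial examples. A minimal sketch with a toy linear classifier, where a small per-coordinate perturbation is enough to flip the model's decision:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=50)   # weights of a toy linear classifier: label = sign(w @ x)
x = rng.normal(size=50)   # an arbitrary input

score = w @ x
# FGSM-style perturbation: push every coordinate a tiny, equal amount in the
# direction that most reduces the score, sized just enough to flip its sign.
eps = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - eps * np.sign(w) * np.sign(score)

assert np.sign(w @ x_adv) == -np.sign(w @ x)   # the decision flipped...
assert np.abs(x_adv - x).max() <= eps * 1.001  # ...from a tiny change per coordinate
```

Whether anything like this transfers to biological perception is exactly the open question the paragraph above raises; the sketch only shows the phenomenon is real for artificial networks.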

For computers, these sorts of hacks are routine. Many software flaws, and sometimes hardware flaws, allow data inputs to execute arbitrary programs on the computer. There are also plenty of ways to stealthily probe whether a given system has any of the flaws you know about. It seems likely that nearly every general-purpose computer in use has thousands of such flaws that we have not yet discovered and fixed, and new software with new flaws ships daily. An AI that can think faster and better about computer code could in principle discover and exploit them far faster than any human could respond.
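As a toy illustration (hypothetical code, not a real exploit) of how a data input can come to execute a program: the classic failure mode is software that treats untrusted data as code, e.g. a naive "config loader" built on Python's `eval`:

```python
import ast

def load_setting(raw, env=None):
    """Parse a configuration value supplied as text.
    BUG: eval() runs arbitrary Python, so 'data' becomes code."""
    return eval(raw, env if env is not None else {})

def load_setting_safe(raw):
    """Safe variant: ast.literal_eval accepts only data literals, never code."""
    return ast.literal_eval(raw)

assert load_setting("2 + 2") == 4        # benign input behaves as intended

sink = []
load_setting("sink.append('pwned')", {"sink": sink})
assert sink == ["pwned"]                 # the same "data" channel ran attacker code

assert load_setting_safe("[1, 2]") == [1, 2]
```

Memory-unsafe parsing (buffer overflows) and unsafe deserialization are the lower-level equivalents of the same mistake, and they are among the flaws the paragraph above describes.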

But again, we can't really know; there are too many possible scenarios. We should, however, expect that an AI that thinks faster and better than us, and that attempts to hack us or our computers, will find ways to do it that we not only haven't thought of, but literally cannot imagine.