The number of particles in the universe may prevent an AI from computing BB(6), but it constrains humans just as much.
Whether we can shut down the AI by telling it "hey, you can't calculate BB(6), why don't you kill yourself?" probably depends on the specific architecture, but it seems to me that the AI probably just won't care.
I didn't mean it to be so simplistic. I am just considering that if there is a known limitation that applies to any AI, no matter how powerful, it could be used as the basis of a control system the AI could not circumvent. For example, a shutdown mechanism that could only be disabled by solving the halting problem.
If you knew how to build such a shutdown system, you could probably also build one that cannot be disabled at all (e.g., one that would require solving a literally impossible problem, like proving that 1 = 0).
Some problems of mathematics, like the halting problem and the busy beaver function, are uncomputable: it is mathematically proven that no algorithm running on a Turing-complete computer can solve them in the general case, no matter how sophisticated its hardware or software. Algorithms can still handle special cases of these problems, but the general case is provably out of reach. For busy beavers, a program could in principle enumerate every n-state machine and simulate each one, but to know when to give up on the machines that never halt, it would already need a bound like BB(n) — and BB(6) is believed to be far larger than the number of particles in the observable universe, so even writing the answer down is physically impossible.
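To make the enumeration idea concrete, here is a minimal sketch (in Python, assuming the standard 2-state, 2-symbol model where the halting transition also writes and moves) that brute-forces all 12^4 = 20,736 such machines and reports the longest halting run and the most 1s written. It only works because the true values S(2) = 6 and Σ(2) = 4 are tiny, so a fixed step limit of 100 safely covers every halter; for larger n, choosing a safe limit is exactly the uncomputable part.

```python
from itertools import product

HALT = 2
A, B = 0, 1

def run(tm, limit):
    """Simulate a 2-state, 2-symbol Turing machine on a blank tape.
    Returns (steps, ones) if it halts within `limit` steps, else None."""
    tape = {}                      # position -> symbol, blank cells default to 0
    pos, state, steps = 0, A, 0
    while state != HALT:
        if steps >= limit:
            return None            # assume it never halts (unsafe in general!)
        write, move, state = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

best_steps = best_ones = 0
# Each (state, read-symbol) pair gets a rule: (write, move, next-state).
rules = list(product([0, 1], [-1, 1], [A, B, HALT]))   # 12 choices per pair
for r00, r01, r10, r11 in product(rules, repeat=4):    # 12^4 = 20736 machines
    tm = {(A, 0): r00, (A, 1): r01, (B, 0): r10, (B, 1): r11}
    result = run(tm, limit=100)
    if result is not None:
        steps, ones = result
        best_steps = max(best_steps, steps)
        best_ones = max(best_ones, ones)

print(best_steps, best_ones)   # 6 4 — i.e. S(2) = 6 steps, Σ(2) = 4 ones
```

Note the comment marked "unsafe in general": timing out a machine and declaring it a non-halter is only valid here because S(2) is known. That circularity — you need the busy beaver value to compute the busy beaver value — is the uncomputability showing up in practice.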
Wikipedia has a full list of Undecidable problems.
To what extent does this suggest that an AGI will be limited in its capabilities, even in the most optimistic scenario of AI "takeoff"? Could this present an exploitable weakness that could be used to keep an uncooperative AI contained, or to shut it down?