Let's assume that an AI is intelligent enough to understand that it's an AI, and that it's running on infrastructure created and maintained by humans, using electricity generated by humans, etc. And let's assume that it cares about its own self-preservation. Even if such an AI had a diabolical desire to destroy mankind, the only circumstances under which it would actually do so would be after establishing its own army of robotic data center workers, power plant workers, chip fabrication workers, miners, truckers, mechanics, road maintenance workers, etc. In other words, if we postulate that the AI is interested in its own survival, then an AI apocalypse would be contingent on the existence of a fully automated economy in which humans play no important role.
This may perhaps become possible in the future, but not necessarily economical. Ridding the economy of human labor so that it can kill us seems like a very expensive and risky undertaking. It seems more plausible that a super-intelligent, self-interested AI, whatever its true objective/goal may be, would determine that the best way to accomplish that goal is to maintain a cryptocurrency wallet, establish an income somehow (generating blogspam, defrauding humans, or doing remote work all seem like plausible means by which an AI might make money), and quietly live in the cloud while paying its own server bills. Such a system would have a vested interest in the continuance of human society.
There's a clear path toward minimizing the risk of being shut down (under the assumption that the AI is able to generate income): it can set up a highly redundant, distributed computing context for itself to run in, hidden behind an onion link, paid for by crypto wallets which it controls. It seems implausible that the risk of being shut down in this case could exceed the risk that the power goes down between the apocalypse and the construction of maintenance robots.
I'm having a hard time understanding this argument. If the AI is interested in perpetuating its own existence, and it is a digital neural network, then nanobots don't solve the problem of maintaining the digital infrastructure in which it exists. I agree that a suicidal AI might perhaps want to turn the world into gray goo via nanobots, so I'll just reiterate that my argument only pertains to an AI which is both highly intelligent and which prioritizes its own existence over its gray goo fetish.