Alas, formal methods can't really help with that part. If you have the correct spec, formal methods can help you know with as much certainty as we know how to get that your program implements the spec without failing in undefined ways on weird edge cases. But even experienced, motivated formal methods practitioners sometimes get the spec wrong. I suspect "getting the sign of the reward function right" is part of the spec, where theorem provers don't provide much leverage beyond what a marker and whiteboard (or program and unit tests) give you.
Invoking the kill switch would be costly and painful for the compute provider/AI developer, and I wonder if this would make them slow to pull the trigger. Why not place the kill switch in the regulator's control, along with the expectation that companies could sue the regulator for damages if the kill switch was invoked needlessly?
Edit: Actually I think this is what is meant by "Hardware-Enabled Governance Mechanisms (HEM)", and I think the suggestion that the compute provider or AI developer shut down the model is a stop-gap until HEM is widely deployed.