All plausible scenarios of AGI disaster involve the AGI gaining access to resources "outside the box." There are therefore two ways of preventing AGI disaster: ensure that any AGI built is Friendly (the "FAI route"), or prevent a rogue AGI from gaining control of too many external resources (the "network security route"). It seems to me that the network security route--an international initiative to secure networks and computing resources against cyber attacks--is the more realistic way to prevent AGI disaster. Network security guards against intentional human-devised attacks as well as against rogue AGI, so such measures are easier to motivate and more likely to be implemented successfully. Also, developing FAI theory does nothing to prevent the creation of unfriendly AIs. This is not to say that FAI should not be pursued at all, but it can hardly be claimed that the development of FAI is the top priority (as has been stated a few times by users of this site).
Besides what erratio and Vladimir M have said, which I agree with:
Keeping the AI in a box has already been addressed by Eliezer as a bad solution, but your post shows no awareness of that. There is no point in posting to LessWrong on a subject that has already been covered in depth unless you have something new to add.
LessWrong is about rationality, not AGI, and while there are connections between rationality and AGI, you didn't draw any.