All plausible scenarios of AGI disaster involve the AGI gaining access to resources "outside the box." There are therefore two ways of preventing AGI disaster: one is preventing unfriendly AGI from being built (the "FAI route"), and the other is preventing a rogue AGI from gaining control of too many external resources (the "network security route"). It seems to me that the network security route--an international initiative to secure networks and computing resources against cyber attacks--is the more realistic solution. Network security protects against intentional human-devised attacks as well as against a rogue AGI, so such measures are easier to motivate and therefore more likely to be implemented successfully. Moreover, the development of FAI theory does nothing to prevent the creation of unfriendly AIs.

This is not to say that FAI should not be pursued at all, but it can hardly be claimed that developing FAI is the top priority (as has been stated a few times by users of this site).
Yes, well, so is creating a friendly AI.
Now, shut up and do the impossible.