I think the brainpower required to design a simulation of our universe that is complex enough to contain the same problems we want our AI to solve would exceed the brainpower of the AI we're asking to solve them.
It doesn't have to be -our- universe. For instance, Rule 110 is Turing-complete and could therefore act as the containment universe for an AI.
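For concreteness, here's a minimal Python sketch of one Rule 110 update step (the function name, the periodic boundary, and the toy initial state are just illustrative choices, not part of the argument), mainly to show how simple the "physics" of such a containment universe can be:

    # A minimal sketch of Rule 110 on a finite tape with wrap-around boundaries.
    RULE = 110  # update table: bit k of 110 gives the output for the 3-cell
                # neighborhood whose bits spell the number k

    def step(cells):
        """Apply one Rule 110 update to a list of 0/1 cells (periodic boundary)."""
        n = len(cells)
        out = []
        for i in range(n):
            left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
            neighborhood = (left << 2) | (center << 1) | right
            out.append((RULE >> neighborhood) & 1)
        return out

    # Demo: start from a single live cell and watch the characteristic
    # Rule 110 triangles emerge over a few generations.
    cells = [0] * 31 + [1]
    for _ in range(16):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)

Of course, the Turing-completeness result is about what can in principle be computed inside such a universe; actually encoding the problems we care about into Rule 110 configurations is another matter entirely.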
Many people think you can solve the Friendly AI problem just by writing certain failsafe rules into the superintelligent machine's programming, like Asimov's Three Laws of Robotics. I thought the rebuttal to this was in "Basic AI Drives" or one of Yudkowsky's major articles, but after skimming them, I haven't found it. Where are the arguments concerning this suggestion?