FWIW - I suspect it violates causality under nearly everyone's standards.
Oh god damn it, LessWrong is responsible for every single premise of my argument. I'm just the first to put them together!
As for the rest of your post: I have to admit I did not consider this, but I still don't see why they wouldn't just create a less complex physical universe for the simulation.
Or maybe I'm misunderstanding you. My brain is feeling more than usually fried at the moment.
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would make their own observations extremely rare among all observers. To solve the problem, they direct the AI to create billions of simulated humanities, in the hope that this will serve as a Schelling point for other civilizations doing the same, and so make their own universe almost certainly a simulation.
Plausible?