A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities, in the hope that this will serve as a Schelling point for them and make their own universe almost certainly a simulation.
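For concreteness, here is the anthropic arithmetic the plan leans on, under a simple self-sampling assumption (the numbers are illustrative, not part of the scenario): if the AI runs $N$ indistinguishable simulations of the builders' situation alongside the single base-level original, then a builder's credence in being at the base level is

$$P(\text{base reality}) = \frac{1}{N+1},$$

which for $N = 10^9$ is about $10^{-9}$.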
Plausible?
Do you accept in principle that, provided MWI is true, one can win a quantum lottery by committing suicide whenever one does not win? If so, is that not a similar violation of causality? If not, why not? What's your model of what would happen?
Under MWI, you can win a lottery just by entering it; committing suicide is not necessary. Of course, almost all of you will lose.
All you're doing in a quantum lottery is deciding that you really, REALLY don't care about the branches where you lose, so much so that you'd rather not experience them at all, and would kill yourself if you found yourself stuck in one.
That's all the causality involved. You haven't gone out and changed the universe in any way (other than almost certainly killing yourself).
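Here's a toy Monte Carlo of that view, with a made-up branch count and win probability (both parameters are purely illustrative). It contrasts the universe's view, in which the branch statistics are untouched, with the survivor's view, which is just conditioning on still existing:

```python
import random

def quantum_lottery(n_branches=1_000_000, p_win=0.001, seed=0):
    """Toy MWI lottery: each branch independently contains a win with
    probability p_win, and the 'quantum suicide' strategy kills the
    player in every losing branch."""
    rng = random.Random(seed)
    wins = sum(rng.random() < p_win for _ in range(n_branches))
    losses = n_branches - wins

    # The universe's view: the branch statistics are exactly what they
    # would have been anyway; suicide changed nothing about the lottery.
    print(f"winning branches: {wins} ({wins / n_branches:.4%})")
    print(f"losing branches:  {losses} ({losses / n_branches:.4%})")

    # The survivor's view: conditioning on still existing, every
    # remaining observer-moment saw a win...
    survivors = wins  # the player is dead in all losing branches
    if survivors:
        print(f"P(win | survived) = {wins / survivors:.0%}")

    # ...but almost all of the original 'you' is dead.
    print(f"fraction of branches where you died: {losses / n_branches:.4%}")

quantum_lottery()
```

The suicide rule never touches `wins` or `losses`; it only determines which branches still contain an observer, which is exactly the sense in which no causality is violated.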