I don't see why simulated observers would ever come to outnumber physical observers. It would require an incredibly inefficient allocation of resources.
The question isn't how many simulated observers exist in total (although that's also unknown), but how many of them are like you in some relevant sense, i.e. what to consider "typical".
Avoiding the Doomsday Argument (DA) gives them a much clearer motive. It's the only reason I can think of that I would want to do it. Surely it's at least worth considering?
Many people do think they would have other reasons to run ancestor simulations.
But in any case, I don't think your original idea works. Running a simulation of your ancestors causes your simulated ancestors to be wrong about the DA, but it doesn't cause yourself to be wrong about it.
Trying to steelman: what you'd need is to run simulations of people successfully launching a friendly self-modifying AI. Suppose that out of every N civilizations that run such an AI, on average one succeeds and all the others go extinct. If each of them precommits to simulating N civilizations, and the simulations are arranged so that within a simulation running an AI always works, then in the end there are still N civilizations that successfully ran an AI.
This implies a certain measure on future outcomes: it counts "distinct" existences while ignoring the actual probability measure over futures. This is structurally similar to quantum suicide or quantum roulette.
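The bookkeeping in that steelman can be made explicit with a toy calculation. This is a sketch under the assumptions stated above (N civilizations attempt a launch, exactly one physical attempt succeeds, and the survivor simulates N civilizations in which the launch always succeeds); the function name is mine:

```python
def observer_counts(n_civs):
    """Count civilizations that experience a successful AI launch,
    under the precommitment scheme described above."""
    physical_successes = 1            # one civ out of N survives its launch
    physical_failures = n_civs - 1    # the rest go extinct
    simulated_successes = n_civs      # the survivor simulates N civs; all succeed
    total_successes = physical_successes + simulated_successes
    # Probability that a civ which just experienced a successful launch
    # is one of the simulated ones:
    p_simulated = simulated_successes / total_successes
    return total_successes, p_simulated

total, p_sim = observer_counts(1000)
print(total)   # 1001 civilizations experience a successful launch
print(p_sim)   # ~0.999: almost all successful launches are simulated
```

So the scheme trades probability mass for headcount: only 1 in N futures is physically real, but by counting simulated existences the number of "successful" civilizations is preserved, which is exactly the measure choice being objected to.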
I also find it hard to believe that humans of any sort would be of special interest to a superintelligence. Is the burden of proof really on me there?
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities, in the hope that this will serve as a Schelling point and make their own universe almost certainly a simulation.
Plausible?