A very comprehensive analysis by Brian Tomasik of whether (and to what extent) the simulation argument should change our altruistic priorities. He concludes that the possibility of ancestor simulations somewhat increases the comparative importance of short-term helping relative to focusing on shaping the "far future".
Another important takeaway:
[...] rather than answering the question “Do I live in a simulation or not?,” a perhaps better way to think about it (in line with Stuart Armstrong's anthropic decision theory) is “Given that I’m deciding for all subjectively indistinguishable copies of myself, what fraction of my copies lives in a simulation and how many total copies are there?”
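To make the "deciding for all copies" framing more concrete, here is a minimal Python sketch. All numbers are made up for illustration (the total number of copies, the fraction in simulations, and the payoffs are hypothetical, not taken from the essay); the point is only that the relevant quantity is a decision's payoff summed over every copy that makes it, weighted by where those copies live:

```python
# Minimal sketch of the "deciding for all copies" framing, with made-up numbers.
# N_COPIES and SIM_FRACTION are hypothetical assumptions, not estimates.
N_COPIES = 1_000        # total subjectively indistinguishable copies
SIM_FRACTION = 0.9      # fraction of those copies living in simulations

def total_value(payoff_in_sim: float, payoff_in_base: float) -> float:
    """Value of one decision, summed over every copy that makes it."""
    n_sim = N_COPIES * SIM_FRACTION
    n_base = N_COPIES * (1 - SIM_FRACTION)
    return n_sim * payoff_in_sim + n_base * payoff_in_base

# Hypothetical payoffs: short-term helping pays off wherever the copy lives,
# while far-future influence only pays off for copies in basement reality
# (a simulation may be shut down long before the far future arrives).
short_term = total_value(payoff_in_sim=1.0, payoff_in_base=1.0)
far_future = total_value(payoff_in_sim=0.0, payoff_in_base=20.0)

print(f"Short-term helping, summed over copies: {short_term:.0f}")
print(f"Far-future focus,   summed over copies: {far_future:.0f}")
```

With these made-up payoffs the far-future option still wins at a 90% simulation fraction, but the higher the fraction of copies in simulations, the more the far-future payoff gets discounted; that is the sense in which the possibility of simulations shifts some weight toward short-term helping.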
That only applies if the simulation gives the appearance of quantum mechanics by actually implementing quantum mechanics, AND our brains are implemented on that same actual quantum mechanics.
Even in that case, if the simulation can be suspended and the brains measured with a noise floor well below thermal noise, then the copies can be faithful enough that no experiment run from inside the simulation could ever detect that a copy event occurred, and arbitrarily many copies can be made without further degradation.
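For a rough sense of the scales involved in that "noise floor well below thermal noise" condition, here is an illustrative back-of-the-envelope sketch. The formula is the standard Johnson-Nyquist expression for thermal voltage noise, V_rms = sqrt(4 k_B T R Δf); the membrane resistance, bandwidth, and scanner noise floor are hypothetical round numbers chosen only to show the comparison:

```python
import math

# Illustrative comparison: thermal (Johnson-Nyquist) voltage noise in a
# neuron-like element at body temperature vs. a hypothetical scanner's
# measurement noise floor. R, BANDWIDTH, and the noise floor are assumptions.
K_B = 1.380649e-23      # Boltzmann constant, J/K
T = 310.0               # body temperature, K
R = 1e8                 # hypothetical membrane resistance, ohms (~100 MOhm)
BANDWIDTH = 1e3         # hypothetical bandwidth of interest, Hz (~1 kHz)

thermal_noise_v = math.sqrt(4 * K_B * T * R * BANDWIDTH)   # V_rms
measurement_noise_v = 1e-6   # hypothetical scanner noise floor, 1 uV rms

print(f"Thermal noise:     {thermal_noise_v * 1e6:.1f} uV rms")
print(f"Measurement noise: {measurement_noise_v * 1e6:.1f} uV rms")
print("Measurement error hidden below thermal noise:",
      measurement_noise_v < thermal_noise_v)
```

With these numbers the thermal noise comes out around 40 µV rms, so a 1 µV measurement error would be buried in fluctuations the simulated brain is already subject to, which is the sense in which no experiment from inside could detect the copy event.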