Doesn't this depend on the likelihood/prevalence of intelligent life? For all we know, we might be the only sentient species out there.
For the record, that's Michael Anissimov's position (see in particular the last paragraph).
Also, even if there are a lot of unfriendly AIs out there, a friendly one would still vastly improve our fate, whether by fighting off the unfriendly ones, reaching a mutually beneficial agreement with them, or running rescue simulations.
So far as I understand "rescue simulations" in this context, I'd classify them as a particular detail of "a short happy life".
The greater age of the alien AIs might give them a massive advantage, but that depends on whether FOOM ever runs into diminishing returns. If it does, then the difference between, say, a 500,000-year-old AI and a 2-million-year-old AI may not amount to much.
I wouldn't expect there to ever be diminishing returns from acquiring more matter and energy.
Why do we imagine our actions could have consequences for more than a few million years into the future?
Unless what we believe about evolution is wrong, or UFAI is unlikely, or we are very, very lucky, we should assume there are already a large number of unfriendly AIs in the universe, and probably in our galaxy, and that they will assimilate us within a few million years.
Therefore, arguments like this one, which justify harming people on Earth today in the name of protecting the entire universe for all time from future UFAI, should be rejected. Our default assumption should be that the offspring of Earth will, at best, have a short happy life.
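For a rough sense of that timescale (my own back-of-the-envelope figures, not numbers from the thread): the Milky Way is roughly $10^{5}$ light years across, so a colonization wave expanding at even a modest assumed fraction of lightspeed crosses it in about

$$ t \;\approx\; \frac{10^{5}\ \text{ly}}{0.1\,c} \;=\; 10^{6}\ \text{years}, $$

which is well within "a few million years".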
ADDED: If you observe, as many have, that Earth has not yet been assimilated, you can draw one of these conclusions:
Surely, for a Bayesian, the more reasonable conclusion is number 2! Conclusion 1 (that Earth has simply been astronomically lucky) has priors we can estimate numerically; conclusion 2 (that some of the beliefs behind the prediction are wrong) has priors we know very little about.
To say, "I am so confident in my beliefs about what a superintelligent AI will do, that I consider it more likely that I live on an astronomically lucky planet, than that those beliefs are wrong", is something I might come up with if asked to draw a caricature of irrationality.