If someone gets cremated, or stays buried long enough for eir brain to fully decompose into dirt, it becomes extremely difficult to revive em. Nothing short of a vastly superhuman intelligence would have a chance. I suspect a superintelligence could manage it, but unless there is some more efficient method, it would require recomputing the Earth's history from the moment the AGI is activated backward to the death of the last person it intends to save. Not only would this demand immense computational resources that could otherwise be used to benefit people who are still alive, it would also require simulating people experiencing pain (in reverse). On the other hand, it saves people's lives. Does anyone have any compelling arguments for why an FAI would or would not recreate me if I die, decompose, and the singularity then occurs long after my death?
Why do I want to know? Well, aside from the question being interesting in its own right, it is an important factor in deciding whether or not cryonics is worthwhile.
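To make that connection concrete, here is a minimal expected-value sketch in Python. Every probability and utility in it is a made-up placeholder (P_FAI, P_REVIVE_PRESERVED, and so on are my inventions, not estimates anyone has defended); the only point is that the marginal value of signing up for cryonics depends directly on the answer to the question above.

```python
# Toy expected-value model: how P(revival despite decomposition) affects
# whether cryonics is worthwhile. All numbers are hypothetical placeholders.

P_FAI = 0.1               # P(a friendly superintelligence ever arrives) - placeholder
P_REVIVE_PRESERVED = 0.5  # P(revival | FAI arrives, brain cryopreserved) - placeholder
VALUE_OF_REVIVAL = 1.0    # utility of being revived, in arbitrary units
COST_OF_CRYONICS = 0.01   # lifetime cost of signing up, in the same units


def expected_value(signed_up: bool, p_revive_decomposed: float) -> float:
    """Expected utility of the choice, given P(revival | FAI, decomposed brain)."""
    p_revive = P_REVIVE_PRESERVED if signed_up else p_revive_decomposed
    return P_FAI * p_revive * VALUE_OF_REVIVAL - (COST_OF_CRYONICS if signed_up else 0.0)


# The marginal value of cryonics shrinks as resurrection-from-dirt gets likelier.
for p in (0.0, 0.25, 0.5):
    gain = expected_value(True, p) - expected_value(False, p)
    print(f"P(revival | decomposed) = {p:.2f} -> marginal value of cryonics = {gain:+.4f}")
```

Under these placeholder numbers, cryonics is worth its cost only while resurrection-from-decomposition remains substantially less likely than revival-from-cryostasis; if an FAI would recreate the decomposed dead anyway, the marginal value goes negative.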
One possible reason why not, besides physical impossibility: are you sure you want to be recreated? I don't mean to invoke any tired cishumanist tropes about death being the natural order of things, or any nonsense like that. But it seems plausible to me that a benevolent superintelligence would have no trouble contriving configurations of matter that at least some of us would prefer to personal survival; specifically, the AI could create people better than us by our own standards.
Presumably you don't want to live on exactly as you are now; you have a desire for self-improvement. But given the impossibility of ontologically fundamental mental entities, there's an identity between "self-improvement" and "being deleted and immediately replaced by someone else who happens to be very similar in these-and-such ways": once you know what experiences are taking place at what points in spacetime, there's no further fact of the matter as to which of them are "you." In our own world this is an unimportant philosophical nicety, but superintelligence has the potential to open up previously inaccessible edge cases, such as extremely desirable outcomes that don't preserve enough personal identity to count as "survival" in the conventional sense.