I doubt human value is particularly fragile. Human value has evolved and morphed over time and will continue to do so. It already takes multiple different forms. It will likely evolve in future in coordination with AGI and other technology. I think it's fairly robust.
Like Ben, I think it is ok (if not ideal) if our descendants' values deviate from ours, as ours have from our ancestors. The risks of attempting a world government anytime soon to prevent this outcome seem worse overall.
We all know the problem with deathism: a strong belief that death is almost impossible to avoid, clashing with the undesirability of the outcome, leads people to rationalize either the illusory nature of death (afterlife memes) or the desirability of death (deathism proper). But of course the claims are separate, and shouldn't influence each other.
Change in the values of future agents, however sudden or gradual, means that the Future (the whole freakin' Future!) won't be optimized according to our values, won't be anywhere near as good as it could've been otherwise. It's easier to see a sudden change as morally relevant, and easier to rationalize gradual development as morally "business as usual", but if we look at the end result, the risks of value drift are the same either way. And it is difficult to ensure that the future is optimized: to stop the uncontrolled "evolution" of value (value drift), or to recover more of the astronomical waste.
Regardless of the difficulty of the challenge, it's NOT OK to lose the Future. The loss might prove impossible to avert, but it's still not OK; the value judgment cares not for the feasibility of its desire. Let's not succumb to the deathist pattern and lose the battle before it's done. Have the courage and rationality to admit that the loss is real, even if it's too great for mere human emotions to express.
A compelling moral argument may change our values, but not our moral frame of reference.
The moral frame of reference is like a forking bush of possible future value systems stemming from a current human morality; it represents human morality's ability to modify itself upon hearing moral arguments.
The notion of moral argument and moral progress is meaningful within my moral frame of reference, but not meaningful relative to a paperclipper's utility function. A paperclipper will never switch to stapler maximization on any moral argument; a consistent paperclipper does not think that it will possibly modify its utility function upon acquiring new information. In contrast, I think that I may modify my morality for the better; it's just that I don't yet know the argument that will compel me, because if I knew it I would have already changed my mind.
It is not impossible that paperclipping is the endpoint of all moral progress, and that there exists a perfectly compelling chain of reasoning that converts all humans to paperclippers. It is "just" vanishingly unlikely. We cannot, of course, observe our moral frame of reference from an outside omniscient vantage point, but we're able to muse about it.
If we do assume omniscience for a second, then there is a space of values that humans would never willingly modify themselves into. Value drift means drifting into such a space, rather than a modification of values in general.
If our ancestors and our descendants are in the same moral frame of reference, then you could possibly convert your ancestors (or most of them) to your morality, and be converted to future morality by future people. Of course it is not easy to say which means of conversion are valid; on the most basic level, I'd say that rearranging your brain's atoms into a paperclipper's breaks out of the frame of reference, while verbal education and argument generally don't.
Rather (in your terminology), value drift is a change in the moral frame of reference, even if (current instrumental) morality stays the same.