I agree, it seems a more general way of putting it.
Anyway, now that you mention it, I'm intrigued and slightly freaked out by a scenario in which my frame of reference changes without my current values changing. First, is it even knowable when it happens? All our reasoning is based on current values. If an alien race came and modified us in a way that changed our future moral progress but not our current values, we could never know the change had happened at all. It is a type of value loss that preserves reflective consistency. I mean, we wouldn't agree to be changed into paperclippers, but on what basis could we refuse an unspecified change to our moral frame of reference (leaving current values intact)?
I'm not sure I understand this talk of "moral frames of reference" vs simply "values".
But might theory change be an analogy for frame change? As when we replace Newton's theory of gravity with Einstein's, leaving the vast majority of theoretical predictions intact?
In this analogy, we might make the change (in theory or moral frame) because we encounter new information (new astronomical or moral facts) that impel the change. Or, we might change for the same reason we might change from the Copenhagen interpretation to MWI - it seems to work just as well, but has greater elegance.
Ben Goertzel:
Robin Hanson:
We all know the problem with deathism: a strong belief that death is almost impossible to avoid, clashing with the undesirability of the outcome, leads people to rationalize either the illusory nature of death (afterlife memes) or the desirability of death (deathism proper). But of course the claims are separate, and shouldn't influence each other.
Change in the values of future agents, however sudden or gradual, means that the Future (the whole freakin' Future!) won't be optimized according to our values, won't be anywhere near as good as it could've been otherwise. It's easier to see a sudden change as morally relevant, and easier to rationalize gradual development as morally "business as usual", but if we look at the end result, the risks of value drift are the same. And it is difficult to make it so that the future is optimized: to stop the uncontrolled "evolution" of values (value drift) or to recover more of the astronomical waste.
Regardless of the difficulty of the challenge, it's NOT OK to lose the Future. The loss might prove impossible to avert, but still it's not OK; the value judgment cares not for the feasibility of its desire. Let's not succumb to the deathist pattern and lose the battle before it's over. Have the courage and rationality to admit that the loss is real, even if it's too great for mere human emotions to express.