(Gradual) value drift -> Future not optimized -> Future worse than if it were optimized -> Bad
value drift -> Future not optimized to original values -> Future more aligned with new values -> Bad from original viewpoint, better from new viewpoint, not optimal from either viewpoint, but what can you do?
Use of the word "optimized" without specifying the value system against which the optimization took place -> variant of mind projection fallacy.
ETA: There is a very real sense in which it is axiomatic both that our value system is superior to the value system of our ancestors and that our values are superior to those of our descendants. This is not at all paradoxical - our values are better simply because they are ours, and therefore of course we see them as superior to anyone else's values.
Where the paradox arises is in jumping from this understanding to the mistaken belief that we ought never to change our values.
A compelling moral argument may change our values, but not our moral frame of reference.
The moral frame of reference is like a forking bush of possible future value systems stemming from a current human morality; it represents human morality's ability to modify itself upon hearing moral arguments.
The notion of mora...
We all know the problem with deathism: a strong belief that death is almost impossible to avoid, clashing with the undesirability of the outcome, leads people to rationalize either the illusory nature of death (afterlife memes) or the desirability of death (deathism proper). But of course the claims are separate, and shouldn't influence each other.
A change in the values of future agents, however sudden or gradual, means that the Future (the whole freakin' Future!) won't be optimized according to our values, and won't be anywhere near as good as it could've been otherwise. It's easier to see a sudden change as morally relevant, and easier to rationalize gradual development as morally "business as usual", but if we look at the end result, the risks of value drift are the same. And it is difficult to make it so that the future is optimized: to stop the uncontrolled "evolution" of values (value drift), or to recover more of the astronomical waste.
Regardless of the difficulty of the challenge, it's NOT OK to lose the Future. The loss might prove impossible to avert, but it's still not OK; the value judgment cares not for the feasibility of its desire. Let's not succumb to the deathist pattern and lose the battle before it's done. Have the courage and rationality to admit that the loss is real, even if it's too great for mere human emotions to express.