I doubt human value is particularly fragile. Human value has evolved and morphed over time and will continue to do so. It already takes multiple different forms. It will likely continue to evolve in coordination with AGI and other technology. I think it's fairly robust.
Like Ben, I think it is ok (if not ideal) if our descendants' values deviate from ours, as ours have from our ancestors. The risks of attempting a world government anytime soon to prevent this outcome seem worse overall.
We all know the problem with deathism: a strong belief that death is almost impossible to avoid, clashing with the undesirability of the outcome, leads people to rationalize either the illusory nature of death (afterlife memes) or the desirability of death (deathism proper). But of course the two claims are separate, and shouldn't influence each other.
Change in the values of future agents, however sudden or gradual, means that the Future (the whole freakin' Future!) won't be optimized according to our values, and won't be anywhere near as good as it could've been otherwise. It's easier to see a sudden change as morally relevant, and easier to rationalize gradual development as morally "business as usual", but if we look at the end result, the risks of value drift are the same. And it is difficult to make it so that the future is optimized: to stop the uncontrolled "evolution" of value (value drift), or to recover more of the astronomical waste.
Regardless of the difficulty of the challenge, it's NOT OK to lose the Future. The loss might prove impossible to avert, but it's still not OK; the value judgment cares not for the feasibility of its desire. Let's not succumb to the deathist pattern and lose the battle before it's over. Have the courage and rationality to admit that the loss is real, even if it's too great for mere human emotions to express.
I still find it shocking and terrifying every time someone compares the morphing of human values with the death of the universe. Even though I saw another FAI-inspired person do it yesterday.
If all intelligent life held your view about the importance of its own values, then life in the universe would be doomed. The outcome of that view is that intelligent life greatly increases its acceptable ratio of (risk of destroying all life) to (chance of preserving its value system). (This is especially a problem when there are multiple intelligent beings with differing value systems, as there already are.) The fragility of life in the long term means we can't afford that luxury. It will be hard enough to avoid the death of the universe even if we all cooperate.
Publicly stating the view that you cannot value the existence of anything but agents implementing your own values (even after your death) makes cooperation very difficult. It's easier to cooperate with someone who is willing to compromise.
Someone will complain that I can't value anything but my own values, that it's a logical impossibility. But notice that I never said I value anything but my own values! The trick is that there's a difference between acting to maximize your values, and going meta and saying that you must value the presence of your values in others. There is no law of nature saying that a utility function U must place a high value on propagating U to other agents. In fact, there are many known utility functions U that would place a negative value on propagating their values to other agents! And it is an interesting question whether human utility functions are even represented in a manner capable of self-reference.
(The notion that your utility function says you must propagate your utility function only makes sense if you assume you will become a singleton.)
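As a toy illustration of that point (mine, not the commenter's): consider a utility function that actively penalizes copies of itself, written here with a hypothetical Agents(w) denoting the set of agents in world w:

$$U(w) \;=\; -\,\bigl|\{\, a \in \mathrm{Agents}(w) \;:\; a \text{ maximizes } U \,\}\bigr|$$

Under this U, the best achievable worlds contain as few U-maximizers as possible, so copying U into other agents strictly lowers its value; nothing in the formalism of utility functions forbids this.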
Even if you insist that you must propagate your utility function (or you plan on becoming a singleton), you should be able to reach a reflective equilibrium, realize that attempting to force your values on the universe will result in a dead universe, and accept a compromise with other value systems. Avoiding that compromise, not saving humanity, is, I think, the most plausible reason for the FAI+CEV program.
But, falling short of reaching that equilibrium, I would like it if all the CEVers would at least stop talking about Human Values as if they were a single atomic package. Suppose you model values as being like genes, and say that you want to transmit your values in the same way that an organism wants to transmit its genes. (An organism doesn't actually want this, by the way; and it is in exactly this way that the Yudkowskian attachment to values makes no sense. Insisting that you must propagate your values because that's what your values want is exactly like insisting that you must propagate your genes because that's what your genes want; it is forgetting that you are a computational system, and positing a value homunculus who looks at you with your own eyes to figure out what you should do.) That model would at least allow for the possibility of value-altruism in a way isomorphic to kin selection: sometimes sacrificing your complete value package in order to preserve a set of related value packages.
I wish I could save up all my downvotes for the year, and apply them all to this post (if I could without thus having to apply them to its author - I don't want to get personal about it); for this is the single most dangerous idea in the LessWrong memespace; the one thing rotten at the core of the entire FAI/CEV project as conceived of here.
Agreed.
Yes, this (value drift -> death of the universe) belief needs to be excised.