The Red Queen hypothesis suggests that humans are probably just the latest step in a long sequence of rapid (on an evolutionary timescale) value changes. So is Coherent Extrapolated Volition (CEV) intended to
1) extrapolate all the future co-evolutionary battles humans would fight and take the values of the terminal species as our CEV, or is it intended somehow to
2) freeze the values humans hold at the point in time we develop FAI, and build a cocoon around humanity that lets it keep this (nearly) arbitrarily chosen point in its evolution forever?
If it is 1), the AI doesn't seem to have much of a job to do: presumably interfere against existential risks to humanity and its successor species, and perhaps keep extremely reliable stocks for repopulation if humanity or a successor still manages to kill itself. In a less extreme interpretation, the FAI does whatever is required to keep humanity and its successors the pinnacle species, stealing adaptations from any unrelated species that actually manages to threaten us, so we get something like 1'): extrapolate to a future where the pinnacle species is always a descendant of ours.
If 2), it would seem the FAI could simply build a sim that freezes in place both the evolutionary pressures that brought us to this point and our own current state, and then run that sim forever: the sim removes genetic mutation and perhaps actively rebalances against whatever natural selection is currently going on.
We could even have BOTH futures: those who prefer 2) go live in the sim they always believed was indistinguishable from reality anyway, while those who prefer 1) stay in the real world and play their part in evolving whatever comes next. Indeed, the sim of 2) might also serve as storage/insurance against existential threats, a source from which human history can be restarted from its state at year 0 FAI whenever needed.
Does CEV crash into the Red Queen hypothesis in interesting ways? Could a human value be to roll the dice on our own values in the hope of producing an even more effective species?
Neither. CEV is supposed to look at what humanity would want if we were smarter, faster, and more the people we wished we were. It finds the endpoint of how our values would change if that change were controlled by ourselves, not by the blind idiot god.