Stuart, is it really your implicit axiom that human values are static and fixed?
(Were they fixed historically? Is humankind mature now? Is humankind homogeneous in its values?)
In the space of all possible values, human values have occupied a very small region, with the main historical change being who gets counted as a moral agent (the consequences of small moral changes can be huge, but the changes themselves don't seem large in an absolute sense).
Or, if you prefer: I think it's possible that AI value changes will range so widely that human values can essentially be seen as static in comparison.
It seems that if we could ever define the difference between human beliefs and values, we could program a safe Oracle by requiring it to maximise the accuracy of human beliefs on a question while keeping human values fixed (or changing very little). Plus a whole load of other constraints, as usual, but that might work for a boxed Oracle answering a single question.
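A minimal sketch of that objective, assuming we actually had such a belief/value decomposition (the symbols $B_t$, $V_t$, $\mathrm{Acc}$, $d$, and $\epsilon$ are mine, purely for illustration): the Oracle picks the answer $a$ that maximises the accuracy of the human's post-answer beliefs about the question $q$, subject to their values barely moving:

$$a^* = \arg\max_a \, \mathrm{Acc}\!\big(B_{t+1}(a),\, q\big) \quad \text{subject to} \quad d\!\big(V_{t+1}(a),\, V_t\big) \le \epsilon,$$

where $B_{t+1}(a)$ and $V_{t+1}(a)$ are the human's predicted beliefs and values after hearing answer $a$, $\mathrm{Acc}$ scores beliefs against the true answer (e.g. a negative log loss), $d$ is some metric on value-space, and $\epsilon$ is small. Note that all the difficulty is hidden in defining $B$ and $V$ as separate objects, and in choosing $d$.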
This is a reason to suspect it will not be easy to distinguish human beliefs and values ^_^