It seems that if we could ever define the difference between human beliefs and values, we could program a safe Oracle by requiring it to maximise the accuracy of human beliefs on a question while keeping human values fixed (or changing very little). Plus a whole load of other constraints, as usual, but that might work for a boxed Oracle answering a single question.
This is a reason to suspect it will not be easy to distinguish human beliefs and values ^_^
I'd like to add some values which I see as not so static, and which are probably not so much questions of morality:
Privacy and freedom vs. security and power.
Family, society, tradition.
Individual equality (disparities of wealth, the right to work, ...).
Intellectual property (the right to own?).