In the Wiki article on complexity of value, Eliezer wrote:
The thesis that human values have high Kolmogorov complexity - our preferences, the things we care about, don't compress down to one simple rule, or a few simple rules.
[...]
Thou Art Godshatter describes the evolutionary psychology behind the complexity of human values - how they got to be complex, and why, given that origin, there is no reason in hindsight to expect them to be simple.
But in light of Yvain's recent series of posts (that is, if we take our "actual" values to be the values we would endorse in reflective equilibrium, rather than our current apparent values), I don't see any particular reason, whether from evolutionary psychology or elsewhere, that they must be complex either. Most of our apparent values (which admittedly are complex) could easily be mere behaviors, which we would discard after sufficient reflection.
For those who might wish to defend the complexity-of-value thesis, what reasons do you have for thinking that human value is complex? Is it from an intuition that we should translate as many of our behaviors into preferences as possible? If other people do not share that intuition, or perhaps even have a strong intuition that values should be simple (and are therefore more willing to discard things on the fuzzy border between behaviors and values), could they be right in thinking that their values are simple?
If I think the correct answer to our confusion will ultimately turn out to be something complex (Godshatter-like), then I can rule out any plan that eventually calls for hard-coding such an answer into an AI. This seems to be Eliezer's argument (or one of his main arguments) for implementing CEV.
On the other hand, if I think the correct answer may turn out to be simple, even if I don't know what it is now, then there's a chance I can work out the answer directly in the next few decades and hard-code it into an AI. Something like CEV is no longer the obvious best approach.
(Personally I still prefer a "meta-ethical" or "meta-philosophical" approach, but we would need a different argument for it besides "thou art godshatter"/"complexity of value".)