In the Wiki article on complexity of value, Eliezer wrote:
The thesis that human values have high Kolmogorov complexity - our preferences, the things we care about, don't compress down to one simple rule, or a few simple rules.
[...]
Thou Art Godshatter describes the evolutionary psychology behind the complexity of human values - how they got to be complex, and why, given that origin, there is no reason in hindsight to expect them to be simple.
But in light of Yvain's recent series of posts (i.e., if we consider our "actual" values to be the values we would endorse in reflective equilibrium, instead of our current apparent values), I don't see any particular reason, whether from evolutionary psychology or elsewhere, that they must be complex either. Most of our apparent values (which admittedly are complex) could easily be mere behavior, which we would discard after sufficient reflection.
For those who might wish to defend the complexity-of-value thesis: what reasons do you have for thinking that human value is complex? Is it an intuition that we should translate as many of our behaviors into preferences as possible? If other people do not share that intuition, or perhaps even have a strong intuition that values should be simple (and are therefore more willing to discard things on the fuzzy border between behaviors and values), could they think that their values are simple, without being wrong?
Retreating further along Eliezer's line of reasoning to find the point where you start to disagree: how about AIs that don't take over the world? For example, I want an AI that I can ask for a cheeseburger, and it will produce a cheeseburger for me while respecting my implied wishes not to burn the world with molecular nanotech or kill the neighbor's dog for meat. Do you agree that such a device needs to have lots of specific knowledge about humans, and not just about cheeseburgers? If so, then how is the goal of solving the world's problems (saving kids in Africa, stopping unfriendly AIs, etc.) relevantly different from the goal of making a cheeseburger?
Cousin_it and I had an offline chat. To recap my arguments: