In the Wiki article on complexity of value, Eliezer wrote:
The thesis that human values have high Kolmogorov complexity - our preferences, the things we care about, don't compress down to one simple rule, or a few simple rules.
[...]
Thou Art Godshatter describes the evolutionary psychology behind the complexity of human values - how they got to be complex, and why, given that origin, there is no reason in hindsight to expect them to be simple.
But in light of Yvain's recent series of posts (that is, if we take our "actual" values to be the values we would endorse in reflective equilibrium, rather than our current apparent values), I don't see any particular reason, whether from evolutionary psychology or elsewhere, why they must be complex either. Most of our apparent values (which admittedly are complex) could easily turn out to be mere behaviors, which we would discard after sufficient reflection.
For those who might wish to defend the complexity-of-value thesis: what reasons do you have for thinking that human value is complex? Is it an intuition that we should translate as many of our behaviors into preferences as possible? If other people lack that intuition, or even have a strong intuition that values should be simple (and are therefore more willing to discard things on the fuzzy border between behaviors and values), could they conclude that their values are simple, without being wrong?
I meant that there has been little progress in the sense of generating theories precise enough to offer concrete recommendations, things that might be coded into an AI: for example, formal criteria for identifying preferences, pains, and pleasures in the world (beyond pointing to existing humans and animals, which doesn't pin down the content of utilitronium), or a clear way to value different infinitely vast worlds (with all the rearrangement issues discussed in Bostrom's "Infinitarian Challenge" paper). This isn't just a matter of persistent moral disagreement; it's the lack of any comprehensive candidate theory that actually tells you what to do in particular situations, rather than leaving massive lacunae to be filled in by consideration of individual cases and local intuitions.
This seems to me more about the "C" than the "EV." I think such a utilitarian should still be strongly concerned with having at least their reflective equilibrium extrapolated. Even a little uncertainty along many dimensions means probably going wrong somewhere, and reasonable uncertainty about several of these things (e.g. infinite worlds and their implications for probability and ethics) is in fact large.
One could argue that until recently there has been little motivation amongst utilitarians to formulate such precise theories, so you can't really count all of the past 60 years as evidence against this being ...