Squark comments on Fake Utility Functions - Less Wrong

22 Post author: Eliezer_Yudkowsky 06 December 2007 04:55PM


Comment author: Squark 17 March 2013 05:26:49PM 1 point

We have every reason to believe that, insofar as humans can be said to have values, there are lots of them — high Kolmogorov complexity. A human brain implements a thousand shards of desire.

However, that is not a reason why we shouldn't use low Kolmogorov complexity as a criterion for evaluating normative models (utility functions / moral philosophies), just as it serves as a criterion for evaluating descriptive models of the universe.

There are two possible points of view. One is that each of us has a "hardcoded", immutable utility function. In this case, arguments about moral values appear meaningless, since a person is unable to change her own values. The other is that our utility function is, in some sense, reprogrammable. In this case there is a space of admissible utility functions: the moral systems you can bring yourself to believe. I suggest using Occam's razor (Kolmogorov complexity minimization) as the fundamental principle for selecting the "correct" normative model from the admissible space.

A possible model of the admissible space is as follows. Suppose human psychology contains a number of different value systems ("shards of desire"), each with its own utility function, but the weights used for combining these functions are reprogrammable. It might be that in reality they are not even combined with well-defined weights, i.e. in a way preserving the VNM axioms. In this case, the "correct" normative model results from choosing weights such that the resulting convex linear combination is of minimal Kolmogorov complexity.
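Kolmogorov complexity is uncomputable, so any concrete version of this selection rule has to substitute a computable proxy. The sketch below is purely illustrative and not from the comment: the three "shard" utility functions are made up, and compressed length (via zlib) of the combined utility's behavior on sample outcomes stands in for Kolmogorov complexity. It enumerates weight vectors on a simplex grid and picks the convex combination whose behavior compresses smallest.

```python
import itertools
import zlib

# Hypothetical "shards of desire": each maps an outcome to a utility value.
# These particular functions are illustrative assumptions.
shards = [
    lambda x: x,        # e.g. a preference for more of some good
    lambda x: -x * x,   # e.g. an aversion to extremes
    lambda x: 1.0,      # e.g. indifference
]

def complexity_proxy(weights, outcomes):
    """Compressed length of the combined utility's values on sample
    outcomes -- a crude, computable stand-in for Kolmogorov complexity."""
    combined = [sum(w * u(x) for w, u in zip(weights, shards))
                for x in outcomes]
    data = ",".join(f"{v:.3f}" for v in combined).encode()
    return len(zlib.compress(data))

def simplex_grid(n, steps=4):
    """Enumerate weight vectors with non-negative entries summing to 1,
    so every candidate is a convex linear combination."""
    for combo in itertools.product(range(steps + 1), repeat=n):
        if sum(combo) == steps:
            yield tuple(c / steps for c in combo)

outcomes = [i / 10 for i in range(-10, 11)]
best = min(simplex_grid(len(shards)),
           key=lambda w: complexity_proxy(w, outcomes))
print(best)
```

Under this proxy, weight vectors producing simpler (more compressible) combined utilities win; with the shards above, degenerate combinations such as the constant shard tend to dominate, which illustrates why the choice of complexity measure does real work in this proposal.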