Does expected utility maximization destroy complex values?
An expected utility maximizer calculates the expected utility of the outcomes of its alternative actions. It is precommitted to choosing the action whose outcome has the largest expected utility; consequently it always chooses the action that yields the largest expected utility.
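The procedure described above can be sketched in a few lines. This is a minimal illustration, not anyone's proposed implementation; the actions, probabilities, and utilities are made up for the example.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Each action leads to several possible outcomes with given probabilities.
# (Hypothetical numbers, chosen only to make the comparison concrete.)
actions = {
    "safe bet":  [(1.0, 5.0)],               # certain payoff of 5
    "gamble":    [(0.5, 12.0), (0.5, 0.0)],  # expected utility 6
    "long shot": [(0.1, 40.0), (0.9, 0.0)],  # expected utility 4
}

# The maximizer simply picks the action with the largest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # -> "gamble", since 6 > 5 > 4
```

Note that the maximizer is indifferent to *which* values produced the utility numbers; it only compares the totals, which is what the question below turns on.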
But one unit of utility is indistinguishable from another unit of utility. All a utility maximizer can do is maximize expected utility. What if it turns out that one of its complex values can be realized and optimized much more effectively than its other values, i.e. has the best cost-to-value ratio? That single value might then outweigh all the others.
How can this be countered? One possibility seems to be to change one's utility function, reassigning utility in such a way as to outweigh that effect, but this leads to inconsistency. Another is to discount the value that threatens to outweigh all the others, which again leads to inconsistency.
This seems to suggest that subscribing to expected utility maximization means that (1) you swap your complex values for a single terminal goal with the highest expected utility, and (2) your decision-making is eventually dominated by the narrow set of values that are easiest to realize and promise the most utility.
Can someone please explain where I am going wrong, or point me to a digestible explanation? I would also appreciate it if someone could tell me what mathematical background is required to understand expected utility maximization formally.
Thank you!
The model people seem to have in mind when making your argument is that utility has to be a linear function of our values: e.g. if I value pleasure, kittens, and mathematical knowledge, the way to express that in a utility function is something like 100 x pleasure + 50 x kittens + 25 x math. Obviously, if you then discover that a kitten costs $1 while a unit of pleasure costs $10 and a unit of math costs $20, kittens yield by far the most utility per dollar, so you'd just keep maximizing kittens forever to the exclusion of everything else, which is a problem.
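Here is a sketch of that failure mode, using the weights and prices from the example above (the budget of $1000 is an arbitrary choice for illustration). The key property of a linear utility function is that each good's marginal utility per dollar is constant, so the same good wins every round:

```python
# Linear utility: U = 100*pleasure + 50*kittens + 25*math
weights = {"pleasure": 100, "kittens": 50, "math": 25}
prices  = {"pleasure": 10,  "kittens": 1,  "math": 20}

def marginal_utility_per_dollar(good):
    # Under a linear utility function this ratio never changes,
    # no matter how much of the good we already have.
    return weights[good] / prices[good]

budget = 1000  # hypothetical budget for the illustration
quantities = {good: 0 for good in weights}
while budget >= min(prices.values()):
    best = max(weights, key=marginal_utility_per_dollar)
    if prices[best] > budget:
        break
    quantities[best] += 1
    budget -= prices[best]

print(quantities)  # every dollar goes to kittens; pleasure and math stay at 0
```

Since kittens give 50 utility per dollar versus 10 for pleasure and 1.25 for math, the greedy loop never buys anything else.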
Usually (even outside the context of utility functions) the way we formulate the sentiment that each one of these matters is by taking products: e.g. pleasure^3 x kittens^2 x math (the exponents let us weight the different values). In this case, while in the short term we might discover that kittens are the most cost-efficient route to higher utility, this does not continue to hold: the marginal value of a good falls as we accumulate more of it. If we have 100 kittens and only 2 maths, 1 additional kitten increases utility by only about 2%, while 1 additional math increases it by 50%.
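The 2% vs. 50% comparison can be checked directly. The starting quantities of 100 kittens and 2 maths are from the example above; the 10 units of pleasure is an arbitrary filler value, and cancels out of the ratios anyway:

```python
# Product utility: U = pleasure^3 * kittens^2 * math
def utility(pleasure, kittens, math):
    return pleasure**3 * kittens**2 * math

base = utility(10, 100, 2)

# Relative gain from one more unit of each good.
kitten_gain = utility(10, 101, 2) / base - 1  # (101/100)^2 - 1, about 2%
math_gain   = utility(10, 100, 3) / base - 1  # 3/2 - 1, exactly 50%

print(f"+1 kitten: {kitten_gain:.1%}, +1 math: {math_gain:.1%}")
```

Because each factor multiplies the others, the scarcest value always offers the largest proportional gain, so the maximizer keeps rebalancing rather than fixating on whichever value is cheapest.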
I agree with your first paragraph, but I think your second is just making the same mistake again. Why should our utility function be a product any more than it should be a sum? Why should it be mathematically elegant at all when nothing else about humans is?