Does expected utility maximization destroy complex values?
An expected utility maximizer calculates the expected utility of the possible outcomes of each alternative action. It is precommitted to picking whatever comes out with the largest expected utility; consequently it always chooses the action that yields the largest expected utility.
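For reference, this is just the standard textbook formulation of what such an agent computes (nothing beyond the usual definition is assumed here):

```latex
% Expected utility of an action a over possible outcomes o,
% and the maximizer's choice rule:
\[
  EU(a) \;=\; \sum_{o \in O} P(o \mid a)\, U(o),
  \qquad
  a^{*} \;=\; \arg\max_{a \in A} EU(a)
\]
% A: the available actions, O: the possible outcomes,
% P(o | a): probability of outcome o given action a, U: the utility function.
```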
But one unit of utility is indistinguishable from another; all a utility maximizer can do is maximize expected utility. What if one of its complex values turns out to be far more effectively realized and optimized than the others, i.e. has the best cost-value ratio? That single value might then outweigh all the rest.
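A toy illustration of that worry (a sketch with made-up numbers and value names, not a claim about any real agent): if utility is additive over the values and one value is much cheaper to realize per unit of utility, a straightforward maximizer pours the entire budget into it.

```python
# Hypothetical toy model: an agent with a fixed resource budget and an
# additive utility over three values. Each value has a cost (in resources)
# per unit of realized utility. A naive expected-utility maximizer
# allocates the whole budget to whichever value is cheapest to optimize.

values = {
    "friendship": 5.0,   # cost per unit of utility
    "art":        3.0,
    "paperclips": 0.1,   # accidentally very cheap to optimize
}
budget = 100.0

def utility(allocation):
    """Total utility of a resource allocation under the additive model."""
    return sum(spent / values[name] for name, spent in allocation.items())

# Greedy maximization: spend everything on the best cost-value ratio.
best = min(values, key=values.get)
allocation = {name: (budget if name == best else 0.0) for name in values}

print(allocation)           # {'friendship': 0.0, 'art': 0.0, 'paperclips': 100.0}
print(utility(allocation))  # 1000.0 -- one value dominates all the others
```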
How can this be countered? One possibility seems to be changing one's utility function and reassigning utility in such a way as to counteract that effect. But this leads to inconsistency. Another is to discount the value that threatens to outweigh all the others, which again leads to inconsistency.
This seems to suggest that subscribing to expected utility maximization means that (1) you swap your complex values for whatever terminal goal has the highest expected utility, and (2) your decision-making is eventually dominated by the narrow set of values that are easiest to realize and promise the most utility.
Can someone please explain where I am going wrong, or point me to a digestible explanation? I would also appreciate it if someone could tell me what mathematical background is required to understand expected utility maximization formally.
Thank you!
If the values are faithfully captured in the utility function, this is no problem. It is exactly what you would want.
I think the danger is that the utility maximizer optimizes for (possibly only slightly) wrong values. The complex value space might accidentally admit such an optimization because, e.g., some obscure value that was previously overlooked got left out.
A utility optimizer should not optimize faster than errors in the value function can be found, i.e. faster than humans can give collective feedback on it.
That is one reason companies (mentioned in your comment) sometimes produce goods that are in high demand until the secondary effects are noticed, which may then take some time to fix, e.g. by legislation.
I don't think mere companies need to be slowed down to fix this (although one might consider and model this). But more powerful utility maximizers should definitely time-smooth their optimization process.
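To make "time-smooth" a bit more concrete, here is a minimal hypothetical sketch (the step cap, probabilities, and value names are all invented for illustration): the optimizer takes only a bounded number of steps per round, then pauses so that feedback can correct the value function before any further optimization.

```python
import random

# Hypothetical sketch of time-smoothed optimization: the optimizer takes at
# most STEPS_PER_ROUND steps toward its current values, then waits for
# (simulated) collective feedback that may correct the value function.

STEPS_PER_ROUND = 10   # cap on optimization pressure between feedback rounds

def get_feedback(value_weights):
    """Stand-in for collective human feedback: occasionally notices a
    previously overlooked value and returns corrected weights."""
    if random.random() < 0.3:
        value_weights = dict(value_weights)
        value_weights["overlooked_value"] = value_weights.get("overlooked_value", 0.0) + 1.0
    return value_weights

def optimize_step(state, value_weights):
    """Stand-in for one unit of optimization toward the current values."""
    return state + sum(value_weights.values())

state = 0.0
value_weights = {"stated_value": 1.0}

for _round in range(5):
    for _ in range(STEPS_PER_ROUND):             # bounded optimization...
        state = optimize_step(state, value_weights)
    value_weights = get_feedback(value_weights)  # ...then wait for correction

print(state, value_weights)
```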