Does expected utility maximization destroy complex values?
An expected utility maximizer calculates the expected utility of the various outcomes of its alternative actions. It is precommitted to the outcome with the largest expected utility, and consequently chooses the action that yields it.
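As far as I understand it, the standard formalization scores each action $a$ by the probability-weighted utility of its possible outcomes,

$$\mathrm{EU}(a) = \sum_{o} P(o \mid a)\, U(o),$$

and the agent picks the action with the largest $\mathrm{EU}(a)$.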
But one unit of utility is indistinguishable from another; all a utility maximizer can do is maximize expected utility. What if it turns out that one of its complex values can be realized and optimized much more effectively than its other values, i.e. has the best cost-to-value ratio? That value might then outweigh all the others.
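To make the worry concrete, here is a toy sketch (my own illustration; the function names and numbers are made up, and it assumes utility is simply linear in each value): if the values differ in cost per util, a budget-constrained maximizer ends up spending everything on the single cheapest value.

```python
# Toy model: an agent with a fixed budget allocates resources among several
# "values", each converting resources into utility at a constant rate.
# With linear utility, the optimum is a corner solution: spend the whole
# budget on whichever value has the best utility-per-cost ratio.

def allocate_budget(values, budget):
    """values: dict mapping name -> (utility_per_unit, cost_per_unit)."""
    # Rank values by utility gained per unit of cost and pick the best one.
    best = max(values, key=lambda v: values[v][0] / values[v][1])
    utility_per_unit, cost_per_unit = values[best]
    units = budget / cost_per_unit
    return {best: units * utility_per_unit}

# Example: "paperclips" are cheap to optimize, "friendship" and "art" are not.
values = {
    "friendship": (10.0, 5.0),   # 2 utils per unit of cost
    "art":        (8.0, 4.0),    # 2 utils per unit of cost
    "paperclips": (1.0, 0.1),    # 10 utils per unit of cost
}
print(allocate_budget(values, budget=100.0))
# -> {'paperclips': 1000.0}: the entire budget goes to the single cheapest value.
```

Of course this behavior hinges on the linear-utility assumption baked into the toy model; I only mean it to make the worry precise, not to claim that a real utility function must look like this.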
How can this be countered? One possibility seems to be to change one's utility function and reassign utility in a way that outweighs the effect, but this leads to inconsistency. Another is to discount the value that threatens to outweigh all the others, which again leads to inconsistency.
This seems to suggest that subscribing to expected utility maximization means that (1) you swap your complex values for whatever terminal goal has the highest expected utility, and (2) your decision-making is eventually dominated by a narrow set of values that are the easiest to realize and promise the most utility.
Can someone please explain how I am wrong or point me to some digestible explanation? Likewise I would be pleased if someone could tell me what mathematical background is required to understand expected utility maximization formally.
Thank you!
Yes, but take companies, for example. Companies are economic entities that resemble rational utility maximizers much more closely than humans do. Most companies specialize in producing one product or a narrow set of products. How can this be explained, given that companies are controlled by humans, for humans? It seems that adopting profit maximization leads to specialization, which leads to simplistic values.
The plethora of values mainly seems to be a product of human culture, a meme complex that is the effect of a society of irrational agents. Evolution only equipped us with a few drives. Likewise, rational utility maximization does not favor the treatment of rare diseases in cute kittens. Such values are only pursued because of agents who do not subscribe to rational utility maximization.
Can you imagine a society of perfectly rational utility maximizers that, among other things, play World of Warcraft, play lotteries and save frogs from being run over by cars?
Specialization doesn't lead to simple values. You trade your extra goods for goods you don't produce.
Also, since it's easier to optimize for simpler values, we should expect to see better maximizers with simple values than with complex ones.