Does expected utility maximization destroy complex values?
An expected utility maximizer calculates the expected utility of the possible outcomes of its alternative actions. It is precommitted to the action whose outcomes have the largest expected utility; consequently, it always chooses the action that yields the largest expected utility.
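If I have the formalism right (this is just my attempt at writing it down, so please correct me), an expected utility maximizer with a set of actions A, outcomes O, beliefs P, and a utility function U over outcomes picks

```latex
EU(a) = \sum_{o \in O} P(o \mid a)\, U(o), \qquad a^{*} = \arg\max_{a \in A} EU(a)
```

where P(o | a) is the probability of outcome o given action a.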
But one unit of utility is not distinguishable from another unit of utility. All a utility maximizer can do is maximize expected utility. What if it turns out that one of its complex values can be realized and optimized much more effectively than its other values, i.e. has the best cost-to-value ratio? That value might turn out to outweigh all other values.
How can this be countered? One possibility seems to be changing one's utility function and reassigning utility in such a way as to cancel that effect. But this leads to inconsistency. Another way is to discount the value that threatens to outweigh all the others, which again leads to inconsistency.
This seems to suggest that subscribing to expected utility maximization means that 1.) you swap your complex values for whatever terminal goal has the highest expected utility, and 2.) your decision-making is eventually dominated by the narrow set of values that are easiest to realize and promise the most utility.
Can someone please explain where I am wrong, or point me to a digestible explanation? I would also be grateful if someone could tell me what mathematical background is required to understand expected utility maximization formally.
Thank you!
What about the human desire for positive bodily sensations? Given what we currently know about physics, it should be much more efficient to cause them unconditionally than to realize them as a result of some actual achievement. Humans already value such fictitious sensations; consider movies or daydreams. So the value of such sensations is non-negligible. If we can create them cheaply enough to outweigh the utility we assign to their natural realization, then isn't it rational to choose to indulge in unconditional satisfaction?
If even one of your values can be realized an unlimited number of times, then it only needs to yield one unit of utility per realization to outweigh all your other values, as long as each realization is cost-effective enough. As far as I know, the utility from realizing that one value is no different from the utility you can earn from any of your other values; all that counts is the amount of utility you expect.
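Here is a toy sketch of the worry (my own illustration with made-up numbers, assuming purely linear utility in each value): if every value contributes utility linearly and one of them is much cheaper per unit of utility, every marginal unit of resources goes to the cheap value.

```python
# Toy model (my own, not anyone's actual decision theory): a fixed resource
# budget is split across values, each contributing utility linearly.
# Each entry is (cost per realization, utility per realization).
values = {
    "cheap_pleasure": (1.0, 1.0),
    "friendship": (10.0, 3.0),
    "art": (8.0, 2.0),
}

budget = 100.0

# With linear utility, the optimum is trivial: spend everything on the value
# with the best utility-per-cost ratio.
best = max(values, key=lambda v: values[v][1] / values[v][0])
realizations = budget / values[best][0]
print(best, realizations * values[best][1])  # cheap_pleasure 100.0
```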
I do understand your argument, but I just explained why this need not be the case. My utility function does not have to assign a constant value to pleasant fictitious experiences (PFEs). In fact it does not need to explicitly assign any value to PFEs at all, only to outcomes. It may be possible to infer from those outcome valuations a single unique value attached to PFEs, but there is no reason why this has to be the case.
For instance, maybe my value for PFEs can't be realized an unlimited number of times, because the more PFEs I have and the fewer real experiences I have, the more value real experiences carry relative to further PFEs.
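To make that concrete, here is a small sketch (again my own toy numbers, assuming a concave, diminishing-returns utility over PFEs and a linear one over real experiences): once the marginal utility of another PFE drops below that of a real experience, the optimal allocation is a mix rather than being dominated by PFEs.

```python
import math

# Toy model (my own illustration): utility is defined over whole outcomes,
# summarized here by how much of the budget goes to PFEs vs. real experiences.
# PFE utility is concave (diminishing returns); real-experience utility is linear.
def utility(pfe_units: float, real_units: float) -> float:
    return 10.0 * math.log(1.0 + pfe_units) + 2.0 * real_units

budget = 100.0

# Brute-force the best split of the budget; with diminishing returns on PFEs,
# the optimum is an interior mix, not "spend everything on PFEs".
best_split = max(
    (i * 0.1 for i in range(int(budget * 10) + 1)),
    key=lambda p: utility(p, budget - p),
)
print(best_split, utility(best_split, budget - best_split))  # ~4.0 units on PFEs
```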