Does expected utility maximization destroy complex values?
An expected utility maximizer calculates the expected utility of the possible outcomes of the actions available to it. It is precommitted to choosing the action that yields the largest expected utility.
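As far as I understand it, the underlying formalism is just a probability-weighted sum over outcomes: the maximizer picks

$$a^{*} \;=\; \operatorname*{arg\,max}_{a \in A} \; \mathrm{EU}(a), \qquad \mathrm{EU}(a) \;=\; \sum_{o \in O} P(o \mid a)\, U(o),$$

where $A$ is the set of available actions, $O$ the set of possible outcomes, $P(o \mid a)$ the probability of outcome $o$ given action $a$, and $U$ the utility function over outcomes.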
But one unit of utility is not discriminable from another unit of utility. All a utility maximizer can do is maximize expected utility. What if it turns out that one of its complex values can be realized and optimized much more effectively than its other values, i.e. has the best cost-to-utility ratio? That value might come to outweigh all the others.
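Here is a toy version of what I mean, with numbers I am inventing purely for illustration: if utility is linear in each value and one value is simply cheaper per unit of utility, a maximizer splitting a fixed budget pours everything into that value.

```python
# Toy sketch of the worry above: a fixed budget split between two values,
# where value A yields more utility per unit of resources than value B.
# All numbers are made up.

BUDGET = 100
UTILITY_PER_UNIT = {"value_A": 5.0, "value_B": 1.0}  # A is "cheaper" to optimize

def total_utility(allocation):
    """Utility is assumed linear in the resources spent on each value."""
    return sum(UTILITY_PER_UNIT[v] * amount for v, amount in allocation.items())

# Compare every whole-unit split of the budget and keep the best one.
best = max(
    ({"value_A": a, "value_B": BUDGET - a} for a in range(BUDGET + 1)),
    key=total_utility,
)
print(best)  # {'value_A': 100, 'value_B': 0} -- the cheaper value takes everything
```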
How can this be countered? One possibility seems to be to change one's utility function and reassign utility in such a way as to outweigh that effect, but this will lead to inconsistency. Another way is to discount the value that threatens to outweigh all others, which will again lead to inconsistency.
This seems to suggest that subscribing to expected utility maximization means that (1) you swap your complex values for whatever terminal goal has the highest expected utility, and (2) your decision-making is eventually dominated by the narrow set of values that are the easiest to realize and promise the most utility.
Can someone please explain how I am wrong, or point me to a digestible explanation? Likewise, I would be pleased if someone could tell me what mathematical background is required to understand expected utility maximization formally.
Thank you!
I do understand you as well. But I don't see how some people here are able to make value statements about certain activities, e.g. that playing the lottery is stupid, or that it is rational to try to mitigate risks from AI. I am still clueless as to how this can be justified if utility isn't objectively grounded, e.g. in units of bodily sensation. If I am able to assign utility to world states arbitrarily, then I could just as well assign enough utility to universes where I survive the Singularity without doing anything to mitigate it that it outweighs everything else. In other words, I can do whatever I want and still be rational, as long as I am not epistemically confused about the consequences of my actions.
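As far as I can tell, the usual argument behind "playing the lottery is stupid" is an expected-value calculation with numbers like these (which I am making up), and it only bites if your utility is roughly linear in money:

$$\mathrm{EV}(\text{ticket}) \;=\; p_{\text{win}} \cdot V_{\text{jackpot}} \;-\; c_{\text{ticket}} \;\approx\; 10^{-8}\cdot \$10^{7} \;-\; \$1 \;=\; -\$0.90.$$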
If that is the case, why are there problems like Pascal's mugging or infinite ethics? If utility maximization does not lead to focusing on a few values that promise large amounts of utility, then there seem to be no such problems. Just because I would save my loved ones doesn't mean that I want to spend the whole day saving infinitely many people.
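To make the structure of Pascal's mugging explicit (again with numbers I am inventing): a single term with a tiny probability but an astronomically large utility dominates the expected-utility comparison, which is exactly the kind of domination I am worried about.

```python
# Sketch of the Pascal's mugging structure (all numbers invented): a tiny
# probability attached to an astronomically large utility still dominates
# the expected-utility comparison.

p_mugger_honest = 1e-20        # assumed credence that the mugger's threat is real
utility_at_stake = 1e30        # assumed (huge) utility at stake if it is real
cost_of_paying = 5.0           # assumed utility cost of handing over the wallet

eu_pay = p_mugger_honest * utility_at_stake - cost_of_paying   # 1e10 - 5
eu_refuse = 0.0

print(eu_pay > eu_refuse)  # True: the huge stake swamps the tiny probability
```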
So what.
There are much more important things than being rational, at least to me. The world, for one. If all you really want to do is sit at home all day basking in your own rationality, then there's little I can do to argue that you aren't rational, but I would hope there's more to you than that (if there isn't, feel free to tell me and we can end this discussion).
I'm not sure I can honestly say that I place absolutely no termina...