Does expected utility maximization destroy complex values?
An expected utility maximizer calculates the expected utility of the possible outcomes of its alternative actions, and it is precommitted to choosing the action whose expected utility is largest.
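As I understand it, the formal statement is simply

EU(a) = \sum_{o} P(o \mid a)\, U(o), \qquad a^{*} = \arg\max_{a} EU(a),

where a ranges over the available actions, o over possible outcomes, U is the utility function, and P(o | a) is the probability of outcome o given action a. (Please correct me if I have this wrong; that is partly what I am asking about.)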
But one unit of utility cannot be distinguished from another unit of utility. All a utility maximizer can do is maximize expected utility. What if one of its complex values turns out to be much more effectively realized and optimized than its other values, i.e. to have the best cost-value ratio? That value might then outweigh all the others.
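To make the worry concrete, here is a toy calculation with made-up numbers and a linear, additive utility function (purely my own illustration):

```python
# Toy illustration with made-up numbers: under a linear, additive utility and a
# fixed resource budget, an expected utility maximizer puts everything into the
# single value with the best utility-per-cost ratio (a "corner solution").

budget = 100.0

# Utility gained per unit of resource spent on each value (purely hypothetical).
utility_per_unit = {
    "friendship": 1.0,
    "art": 0.8,
    "cheap_value": 5.0,  # one value happens to be far cheaper to optimize
}

best = max(utility_per_unit, key=utility_per_unit.get)

allocation = {value: 0.0 for value in utility_per_unit}
allocation[best] = budget  # linearity means the whole budget goes here

total = sum(utility_per_unit[v] * allocation[v] for v in allocation)
print(allocation)  # {'friendship': 0.0, 'art': 0.0, 'cheap_value': 100.0}
print(total)       # 500.0
```

Under linearity the optimum is a corner solution: the entire budget goes to whichever value happens to be cheapest to optimize, and the other values receive nothing.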
How can this be countered? One possibility seems to be changing one's utility function, reassigning utility in such a way as to cancel out that effect, but this leads to inconsistency. Another is to discount the value that threatens to outweigh all the others, which again leads to inconsistency.
This seems to suggest that subscribing to expected utility maximization means that 1.) you swap your complex values for whichever single terminal goal has the highest expected utility, and 2.) your decision-making is eventually dominated by the narrow set of values that are easiest to realize and promise the most utility.
Can someone please explain how I am wrong, or point me to a digestible explanation? I would also be pleased if someone could tell me what mathematical background is required to understand expected utility maximization formally.
Thank you!
So what.
There are much more important things than being rational, at least to me. The world, for one. If all you really want to do is sit at home all day basking in your own rationality, then there's little I can do to argue that you aren't, but I would hope there's more to you than that (if there isn't, feel free to tell me and we can end this discussion).
I'm not sure I can honestly say that I place absolutely no terminal value on rationality, but most of the reason I am pursuing it is its supposed usefulness in achieving everything else.
When we say playing the lottery is stupid, we assume that you don't want to lose money, and when we say mitigating existential risk is rational we assume that you don't want the world to end. Generally humans aren't so very different that these assumptions aren't mostly justified.
Some people take this very approach; they call it 'bounded utility'.
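Roughly, the idea is that each value's contribution to total utility saturates rather than growing without bound; the particular saturating form and the numbers below are just my own illustration, not anything canonical:

```python
import math

# Sketch of "bounded utility": each value's contribution saturates instead of
# growing linearly, so a cheap-to-optimize value stops paying off once it is
# mostly satisfied. The form 1 - exp(-spend/scale) is one illustrative choice;
# the scales below are made up.

def bounded_term(spend, scale):
    return 1.0 - math.exp(-spend / scale)

def total_utility(allocation, scales):
    return sum(bounded_term(allocation[v], scales[v]) for v in allocation)

scales = {"friendship": 50.0, "art": 60.0, "cheap_value": 10.0}

# Dumping the whole budget into the cheapest value is no longer optimal:
print(total_utility({"friendship": 0, "art": 0, "cheap_value": 100}, scales))   # ~1.00
print(total_utility({"friendship": 45, "art": 45, "cheap_value": 10}, scales))  # ~1.75
```

With saturating terms the all-in allocation loses to a spread-out one, so no single cheap value can swallow the whole budget.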
I don't agree with them, because it seems to me that along the dimension of human life my utility function really is linear, or at least I would like it to be. But that's just me.
The general principle I'm trying to get at is to find what you actually want, as opposed to what is convenient, mathematically elegant or philosophically defensible, and make that your utility function. If you do this then expected utility should never lead you astray.
What I am trying to fathom is the difference between 1.) assigning utility arbitrarily (no objective grounding), 2.) grounding utility in units of bodily sensations, and 3.) grounding utility in units of human well-being (i.e. the number of conscious beings whose lives are worth living).