If you are trying to calculate the value of a choice using a bounded utility function, how can you be sure whether you are close or far from the bound, whatever the bound is? How do you account for uncertainty about how much utility you already have? Does this question actually make sense?
Recently I have come across arguments against using a bounded utility function to avoid Pascal’s Mugging and similar “fanaticism” problems. Two examples, Section 6 of Hayden Wilkinson’s paper “In Defense of Fanaticism” and the Less Wrong post “Pascal's Mugging for bounded utility functions”, make a similar novel argument against bounded utility functions. If I understand them correctly, they argue that bounded utility functions cannot work because it is impossible to know how much utility one already has. This means one cannot know how close to the bound one's utility is, and therefore how much to discount future utility by.
Wilkinson’s paper uses the example of someone with an altruistic bounded utility function that is essentially total utilitarianism. They want to increase the total utility of the universe and, because they have a bounded utility function, the value of additional total utility decreases as it approaches some upper bound. If I understand his argument correctly, he is saying that because this agent has a bounded utility function, they cannot calculate how good an action is without knowing lots of details about past events that their actions cannot affect. Otherwise, how will they know how close they are to the upper bound?
Wilkinson analogizes this to the “Egyptology” objection to average utilitarianism, under which an average utilitarian is compelled to study how happy the Ancient Egyptians were before having children; otherwise, they cannot know whether having children increases or decreases average utility. Similarly, Wilkinson argues that a total utilitarian with a bounded utility function is compelled to study Ancient Egypt in order to know how close to the bound the total utility of the world is. This seems implausible: even if information about Ancient Egypt were easy to come by, it seems counterintuitive that it would be relevant to what you should do today.
“Pascal's Mugging for bounded utility functions” by Benya introduces a related problem. In this scenario, a person with a bounded utility function has lived an immensely long time in a vast utopia. Because of this, their utility level is very close to the upper bound of their function. Pascal’s Mugger approaches them and tells them that all their memories of this utopia are fake and that they have lived for a much shorter time than they believed. The mugger then offers to massively extend their lifespan for $5. The idea is that by creating uncertainty about whether the person's utility is approaching the bound or not, the mugger can get around the bounded utility function that normally protects against mugging.
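To make the mechanism concrete for myself, here is a toy sketch in Python. The tanh curve and every number in it are my own assumptions, not anything from Benya's post; the point is just to show how uncertainty about one's position on the curve reinflates the value of the mugger's offer:

```python
import math

def u(x, scale=10.0):
    """A bounded utility function; tanh is an arbitrary choice, bounded in (-1, 1)."""
    return math.tanh(x / scale)

# What the agent believes they have accumulated: so large that u(apparent)
# is essentially at the upper bound.
apparent = 100.0
# The mugger's claim: the memories are fake and the true total is tiny.
claimed = 1.0
extension = 50.0  # value of the promised lifespan extension (made up)
cost = 0.01       # stand-in for the $5, on the same arbitrary scale

for p_fake in [1e-2, 1e-4, 1e-6]:
    # If the memories are fake, the agent is far from the bound and the
    # extension buys a lot of utility; if they are real, it buys almost none.
    gain_if_fake = u(claimed + extension - cost) - u(claimed)
    gain_if_real = u(apparent + extension - cost) - u(apparent)
    expected_gain = p_fake * gain_if_fake + (1 - p_fake) * gain_if_real
    print(f"p(memories fake) = {p_fake:g}: expected gain from paying = {expected_gain:.2e}")
```

In this toy version the expected gain from paying scales roughly linearly with the probability that the memories are fake, instead of being crushed by the bound, which is exactly the loophole the mugger is exploiting.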
One way around this dilemma that seems attractive to me is to use some version of Marc Colyvan’s Relative Expected Value theory. This theory, when looking at two options, compares the differences in utility between them rather than the total utility of each option. This would seem to defeat the Egyptology objection: if you cannot change how much utility the events in Ancient Egypt produced, then you do not factor them into your calculations when considering how close you are to the bound. Similarly, when facing Pascal’s Mugger in the far future, the person does not need to include all their past utility when considering how to respond. There may be other approaches like this that discount utility unaffected by either choice; I am not sure what the best formulation would be.
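Here is one way this could be formalized; I am not claiming this is exactly Colyvan's theory, just the reading I have in mind, where the bounded function is applied to the *change* each option causes and background utility never enters:

```python
import math

def u(delta, scale=10.0):
    """Bounded utility applied to changes, not totals (tanh is an assumed form)."""
    return math.tanh(delta / scale)

def relative_value(option_a, option_b):
    """Score each option as a list of (probability, delta) outcomes by the
    expected bounded utility of its deltas, then compare the two scores.
    Background utility (e.g., Ancient Egypt) never appears anywhere."""
    ev = lambda opt: sum(p * u(d) for p, d in opt)
    return ev(option_a) - ev(option_b)

mugger = [(1e-9, 1e12), (1 - 1e-9, 0.0)]  # tiny chance of a vast payoff
keep_5 = [(1.0, 5.0)]                      # just keep your $5 (toy delta)
print(relative_value(keep_5, mugger))      # > 0: refuse the mugger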
However, I am worried that this approach might create problems with transitivity, or change the ranking of options based on how they are bundled. For example, an agent with a bounded utility function using Relative Expected Value theory might accept each of 1,000 separate offers to play a lottery for $x, yet refuse to pay a thousand times as much to enter a single lottery for $1,000x. Am I mistaken, or is there a way to calibrate or refine this theory to avoid this transitivity problem?
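Here is a toy version of that reversal, using the same delta-based valuation as above (the numbers and the tanh curve are my own assumptions, and I am using the single scaled-up lottery as a stand-in for the bundle of 1,000 plays):

```python
import math

def u(delta, scale=10.0):
    """Bounded utility of a change, per the delta-based rule above (tanh assumed)."""
    return math.tanh(delta / scale)

def value(lottery):
    """Expected bounded utility of a list of (probability, delta) outcomes."""
    return sum(p * u(d) for p, d in lottery)

small = [(0.4, +3.0), (0.6, -1.0)]        # win $3 w.p. 0.4, else lose $1
big   = [(0.4, +3000.0), (0.6, -1000.0)]  # the same bet scaled x1,000

print(value(small))  # > 0: accepted, so it gets taken on each of 1,000 offers
print(value(big))    # < 0: the scaled-up version of the same bet is refused
```

At small stakes the curve is nearly linear, so the bet's positive monetary expectation carries over; at 1,000x stakes both outcomes saturate at the bounds, leaving only the raw probabilities, and 0.4 − 0.6 comes out negative. So the ranking flips purely because of how the bets are bundled.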
I would love it if someone had any ideas on this topic. I am very confused and do not know whether this is a serious problem or whether I am just missing something important about how expected utility theory works.
TLDR: What I really want to know is:
1. Is an agent with a bounded utility function justified (because of their bounded function) in rejecting any "Pascal's Mugging" type scenario with tiny probabilities of vast utilities, regardless of how much utility or disutility they happen to "have" at the moment? Does everything just rescale so that the Mugging is an equally bad deal no matter what the relative scale of future utility is?
2. If you have a bounded utility function, are your choices going to be the same regardless of how much utility various unchangeable events in the past generated for you? Does everything just rescale when you gain or lose a lot of utility so that the relative value of everything is the same? I expect the answer is going to be "yes" based on our previous discussion, but am a little uncertain because of the various confused thoughts on the subject that I have been having lately.
Full-length comment:
I don't think I explained my issue clearly. Those arguments about Pascal's Mugging address it from the perspective of its improbability, rather than using a bounded utility function against it.
I am trying to understand bounded utility functions and I think I am still very confused. What I am confused about right now is how a bounded utility function protects from Pascal's Mugging at different "points" along the function.
Imagine we have a bounded utility function with an "S"-curve shape. The function rises and falls from 0 and flattens as it approaches the upper and lower bounds.
If someone has utility at around 0, I see how they resist Pascal's Mugging. Regardless of whether the Mugging is a threat or a reward, the promised payoff approaches their upper or lower bound and its marginal value diminishes. So utility can never "outrace" probability.
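Here is the kind of toy model I have in mind (tanh as the S-curve, and a made-up 1/reward probability assignment; both are just assumptions for illustration):

```python
import math

def u(x, scale=10.0):
    """S-curve bounded utility: tanh rises and falls from 0, flattening at +/-1."""
    return math.tanh(x / scale)

cost = 5.0
for reward in [1e3, 1e6, 1e9, 1e12]:
    # Assume the probability you assign shrinks with the size of the claim --
    # a toy 1/reward prior here, but any decaying assignment gives the same shape.
    p = 1.0 / reward
    # u(reward) is capped at 1, so the expected gain p * u(reward) <= p -> 0,
    # while the certain loss u(-cost) stays fixed.
    expected = p * u(reward) + (1 - p) * u(-cost)
    print(f"reward = {reward:.0e}: expected utility of paying = {expected:.4f}")
```

No matter how large the promised reward grows, the expected utility of paying stays pinned near the certain loss of the $5, because the payoff term is capped at the bound while the probability keeps shrinking.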
But what if their utility is close to the upper bound and a Mugger offers a horrible threat? Would they respond differently to a threat that would reduce their utility to 0 than to one that would send it all the way down to the lower bound? Would the threat get worse as the utility being cancelled out by the disutility moved further from the bound and closer to 0? Or is the idea that, in order for a threat/reward to qualify as a Pascal's Mugging, it has to be so huge that it goes all the way down to a bound?
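To see how the flattening plays out, here is a toy calculation (again tanh, again made-up numbers) for an agent sitting near the upper bound:

```python
import math

def u(x, scale=10.0):
    return math.tanh(x / scale)

current = 30.0  # close to the upper bound: u(30) is about 0.995

for threat_target in [20.0, 0.0, -30.0, -1000.0]:
    loss = u(current) - u(threat_target)
    print(f"threat drops you to {threat_target:>8}: utility lost = {loss:.3f}")
```

In this toy model the drop from 30 to 0 and the drop from 0 to −30 each cost roughly half the total utility range, while everything below −30 adds almost nothing, because the curve has already flattened. So threats do get worse as they sweep through the steep middle region, and saturate once they approach the lower bound.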
And if someone has a level of utility or disutility close to a bound, does that mean disutility matters more, so that they become a negative utilitarian close to the upper bound and a positive utilitarian close to the lower one? I don't think that is the case; I think that, as you said, "the relative scale of future utility makes no difference in short-term decisions." But I am confused about how.
I think I am probably just very confused in general about utility functions and about bounded utility functions. While some people have criticized bounded utility functions, I have never come across this specific type of criticism before. It seems far more likely that I am confused than that I am the first person to notice an obvious flaw.