If you are trying to calculate the value of a choice using a bounded utility function, how can you be sure whether you are close or far from the bound, whatever the bound is? How do you account for uncertainty about how much utility you already have? Does this question actually make sense?
Recently I have come across arguments against using a bounded utility function to avoid Pascal’s Mugging and similar “fanaticism” problems. Two examples are Section 6 of Hayden Wilkinson’s paper “In Defense of Fanaticism” and the Less Wrong post “Pascal's Mugging for bounded utility functions”, both of which use a novel argument against bounded utility functions. If I understand them correctly, they argue that bounded utility functions cannot work because it is impossible to know how much utility one already has. This means one cannot know how close one’s utility is to the bound, and therefore one can never know how much to discount future utility.
Wilkinson’s paper uses the example of someone with an altruistic bounded utility function that is essentially total utilitarianism. So they want to increase the total utility of the universe and, because they have a bounded utility function, the marginal value of additional total utility decreases as it approaches some upper bound. If I understand his argument correctly, he is saying that because this agent has a bounded utility function, they cannot calculate how good an action is without knowing lots of details about past events that their actions cannot affect. Otherwise, how will they know how close they are to the upper bound?
Wilkinson analogizes this to the “Egyptology” objection to average utilitarianism, where an average utilitarian is compelled to study how happy the Ancient Egyptians were before having children. Otherwise, they cannot know if having children increases or decreases average utility. Similarly, Wilkinson argues that a total utilitarian with a bounded utility function is compelled to study Ancient Egypt in order to know how close to the bound the total utility of the world is. This seems implausible: even if information about Ancient Egypt were easy to come by, it seems counterintuitive that it would be relevant to what you should do today.
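To make the Egyptology worry concrete, here is a toy sketch in Python. The bounded utility function u(x) = 100·tanh(x/100) is my own arbitrary choice, not anything from Wilkinson’s paper; the point is only that under any bounded function like this, the value of adding the same increment of total utility depends on how much total utility the world already contains.

```python
import math

def u(x):
    # A toy bounded utility function with bounds at -100 and +100.
    # The specific shape is an arbitrary assumption for illustration.
    return 100 * math.tanh(x / 100)

def marginal_value(background, delta):
    # Value of adding `delta` units of total utility, given that the
    # world already contains `background` units (e.g. from Ancient Egypt).
    return u(background + delta) - u(background)

# The same action is worth far less if the past was already very good:
print(marginal_value(0, 10))    # roughly 9.97
print(marginal_value(200, 10))  # roughly 0.64
```

So on this toy model, a bounded total utilitarian really does seem to need facts about the past before pricing an action today, which is the counterintuitive result.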
“Pascal's Mugging for bounded utility functions” by Benya introduces a related problem. In this scenario, a person with a bounded utility function has lived an immensely long time in a vast utopia. Because of this, their utility level is very close to the upper bound of their bounded utility function. Pascal’s Mugger approaches them and tells them that all their memories of this utopia are fake and that they have lived for a much shorter time than they believed they had. The mugger then offers to massively extend their lifespan for $5. The idea is that by creating uncertainty about whether their utility is approaching the bound or not, the mugger can get around the bounded utility function that normally protects from mugging.
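Benya’s mechanism can be illustrated numerically. Again I will assume a toy bounded function u(x) = 100·tanh(x/100), which is not from the original post; the mugger works by making the agent uncertain about where on this curve they actually sit.

```python
import math

def u(x):
    # Toy bounded utility function, bounds at +/-100 (my assumption).
    return 100 * math.tanh(x / 100)

lived_believed = 1000       # utility the agent believes they have accrued
lived_if_mugger_right = 10  # what they actually have if their memories are fake
extension = 1000            # utility of the lifespan extension on offer

# Marginal value of the extension under each hypothesis:
gain_if_memories_real = u(lived_believed + extension) - u(lived_believed)
gain_if_mugger_right = u(lived_if_mugger_right + extension) - u(lived_if_mugger_right)

p = 0.01  # credence that the mugger is telling the truth
expected_gain = p * gain_if_mugger_right + (1 - p) * gain_if_memories_real
```

Near the bound the extension is worth almost nothing, but even a 1% credence in the mugger’s story multiplies the expected gain by several orders of magnitude, and that leverage is exactly what the mugger exploits.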
One way around this dilemma that seems attractive to me is to use some version of Marc Colyvan’s Relative Expected Value theory. This theory, when looking at two options, compares the differences in utility, rather than the total utility of each option. This would seem to defeat the Egyptology objection: if you cannot change how much utility the events in Ancient Egypt were worth, then you don’t factor them into your calculations when considering how close you are to the bound. Similarly, when facing Pascal’s Mugger in the far future, the person does not need to include all their past utility when considering how to respond to the mugger. There may be other approaches like this that discount utility that is unaffected by either choice; I am not sure what the best formulation would be.
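Here is a rough sketch of how a Relative Expected Value style rule makes the unaffected background drop out. The framing of REV as a state-by-state comparison of two options follows Colyvan’s general idea, but the code itself is my own toy formulation, not a faithful implementation of his theory.

```python
def rev(probs, outcomes_a, outcomes_b):
    # Relative expected value of option A over option B:
    # take the expected state-by-state *difference* between the options,
    # instead of computing each option's total value separately.
    return sum(p * (a - b) for p, a, b in zip(probs, outcomes_a, outcomes_b))

probs = [0.5, 0.5]
option_a = [10, 0]   # payoffs of A in each state
option_b = [0, 4]    # payoffs of B in each state

background = 10**6   # utility fixed by the past (e.g. Ancient Egypt)

# Shifting both options by the same unaffected background changes nothing:
with_bg_a = [x + background for x in option_a]
with_bg_b = [x + background for x in option_b]

assert rev(probs, option_a, option_b) == rev(probs, with_bg_a, with_bg_b)
```

Because only differences enter the calculation, the question “how close is the fixed past to the bound?” never has to be asked.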
However, I am worried that this approach might result in problems with transitivity, or change the ranking of options based on how they are bundled. For example, if an agent with a bounded utility function using Relative Expected Value theory were offered the chance to play a lottery for $x 1,000 times, they might take the offer each time. However, they might not pay a thousand times as much to enter a single lottery for $1,000x. Am I mistaken, or is there a way to calibrate or refine this theory to avoid this transitivity problem?
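The bundling worry can be reproduced numerically. Assuming (my choice, purely for illustration) a toy bounded function u(x) = 100·tanh(x/100) applied to the *change* in utility each offer makes, a single small lottery looks good while a thousand-fold bundle of the same lottery looks bad:

```python
import math

def u(x):
    # Toy bounded utility over *changes*, bounds at +/-100 (my assumption).
    return 100 * math.tanh(x / 100)

def lottery_value(p_win, prize, cost):
    # Value of paying `cost` for a `p_win` chance at `prize`,
    # evaluated relative to the status quo.
    return p_win * u(prize - cost) + (1 - p_win) * u(-cost)

small = lottery_value(0.4, 3, 1)      # one ticket: pay $1 for a 40% shot at $3
big = lottery_value(0.4, 3000, 1000)  # bundle: pay $1,000 for a 40% shot at $3,000

print(small > 0)  # True: each small ticket is worth taking
print(big > 0)    # False: the thousand-fold bundle is not
```

This is the kind of preference reversal that can open the door to money pumps, since the agent’s ranking depends on how identical stakes are packaged.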
I would love it if someone had any ideas on this topic. I am very confused and do not know if this is a serious problem or if I am just missing something important about how expected utility theory works.
That makes sense. What I am trying to figure out is, does that credibility threshold change depending on "where you are on the curve." To illustrate this, imagine two altruistic agents, A and B, who have the same bounded utility function. A lives in a horrifying hell world full of misery. B lives in a happy utopia. So A is a lot "closer" to the lower bound than B. Both A and B are confronted by a Pascal's Mugger who threatens them with an arbitrarily huge disutility.
Does the fact that agent B is "farther" from the lower bound than agent A mean that the two agents have different credibility thresholds for rejecting the mugger? Because the amount of disutility that B needs to receive to get close to the lower bound is larger than the amount that A needs to receive? Or will their utility functions have the same credibility threshold because they have the same lower and upper bounds, regardless of "how much" utility or disutility they happen to "possess" at the moment? Again, I do not know if this is a coherent question or if it is born out of confusion about how utility functions work.
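Under the assumption that each agent's utility is a bounded function of the total history of their world, and with a toy bounded function u(x) = 100·tanh(x/100) as an arbitrary stand-in, the two thresholds do come apart, because the "headroom" between each agent and the lower bound differs:

```python
import math

LOWER_BOUND = -100

def u(x):
    # Toy bounded utility function, bounds at +/-100 (my assumption).
    return 100 * math.tanh(x / 100)

def credibility_threshold(current_total, cost_of_paying=0.01):
    # The most the mugger can possibly destroy is the gap between the
    # agent's current utility and the lower bound.  Paying the mugger
    # (losing `cost_of_paying` utility, an arbitrary stand-in for $5)
    # is worthwhile only if p * headroom > cost_of_paying.
    headroom = u(current_total) - LOWER_BOUND
    return cost_of_paying / headroom

p_a = credibility_threshold(-200)  # agent A, in the hell world
p_b = credibility_threshold(+200)  # agent B, in the utopia

print(p_a > p_b)  # True: B must take the mugger seriously at a lower credibility
```

On this toy model the thresholds differ, because B has more to lose before hitting the bound. But whether this is the right way to model the situation depends on whether the bound applies to the world's total history or only to utility the agent can still affect, which is exactly the ambiguity in the original question.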
It seems to me that an agent with a bounded utility function shouldn't need to do any research about the state of the rest of the universe before dismissing Pascal's Mugging and other tiny probabilities of vast utilities as bad deals. That is why this question concerns me.
Thanks, that example made it a lot easier to get my head around the idea! I think I understand it better now. This might not be technically accurate, but to me having a uniform rescaling and reshifting of utility that preserves future decisions like that doesn't even feel like I am truly "valuing" future utility less. I know that in some sense I am, but it feels more like I am merely adjusting and recalibrating some technical details of my utility function in order to avoid "bugs" like Pascal's Mugging. It feels similar to making sure that all my preferences are transitive to avoid money pumps: the goal is to have a functional decision theory, rather than to change my fundamental values.