This is a result from the first MIRIx Cambridge workshop (coauthored with Janos and Jim).
One potential problem with bounded utility functions is: what happens when the bound is nearly reached? A bounded utility maximizer will get progressively more and more risk averse as it gets closer to its bound. We decided to investigate what risks it might fear. We used a toy model with a bounded-utility chocolate maximizer, and considered what happens to its resource allocation in the limit as resources go to infinity.
We use "chocolate maximizer" as conceptual shorthand for an agent that we model as though it has a single simple value with a positive long-run marginal resource cost; the simplicity is only a modeling assumption. This is as opposed to a paperclip maximizer, where the inappropriate simplicity is implied to be part of the world, not just part of the model.
Conceptual uncertainty
We found that if a bounded utility function approaches its bound too fast, this has surprising pathological results when mixed with logical uncertainty. Consider a bounded-utility chocolate maximizer, with philosophical uncertainty about what chocolate is. It has a central concept of chocolate $C_1$, and there are classes $C_2, C_3, \ldots$ of mutated versions of the concept of chocolate at varying distances from the central concept, such that the probability $p_i$ that the true chocolate is in class $C_i$ is proportional to $i^{-\alpha}$ for some exponent $\alpha$ (i.e. following a power law).

Suppose also that utility is bounded using a sigmoid function $\sigma(x)$, where $x$ is the amount of chocolate produced. In the limit as resources go to infinity, what fraction of those resources will be spent on the central class $C_1$? That depends which sigmoid function is used, and in particular, how quickly it approaches the utility bound.
Example 1: exponential sigmoid
Suppose we allocate resources $r_i$ to class $C_i$, with $\sum_i r_i = r$ for total resource $r$. Let $\sigma(x) = 1 - e^{-x}$.

Then the optimal resource allocation is

$$\arg\max_{r_1, \ldots, r_n} \sum_i p_i \left(1 - e^{-r_i}\right) \quad \text{subject to} \quad \sum_i r_i = r.$$

Using Lagrange multipliers, we obtain for all $i$,

$$p_i e^{-r_i} = \lambda, \quad \text{i.e.} \quad r_i = \ln p_i - \ln \lambda.$$

Then,

$$r_i - r_j = \ln \frac{p_i}{p_j} = \alpha \ln \frac{j}{i},$$

which is independent of $r$. Thus, the resources will be evenly distributed among all the classes as $r$ increases: the pairwise differences stay fixed while the total grows, so $r_i / r \to 1/n$ for each of $n$ classes. This is bad, because the resource fraction for the central class goes to 0 as we increase the number of classes.
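A small numerical sketch of this allocation (not from the original writeup; it assumes for concreteness a power-law prior $p_i \propto i^{-3}$ over 50 classes). The Lagrange conditions give each active class $r_i = \ln p_i - \ln \lambda$, with $\lambda$ fixed by the budget:

```python
import math

def exp_sigmoid_alloc(p, r):
    """Maximize sum_i p[i] * (1 - exp(-r_i)) subject to sum r_i = r, r_i >= 0.
    Lagrange conditions give r_i = ln(p_i) - ln(lambda) for active classes;
    we search for the largest active set whose allocations are all nonnegative."""
    logs = [math.log(pi) for pi in p]  # p assumed sorted in decreasing order
    for n in range(len(p), 0, -1):
        log_lam = (sum(logs[:n]) - r) / n
        if logs[n - 1] >= log_lam:  # smallest active class still gets r_n >= 0
            return [li - log_lam for li in logs[:n]] + [0.0] * (len(p) - n)
    return [0.0] * len(p)

p = [i ** -3.0 for i in range(1, 51)]  # illustrative power-law prior, alpha = 3
for r in (10.0, 1000.0):
    alloc = exp_sigmoid_alloc(p, r)
    print(f"r={r}: central-class fraction = {alloc[0] / r:.3f}")
```

As $r$ grows, the fraction going to the central class drops toward $1/n$, while the pairwise gaps $r_i - r_j$ stay fixed at $\alpha \ln(j/i)$, matching the derivation above.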
EDITED: Addendum on asymptotics
Since we have both $r$ and $n$ going to infinity, we can specify their relationship more precisely. We assume that $n$ is the highest number of classes that are assigned nonnegative resources for a given value of $r$: setting $\lambda = p_n$ (so that $r_n = 0$) gives

$$r = \sum_{i=1}^{n} \ln \frac{p_i}{p_n} = \alpha \sum_{i=1}^{n} \ln \frac{n}{i} = \alpha \left(n \ln n - \ln n!\right).$$

Thus, by Stirling's approximation $\ln n! \approx n \ln n - n$,

$$r \approx \alpha n,$$

so the highest class index that gets nonnegative resources satisfies $n \approx r / \alpha$. The central class then receives $r_1 = \alpha \ln n$, and its resource fraction $r_1 / r \approx (\ln n) / n \to 0$.
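The asymptotic can be checked numerically (an illustrative sketch, again assuming exponent $\alpha = 3$): the budget at which class $n$ first becomes active is $r(n) = \alpha(n \ln n - \ln n!)$, and the ratio $r(n) / (\alpha n)$ should approach 1.

```python
import math

ALPHA = 3.0  # assumed power-law exponent, for illustration only

def budget_to_activate(n, alpha=ALPHA):
    """Total resource r at which class n first gets nonnegative resources:
    r(n) = alpha * (n ln n - ln n!), computed via lgamma(n + 1) = ln n!."""
    return alpha * (n * math.log(n) - math.lgamma(n + 1))

for n in (10, 100, 10000):
    print(n, budget_to_activate(n) / (ALPHA * n))
```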
Example 2: arctan sigmoid
Let $\sigma(x) = \arctan(x)$. The optimal resource allocation is

$$\arg\max_{r_1, \ldots, r_n} \sum_i p_i \arctan(r_i) \quad \text{subject to} \quad \sum_i r_i = r.$$

Using Lagrange multipliers, we obtain for all $i$,

$$\frac{p_i}{1 + r_i^2} = \lambda, \quad \text{i.e.} \quad r_i = \sqrt{p_i / \lambda - 1}.$$

Then, as $r \to \infty$ (so $\lambda \to 0$),

$$r_i \approx \sqrt{p_i / \lambda} \propto i^{-\alpha/2}.$$

Thus, for $\alpha > 2$ the limit of the resource fraction for the central class

$$\lim_{r \to \infty} \frac{r_1}{r} = \frac{1}{\sum_{i=1}^{\infty} i^{-\alpha/2}}$$

is finite and positive.
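A numerical sketch of the arctan case (same illustrative prior $p_i \propto i^{-3}$, truncated to 2000 classes; not from the original writeup): solve for the multiplier $\lambda$ by bisection and check that the central class's fraction stays bounded away from zero as $r$ grows.

```python
import math

def arctan_alloc(p, r, iters=200):
    """Maximize sum_i p[i] * arctan(r_i) subject to sum r_i = r, r_i >= 0.
    KKT: p_i / (1 + r_i^2) = lam, so r_i = sqrt(p_i/lam - 1) when p_i > lam.
    We bisect on lam (in log space) until the allocation uses the full budget."""
    def spent(lam):
        return sum(math.sqrt(pi / lam - 1) for pi in p if pi > lam)
    lo, hi = 1e-30, max(p)
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if spent(mid) > r:
            lo = mid  # lam too small: overspending the budget
        else:
            hi = mid
    lam = math.sqrt(lo * hi)
    return [math.sqrt(pi / lam - 1) if pi > lam else 0.0 for pi in p]

p = [i ** -3.0 for i in range(1, 2001)]  # power-law prior, alpha = 3 > 2
for r in (10.0, 1e6):
    alloc = arctan_alloc(p, r)
    print(f"r={r}: central-class fraction = {alloc[0] / r:.3f}")
```

With $\alpha = 3$, the limiting fraction is $1 / \sum_i i^{-3/2} = 1/\zeta(3/2) \approx 0.38$, and the numerical fraction hovers near that value even for large $r$.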
Conclusion
The arctan sigmoid results in a better limiting resource allocation than the exponential sigmoid, because it has heavier tails: for sufficiently large $x$, $\frac{\pi}{2} - \arctan(x) \approx 1/x$, whereas the exponential sigmoid's gap $e^{-x}$ vanishes exponentially fast. Thus, it matters which bounding sigmoid function you choose.
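The tail difference is easy to see numerically (an illustrative comparison of each sigmoid's distance from its bound):

```python
import math

for x in (5.0, 10.0, 20.0):
    exp_tail = math.exp(-x)                  # bound gap for 1 - e^(-x)
    atan_tail = math.pi / 2 - math.atan(x)   # bound gap for arctan(x), ~ 1/x
    print(f"x={x}: exp tail {exp_tail:.2e}, arctan tail {atan_tail:.2e}")
```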
AFAICT the problem is that if the space of chocolate concepts is significantly larger than the amount of resources available, then the amount of resources spent on any one concept (including the true concept) will be infinitesimal.
But that's a situation in which we have a vast number of things that might somewhat-plausibly turn out to be chocolate and severely limited resources. It's not obvious that we can do better.
"But we do OK if we use one sigmoid utility function and not if we use another!"
No, we do different things depending on our utility function. That isn't a problem; it's what utility functions are for. And what's "OK" depends on what the probabilities are, what your resources are, and how much you value different amounts of chocolate. Which, again, is not a problem but exactly how things should be.