This is a result from the first MIRIx Cambridge workshop (coauthored with Janos and Jim).
One potential problem with bounded utility functions is: what happens when the bound is nearly reached? A bounded utility maximizer will get progressively more and more risk averse as it gets closer to its bound. We decided to investigate what risks it might fear. We used a toy model with a bounded-utility chocolate maximizer, and considered what happens to its resource allocation in the limit as resources go to infinity.
We use "chocolate maximizer" as conceptual shorthand for an agent that we model as though it has a single simple value with a positive long-run marginal resource cost; the simplicity is only a modeling assumption. This is as opposed to a paperclip maximizer, where the inappropriate simplicity is implied to be part of the world, not just part of the model.
Conceptual uncertainty
We found that if a bounded utility function approaches its bound too fast, this has surprising pathological results when mixed with logical uncertainty. Consider a bounded-utility chocolate maximizer with philosophical uncertainty about what chocolate is. It has a central concept of chocolate, $C_1$, and there are classes $C_2, C_3, \ldots$ of mutated versions of the concept of chocolate at varying distances from the central concept, such that the probability that the true chocolate is in class $C_i$ is proportional to $i^{-\alpha}$ for some constant $\alpha > 1$ (i.e. following a power law).
Suppose also that utility is bounded using a sigmoid function $\sigma(x)$, where $x$ is the amount of chocolate produced, so that expected utility is $\sum_i \Pr[C_i]\,\sigma(x_i)$ with $x_i$ the chocolate produced in class $C_i$. In the limit as resources go to infinity, what fraction of those resources will be spent on the central class $C_1$? That depends on which sigmoid function is used, and in particular, how quickly it approaches the utility bound.
Example 1: exponential sigmoid
Suppose we allocate resources $r_i$ to class $C_i$ (converted directly into chocolate), with $\sum_{i=1}^n r_i = r$ for total resource $r$. Let $\sigma(x) = 1 - e^{-x}$.

Then the optimal resource allocation is

$$\operatorname*{arg\,max}_{r_1,\ldots,r_n} \sum_{i=1}^n i^{-\alpha}\left(1 - e^{-r_i}\right) \quad\text{subject to}\quad \sum_{i=1}^n r_i = r.$$

Using Lagrange multipliers, we obtain for all $i$,

$$i^{-\alpha} e^{-r_i} = \lambda, \qquad\text{i.e.}\qquad r_i = -\alpha \log i - \log \lambda.$$

Then, summing the $r_i$ and solving for $\log\lambda$,

$$r_i = \frac{r}{n} + \frac{\alpha}{n}\log(n!) - \alpha\log i \;\longrightarrow\; \frac{r}{n}\,(1 + o(1)) \quad\text{as } r \to \infty \text{ with } n \text{ fixed}.$$
Thus, the resources will be evenly distributed among all the classes as r increases. This is bad, because the resource fraction for the central class goes to 0 as we increase the number of classes.
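As a sanity check, the closed-form allocation can be verified numerically. This is a sketch under the reconstruction above: it assumes the power-law prior $i^{-\alpha}$ with $\alpha = 3$ chosen arbitrarily, $\sigma(x) = 1 - e^{-x}$, and the allocation $r_i = r/n + (\alpha/n)\log(n!) - \alpha\log i$. It checks the budget constraint, the equal-marginal-utility (Lagrange) condition, and the convergence of $r_i/r$ to the uniform $1/n$.

```python
import math
import numpy as np

ALPHA, N = 3.0, 5  # arbitrary power-law exponent and number of classes

def allocation(r, n=N, alpha=ALPHA):
    """Closed-form optimum r_i = r/n + (alpha/n) log(n!) - alpha log(i)."""
    i = np.arange(1, n + 1, dtype=float)
    return r / n + (alpha / n) * math.lgamma(n + 1) - alpha * np.log(i)

i = np.arange(1, N + 1, dtype=float)
ri = allocation(10.0)
marginal = i ** (-ALPHA) * np.exp(-ri)  # d/dr_i of i^-alpha (1 - e^-r_i)

print(math.isclose(ri.sum(), 10.0))        # budget constraint holds
print(np.allclose(marginal, marginal[0]))  # all marginal utilities equal lambda

# As r grows with n fixed, r_i / r approaches the uniform 1/n.
for r in [1e2, 1e4, 1e6]:
    print(np.max(np.abs(allocation(r) / r - 1.0 / N)))
```

The deviation from uniformity shrinks like $1/r$, matching the claim that for fixed $n$ the allocation flattens out.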
EDITED: Addendum on asymptotics
Since we have both $r$ and $n$ going to infinity, we can specify their relationship more precisely. We assume that $n$ is the highest number of classes that are assigned nonnegative resources for a given value of $r$:

$$r_n = \frac{r}{n} + \frac{\alpha}{n}\log(n!) - \alpha\log n \ge 0.$$

Thus, multiplying by $n$ and applying Stirling's approximation $\log(n!) \approx n\log n - n$,

$$r + \alpha\log(n!) - \alpha n\log n \approx r - \alpha n \ge 0,$$

so the highest class index that gets nonnegative resources satisfies $n \approx r/\alpha$.
Example 2: arctan sigmoid
Let $\sigma(x) = \arctan(x)$, which is bounded by $\pi/2$. The optimal resource allocation is

$$\operatorname*{arg\,max}_{r_1,\ldots,r_n} \sum_{i=1}^n i^{-\alpha}\arctan(r_i) \quad\text{subject to}\quad \sum_{i=1}^n r_i = r.$$

Using Lagrange multipliers, we obtain for all $i$,

$$\frac{i^{-\alpha}}{1 + r_i^2} = \lambda, \qquad\text{i.e.}\qquad r_i = \sqrt{\frac{i^{-\alpha}}{\lambda} - 1}.$$

Then, as $r \to \infty$ (so $\lambda \to 0$), $r_i \approx i^{-\alpha/2}/\sqrt{\lambda}$, so

$$\frac{r_1}{r} \to \frac{1}{\sum_{i=1}^{\infty} i^{-\alpha/2}} = \frac{1}{\zeta(\alpha/2)}.$$

Thus, for $\alpha > 2$ the limit of the resource fraction for the central class is finite and positive.
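A numerical check of the arctan case (a sketch; $\alpha = 3$ is an arbitrary choice, and the bisection bracket and truncation are hypothetical implementation details). For several budgets $r$ it solves for the multiplier $\lambda$ satisfying the budget constraint and confirms that $r_1/r$ approaches $1/\zeta(\alpha/2)$, here $1/\zeta(3/2) \approx 0.383$.

```python
import numpy as np

ALPHA = 3.0   # arbitrary power-law exponent (> 2, so zeta(alpha/2) is finite)
N_MAX = 10**5  # class-index truncation; ample for the lambdas reached here

def total_allocated(lam):
    """Total sum_i r_i implied by multiplier lam (classes with
    i^-alpha < lam receive zero)."""
    i = np.arange(1, N_MAX + 1, dtype=float)
    q = i ** (-ALPHA) / lam - 1.0
    return np.sqrt(np.clip(q, 0.0, None)).sum()

def central_fraction(r):
    """Geometrically bisect for lam with total_allocated(lam) = r,
    then return the central class's share r_1 / r."""
    lo, hi = 1e-18, 1.0  # total_allocated is decreasing in lam
    for _ in range(100):
        mid = np.sqrt(lo * hi)
        if total_allocated(mid) > r:
            lo = mid  # allocating too much: the true lam is larger
        else:
            hi = mid
    lam = np.sqrt(lo * hi)
    return np.sqrt(1.0 / lam - 1.0) / r

# 1 / zeta(alpha/2), via a partial sum plus an integral tail estimate.
i = np.arange(1, 10**6 + 1, dtype=float)
target = 1.0 / ((i ** (-ALPHA / 2)).sum() + 2.0 / np.sqrt(i[-1]))

for r in [100.0, 10_000.0, 1_000_000.0]:
    print(r, central_fraction(r), target)
```

As $r$ grows, the central fraction drifts toward the $1/\zeta(\alpha/2)$ limit rather than toward zero, in contrast to the exponential sigmoid.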
Conclusion
The arctan sigmoid results in a better limiting resource allocation than the exponential sigmoid, because it has heavier tails: for sufficiently large $x$, $\pi/2 - \arctan(x) \sim 1/x$, whereas $1 - \sigma(x) = e^{-x}$ decays exponentially. Thus, it matters which bounding sigmoid function you choose.
First of all, the formula for r_i in the decaying-exponential case is wrong.
I don't think this makes sense, for much the same reasons as given by skeptical_lurker.
You only get the even-distribution conclusion by (something like) fixing the number of classes as you let the total resources go to infinity. (Otherwise, the terms involving log(i) can make a large contribution.) But in that situation, your utility goes exponentially fast towards its upper bound of 1 and it's hard to see how that can be viewed as a bad outcome.
You might say it's a suboptimal outcome even though it's a good one, but to make that claim it seems to me you have to do an actual expected-utility calculation. And we know what that expected-utility calculation says: it says that the resource allocation you're objecting to is, in fact, the optimal one.
Or you might say it's a suboptimal outcome because you just know that this allocation is bad, or something. Which amounts to saying that actually you know what the utility function should be and it isn't the one the analysis assumes.
I have some sympathy with that last option. A utility function that not only is bounded but converges exponentially fast towards its bound feels pretty counterintuitive. It's not a big surprise, surely, if such a counterintuitive choice of utility function yields wrong-looking resource allocations?
If both $n$ and $r$ get large, under what circumstances is it still true that the resource allocation is approximately uniform? I suppose that depends on how you define "approximately uniform", but let's try looking at the ratio of $r_1$ to $r/n$. If my scribbling is correct, this equals $1 + \frac{\alpha\log(n!)}{r}$. When $n$ is large this is (very crudely) of order $\frac{n}{r}\log n$. So for any reasonable definition of "approximately uniform" this requires that $r$ be growing at least proportionally to $n$. E.g., for the ratio to be much below $\log n$ we require $r \ge \alpha n$. And the e…