This is a result from the first MIRIx Cambridge workshop (coauthored with Janos and Jim).
One potential problem with bounded utility functions is: what happens when the bound is nearly reached? A bounded utility maximizer will get progressively more and more risk averse as it gets closer to its bound. We decided to investigate what risks it might fear. We used a toy model with a bounded-utility chocolate maximizer, and considered what happens to its resource allocation in the limit as resources go to infinity.
We use "chocolate maximizer'' as conceptual shorthand meaning an agent that we model as though it has a single simple value with a positive long-run marginal resource cost, but only as a simplifying assumption. This is as opposed to a paperclip maximizer, where the inappropriate simplicity is implied to be part of the world, not just part of the model.
Conceptual uncertainty
We found that if a bounded utility function approaches its bound too fast, this has surprising pathological results when mixed with logical uncertainty. Consider a bounded-utility chocolate maximizer with philosophical uncertainty about what chocolate is. It has a central concept of chocolate $C_1$, and there are classes $C_2, C_3, \dots$ of mutated versions of the concept of chocolate at varying distances from the central concept, such that the probability $p_i$ that the true chocolate is in class $C_i$ is proportional to $i^{-\alpha}$ for some $\alpha > 1$ (i.e. following a power law).
Suppose also that utility is bounded using a sigmoid function $\sigma(x)$, where $x$ is the amount of chocolate produced. In the limit as resources go to infinity, what fraction of those resources will be spent on the central class $C_1$? That depends on which sigmoid function is used, and in particular, how quickly it approaches the utility bound.
Example 1: exponential sigmoid
Suppose we allocate resources $r_i$ to class $C_i$, with $\sum_{i=1}^{n} r_i = r$ for total resource $r$. Let $\sigma(x) = 1 - e^{-x}$.

Then the optimal resource allocation is

$$\arg\max_{r_1, \dots, r_n} \sum_{i=1}^{n} p_i \left( 1 - e^{-r_i} \right) \quad \text{subject to} \quad \sum_{i=1}^{n} r_i = r.$$

Using Lagrange multipliers, we obtain for all $i$,

$$p_i e^{-r_i} = \lambda, \quad \text{i.e.} \quad r_i = \ln \frac{p_i}{\lambda}.$$

Then,

$$r_i - r_j = \ln \frac{p_i}{p_j},$$

a constant independent of $r$: as $r$ grows, each $r_i$ increases while the differences between allocations stay fixed, so every ratio $r_i / r$ tends to $1/n$.
Thus, the resources will be evenly distributed among all the classes as r increases. This is bad, because the resource fraction for the central class goes to 0 as we increase the number of classes.
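To make this concrete, here is a small numerical sketch (illustrative only; the exponent $\alpha = 3$ and the class count $n = 50$ are arbitrary choices, not part of the argument above). It solves the constrained maximization for $\sigma(x) = 1 - e^{-x}$ by bisecting on the Lagrange multiplier $\lambda$ and prints the central class's resource fraction:

```python
import numpy as np

# Illustrative parameters (arbitrary, not from the derivation above).
alpha = 3.0                      # power-law exponent of the prior over classes
n = 50                           # number of candidate concept classes
i = np.arange(1, n + 1)
p = i ** -alpha
p /= p.sum()                     # p_i proportional to i^-alpha

def alloc_exp(p, r):
    """Optimal allocation for sigma(x) = 1 - exp(-x).

    First-order condition: p_i * exp(-r_i) = lam, so r_i = max(0, ln(p_i / lam)).
    Bisect on lam (geometrically, since it spans many orders of magnitude)
    until the allocations sum to r."""
    lo, hi = 1e-300, p.max()     # total allocation is huge at lo, zero at hi
    for _ in range(200):
        lam = np.sqrt(lo * hi)
        if np.maximum(0.0, np.log(p / lam)).sum() > r:
            lo = lam             # allocated too much: raise the multiplier
        else:
            hi = lam
    return np.maximum(0.0, np.log(p / lam))

for r in [10, 100, 1000, 10000]:
    frac = alloc_exp(p, r)[0] / r
    print(f"r = {r:6d}   central-class fraction = {frac:.4f}   (1/n = {1/n:.4f})")
```

As $r$ grows, the printed fraction falls toward $1/n = 0.02$, matching the even-distribution result.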
EDITED: Addendum on asymptotics
Since we have both $r$ and $n$ going to infinity, we can specify their relationship more precisely. We assume that $n$ is the highest number of classes that are assigned nonnegative resources for a given value of $r$:

$$r_i = \ln \frac{p_i}{\lambda} \geq 0 \iff p_i \geq \lambda, \quad \text{so} \quad \lambda \approx p_n \quad \text{and} \quad r_i = \alpha \ln \frac{n}{i} \text{ for } i \leq n.$$

Thus,

$$r = \sum_{i=1}^{n} r_i = \alpha \ln \frac{n^n}{n!} = \alpha n - O(\log n)$$

by Stirling's approximation, so the highest class index that gets nonnegative resources satisfies

$$n \approx \frac{r}{\alpha}.$$
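As a quick sanity check of this asymptotic (reusing alpha and alloc_exp from the sketch above, with a much larger pool of candidate classes so that the cutoff $n$ is determined by $r$ rather than capped):

```python
pool = np.arange(1, 100001)          # large pool so the cutoff n is set by r
p_big = pool ** -alpha
p_big /= p_big.sum()

for r in [100, 1000, 10000]:
    funded = int((alloc_exp(p_big, r) > 0).sum())
    print(f"r = {r:6d}   classes funded n = {funded:5d}   r/alpha = {r/alpha:7.1f}")
```

The number of funded classes tracks $r/\alpha$, up to the $O(\log n)$ correction.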
Example 2: arctan sigmoid
Now let $\sigma(x) = \arctan(x)$. The optimal resource allocation is

$$\arg\max_{r_1, \dots, r_n} \sum_{i=1}^{n} p_i \arctan(r_i) \quad \text{subject to} \quad \sum_{i=1}^{n} r_i = r.$$

Using Lagrange multipliers, we obtain for all $i$,

$$\frac{p_i}{1 + r_i^2} = \lambda, \quad \text{i.e.} \quad r_i = \sqrt{\frac{p_i}{\lambda} - 1} \approx \sqrt{\frac{p_i}{\lambda}} \quad \text{as } \lambda \to 0.$$

Then,

$$\frac{r_1}{r} = \frac{r_1}{\sum_{i=1}^{n} r_i} \to \frac{\sqrt{p_1}}{\sum_{i=1}^{\infty} \sqrt{p_i}} \quad \text{as } r \to \infty.$$

Thus, for $\alpha > 2$ (so that $\sum_i \sqrt{p_i}$ converges) the limit of the resource fraction for the central class $C_1$ is finite and positive.
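The same kind of numerical sketch for the arctan case (again with the arbitrary choice $\alpha = 3$, so the condition $\alpha > 2$ holds) shows the central-class fraction converging to $\sqrt{p_1} / \sum_i \sqrt{p_i}$ instead of to zero:

```python
import numpy as np

alpha = 3.0                          # arbitrary; needs alpha > 2 for a positive limit
i = np.arange(1, 100001)
p = i ** -alpha
p /= p.sum()                         # p_i proportional to i^-alpha

def alloc_arctan(p, r):
    """Optimal allocation for sigma(x) = arctan(x).

    First-order condition: p_i / (1 + r_i^2) = lam,
    so r_i = sqrt(max(0, p_i / lam - 1)).
    Bisect on lam (geometrically) until the allocations sum to r."""
    lo, hi = 1e-300, p.max()
    for _ in range(200):
        lam = np.sqrt(lo * hi)
        if np.sqrt(np.maximum(0.0, p / lam - 1.0)).sum() > r:
            lo = lam                 # allocated too much: raise the multiplier
        else:
            hi = lam
    return np.sqrt(np.maximum(0.0, p / lam - 1.0))

limit = np.sqrt(p[0]) / np.sqrt(p).sum()   # predicted limit sqrt(p_1) / sum sqrt(p_i)
for r in [100, 1000, 10000]:
    frac = alloc_arctan(p, r)[0] / r
    print(f"r = {r:6d}   central fraction = {frac:.4f}   predicted limit = {limit:.4f}")
```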
Conclusion
The arctan sigmoid results in a better limiting resource allocation than the exponential sigmoid, because it has heavier tails: for sufficiently large $x$, $\frac{\pi}{2} - \arctan(x) \approx \frac{1}{x}$, which decays far more slowly than $e^{-x}$. Thus, it matters which bounding sigmoid function you choose.
All the underlying axioms of expected utility theory (EUT) seem self-evident to me. The fact that most people don't shut up and multiply is something I would regard as more of their problem than a problem with EUT. Having said that, even if mapping emotions onto utility values makes sense from some abstract theoretical point of view, it's a lot harder in practice for reasons such as the complex fragility of human values, which has been thoroughly discussed already.
Of course, the degree to which the average LWer approximates EUT in their feelings and behaviour is probably far greater than that of the average person. At non-LW philosophy meetups I have been told I am 'disturbingly analytical' for advocating EUT.
Well, I suppose there is the option of 'empathic AI'. Reverse engineering the brain and dialling compassion up to 11 is in many ways easier and more brute-force-able than creating de novo AI, and it avoids all these utility-function-definition problems, the Basilisk, and Löb's theorem. The downsides of course include far greater unpredictability, an AI that would definitely be sentient, and (some would argue) the possibility of catastrophic failure during self-modification.
I didn't say that we shouldn't have a utility function, I said we don't. Our actual preferences are incompletely defined, inconsistent, and generally a mess. I suspect this is true even for most LWers, and I'm pretty much certain it's true for almost all people, and (in so far as it's meaningful) for the human race as a whole.