This is a result from the first MIRIx Cambridge workshop (coauthored with Janos and Jim).
One potential problem with bounded utility functions is: what happens when the bound is nearly reached? A bounded utility maximizer will get progressively more and more risk averse as it gets closer to its bound. We decided to investigate what risks it might fear. We used a toy model with a bounded-utility chocolate maximizer, and considered what happens to its resource allocation in the limit as resources go to infinity.
We use "chocolate maximizer'' as conceptual shorthand meaning an agent that we model as though it has a single simple value with a positive long-run marginal resource cost, but only as a simplifying assumption. This is as opposed to a paperclip maximizer, where the inappropriate simplicity is implied to be part of the world, not just part of the model.
Conceptual uncertainty
We found that if a bounded utility function approaches its bound too fast, this has surprising pathological results when mixed with logical uncertainty. Consider a bounded-utility chocolate maximizer with philosophical uncertainty about what chocolate is. It has a central concept of chocolate $C_1$, and there are classes of mutated versions of the concept of chocolate $C_2, C_3, \ldots$ at varying distances from the central concept, such that the probability that the true chocolate is in class $C_i$ is proportional to $i^{-\alpha}$ for some $\alpha > 1$ (i.e. following a power law). Write $p_i$ for this probability after normalization.
Suppose also that utility is bounded using a sigmoid function $f(x)$, where $x$ is the amount of chocolate produced. In the limit as resources go to infinity, what fraction of those resources will be spent on the central class $C_1$? That depends on which sigmoid function is used, and in particular on how quickly it approaches the utility bound.
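To make the setup concrete, here is a minimal numerical sketch (not from the original write-up) of the agent's problem: choose allocations $r_1, \ldots, r_n$ maximizing expected utility $\sum_i p_i f(r_i)$ subject to $\sum_i r_i = r$. The exponent $\alpha = 2$, the ten classes, the total resource $r = 100$, and the use of scipy's generic constrained optimizer are all illustrative assumptions.

```python
# Illustrative brute-force version of the allocation problem (assumed parameters).
import numpy as np
from scipy.optimize import minimize

def optimal_allocation(f, r, p):
    """Maximise sum_i p_i * f(r_i) subject to sum_i r_i = r and r_i >= 0."""
    n = len(p)
    res = minimize(
        lambda x: -np.dot(p, f(x)),                       # maximise expected utility
        x0=np.full(n, r / n),                             # start from an even split
        bounds=[(0.0, r)] * n,
        constraints={"type": "eq", "fun": lambda x: x.sum() - r},
    )
    return res.x

p = np.arange(1, 11, dtype=float) ** -2.0                 # assumed power-law prior
p /= p.sum()
print(optimal_allocation(lambda x: 1.0 - np.exp(-x), r=100.0, p=p))  # exponential sigmoid
print(optimal_allocation(np.arctan, r=100.0, p=p))                   # arctan sigmoid
```

The closed-form solutions derived in the two examples below can be checked against this brute-force optimizer.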
Example 1: exponential sigmoid
Suppose we allocate resources $r_i$ to class $C_i$, with $\sum_i r_i = r$ for total resource r. Let $f(x) = 1 - e^{-x}$.

Then the optimal resource allocation is

$$\operatorname*{arg\,max}_{r_1, \ldots, r_n} \sum_{i=1}^n p_i \left(1 - e^{-r_i}\right) \quad \text{subject to} \quad \sum_{i=1}^n r_i = r.$$

Using Lagrange multipliers, we obtain for all $i$,

$$p_i e^{-r_i} = \lambda, \qquad \text{i.e.} \qquad r_i = \ln p_i - \ln \lambda.$$

Then, for any two classes $i$ and $j$,

$$r_i - r_j = \ln \frac{p_i}{p_j},$$

which does not depend on $r$, so (for fixed $n$ and large $r$) each fraction $r_i / r \to 1/n$.
Thus, the resources will be evenly distributed among all the classes as r increases. This is bad, because the resource fraction for the central class goes to 0 as we increase the number of classes.
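As a sanity check on the even-spreading claim, here is a small numerical sketch (illustrative assumptions: $f(x) = 1 - e^{-x}$, $p_i \propto i^{-2}$, $n = 50$ classes, $r = 1000$; none of these values come from the workshop model). It solves the Lagrange condition $p_i e^{-r_i} = \lambda$ by bisecting on $\lambda$ and reports how the allocation spreads.

```python
# Illustrative check of Example 1: with the exponential sigmoid, optimal
# allocations differ only by the constants ln(p_i/p_j), so the fractions
# r_i / r approach 1/n as r grows.
import numpy as np

alpha, n, r = 2.0, 50, 1000.0                  # assumed parameters, not from the post
p = np.arange(1, n + 1, dtype=float) ** -alpha
p /= p.sum()

def total(lam):
    # Allocation implied by the Lagrange condition, clipped at zero.
    return np.maximum(0.0, np.log(p) - np.log(lam)).sum()

# Bisect (geometrically) on lambda so that the allocations sum to r.
lo, hi = 1e-300, p.max()
for _ in range(200):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if total(mid) > r else (lo, mid)
alloc = np.maximum(0.0, np.log(p) - np.log(lo))

print("central-class fraction:", alloc[0] / r)                 # approaches 1/n = 0.02 as r grows
print("max - min allocation:  ", alloc.max() - alloc.min())    # ~ alpha*ln(n), not growing with r
```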
EDITED: Addendum on asymptotics
Since we have both r and n going to infinity, we can specify their relationship more precisely. We assume that n is the highest number of classes that are assigned nonnegative resources for a given value of r:

$$n = \max \{ i : r_i \ge 0 \} = \max \{ i : p_i \ge \lambda \},$$

so that $\lambda \approx p_n$ and $r_i \approx \ln \frac{p_i}{p_n} = \alpha \ln \frac{n}{i}$. Thus,

$$r = \sum_{i=1}^{n} r_i \approx \alpha \sum_{i=1}^{n} \ln \frac{n}{i} = \alpha \left( n \ln n - \ln n! \right) \approx \alpha n$$

(using Stirling's approximation), so the highest class index that gets nonnegative resources satisfies

$$n \approx \frac{r}{\alpha},$$

i.e. the number of funded classes grows linearly with the available resources.
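The $n \approx r/\alpha$ relationship can also be checked numerically; the sketch below (with an assumed $\alpha = 2$ and a large but finite pool of candidate classes standing in for infinitely many) counts how many classes receive a positive allocation as r grows.

```python
# Illustrative check of the addendum: the number of classes that get a
# positive allocation under the exponential sigmoid grows roughly like r/alpha.
import numpy as np

alpha = 2.0                                          # assumed power-law exponent
p = np.arange(1, 200_001, dtype=float) ** -alpha     # finite stand-in for infinitely many classes
p /= p.sum()
logp = np.log(p)

def allocate(r):
    # Bisect on log(lambda) so that sum_i max(0, ln p_i - ln lambda) = r.
    lo, hi = logp.min() - r, logp.max()
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.maximum(0.0, logp - mid).sum() > r else (lo, mid)
    return np.maximum(0.0, logp - lo)

for r in (100.0, 1_000.0, 10_000.0):
    funded = int((allocate(r) > 0).sum())
    print(f"r = {r:>7.0f}: funded classes = {funded:5d}, r/alpha = {r / alpha:.0f}")
```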
Example 2: arctan sigmoid
The optimal resource allocation is

$$\operatorname*{arg\,max}_{r_1, \ldots, r_n} \sum_{i=1}^n p_i \arctan(r_i) \quad \text{subject to} \quad \sum_{i=1}^n r_i = r.$$

Using Lagrange multipliers, we obtain for all $i$,

$$\frac{p_i}{1 + r_i^2} = \lambda, \qquad \text{i.e.} \qquad r_i = \sqrt{\frac{p_i}{\lambda} - 1} \approx \sqrt{\frac{p_i}{\lambda}} \text{ for small } \lambda.$$

Then, as $r \to \infty$ (so $\lambda \to 0$),

$$\frac{r_1}{r} = \frac{r_1}{\sum_i r_i} \to \frac{\sqrt{p_1}}{\sum_i \sqrt{p_i}}.$$

Thus, for $\alpha > 2$ (so that $\sum_i \sqrt{p_i}$ converges), the limit of the resource fraction for the central class $C_1$ is finite and positive.
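Again as an illustrative check (assuming $p_i \propto i^{-3}$, i.e. $\alpha = 3 > 2$, and a large finite pool of classes; these are not the workshop's parameters), the sketch below solves the arctan Lagrange condition numerically and compares the central class's share against the predicted limit $\sqrt{p_1} / \sum_i \sqrt{p_i}$.

```python
# Illustrative check of Example 2: with f(x) = arctan(x), the central class
# keeps a finite fraction of the resources as r -> infinity.
import numpy as np

alpha, n = 3.0, 100_000                        # assumed exponent (> 2) and class pool size
p = np.arange(1, n + 1, dtype=float) ** -alpha
p /= p.sum()

def allocate(r):
    # Lagrange condition p_i / (1 + r_i^2) = lambda  =>  r_i = sqrt(p_i/lambda - 1).
    # Bisect (geometrically) on lambda so that the allocations sum to r.
    lo, hi = 1e-30, p.max()
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        tot = np.sqrt(np.maximum(p / mid - 1.0, 0.0)).sum()
        lo, hi = (mid, hi) if tot > r else (lo, mid)
    return np.sqrt(np.maximum(p / lo - 1.0, 0.0))

for r in (10.0, 1_000.0, 100_000.0):
    print(f"r = {r:>8.0f}: central-class fraction = {allocate(r)[0] / r:.4f}")
print("predicted limit:", np.sqrt(p[0]) / np.sqrt(p).sum())
```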
Conclusion
The arctan sigmoid results in a better limiting resource allocation than the exponential sigmoid, because it has heavier tails for sufficiently large $x$: $\frac{\pi}{2} - \arctan(x)$ decays only polynomially (like $1/x$), while $e^{-x}$ decays exponentially. Thus, it matters which bounding sigmoid function you choose.
If we have enough uncertainty about what bit of concept space we're looking for to make a power-law distribution appropriate, then "very large" can still be "severely limited" (and indeed must be to make the amount of resources going to each kind of maybe-chocolate be small).
Yes. But I wouldn't characterize this as giving the AI an approximation to our utility function that avoids problems to do with infinity -- because I don't think we have a utility function in a strong enough sense for this to be distinguishable from giving the AI our utility function. We have a vague hazy idea of utility that we can (unreliably, with great effort) be a little bit quantitative about in "small" easy cases; we don't truly either feel or behave according to any utility function; but we want to give the AI a utility function that will make it do things we approve of, even though its decisions may be influenced by looking at things far beyond our cognitive capacity.
It's not clear to me that that's a sensible project at all, but it certainly isn't anything so simple as taking something that Really Is our utility function but misbehaves "at infinity" and patching it to tame the misbehaviour :-).
All the underlying axioms of expected utility theory (EUT) seem self-evident to me. The fact that most people don't shut up and multiply is something I would regard as more of their problem than a problem with EUT. Having said that, even if mapping emotions onto utility values makes sense from some abstract theoretical point of view, it's a lot harder in practice for reasons such as the complex fragility of human values, which has been thoroughly discussed already.