"Utilitarianism" has two different, but related meanings. Historically, it generally means
"the morally right action is the action that produces the most good", or as Bentham put it, "the greatest amount of good for the greatest number". Leave aside for the moment that this ignores the tradeoff between how much good and how many people, and exactly what the good is. Bentham and like-minded thinkers mean by "good" things like material well-being, flourishing, "happiness", and so on. They are pointing in a certain direction, even if a bit vaguely. Utilitarianism in this sense is about people, and its conception of the good consists of what humans generally want. It is necessarily expressed in terms of human concepts, because that is what it is about.
The other thing that the word "utilitarianism" has come to be used for is the thing that various theorems prove can be constructed from a preference relation satisfying certain axioms. Von Neumann and Morgenstern are the usual names mentioned, but there are also Savage, Cox, and others. Collectively, these are, as Eliezer has put it, "multiple spotlights all shining on the same core mathematical structure". The theory is independent of any specific preference relation and of what the utility function determined by those preferences comes out to be. (ETA: This use of the word might be specific to the rationalist community. "Utility theory" is, I think, the more widely used term. Accordingly I've replaced "VNMU" by "VNMUT" below.)
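For concreteness, the von Neumann-Morgenstern version of that structure can be stated roughly as follows (an informal paraphrase, not a quotation from any of the sources above): if a preference relation ≿ over lotteries satisfies completeness, transitivity, continuity, and independence, then there is a utility function u on outcomes such that

L ≿ M  if and only if  Σᵢ pᵢ u(xᵢ) ≥ Σⱼ qⱼ u(yⱼ),

where L gives probability pᵢ to outcome xᵢ and M gives qⱼ to yⱼ, and u is unique up to positive affine transformations u ↦ au + b with a > 0. The preferences are the raw material; the utility function is what the theorem constructs from them.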
To distinguish these two concepts I shall call them "Benthamite utilitarianism" and "Von Neumann-Morgenstern utility theory", or BU and VNMUT for short. How do they relate to each other, and what does either have to say about AI?
- BU has a specific notion of the individual good. VNMUT does not. VNMUT is concerned only with the structure of the preference relation, not its content. In VNMUT, the preference relation is anything satisfying the axioms; in BU it is a specific thing, not up for grabs, described by words such as "welfare", "happiness", or "satisfaction".
By analogy: BU is like studying the structure of some particular group, such as the Monster Group, while VNMUT is like group theory, which studies all groups and does not care where they came from or what they are used for.
- VNMUT is made of theorems. BU is not. BU contains no mathematical structure to elucidate what is meant by "the greatest good for the greatest number". The slogan is a rallying call, but leaves many hard decisions to be made.
- Neither BU nor VNMUT has a satisfactory concept of collective good. BU is silent about the tradeoff between the greatest good and the greatest number. There is no generally agreed-upon extension of VNMUT that mathematically constructs a collective preference relation or utility function. There have been many attempts, on both the practical side (BU) and the theoretical side (VNMUT), but the body of such work does not have the coherence of those "multiple spotlights all shining on the same core mathematical structure". The differing attitudes we observe to the Repugnant Conclusion illustrate the lack of consensus.
What does either of these have to do with AI?
If a program is trained to produce outputs that maximise some objective function, the value of that function is at least similar to a utility in the VNMUT sense, although it is not derived from a preference relation. Here the utility (the objective function) is primitive, and a preference relation can be derived from it: the program "prefers" a higher value to a lower one.
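A minimal sketch of that direction of travel, in code (my own toy illustration; the scoring function is an arbitrary stand-in, not anything any real system optimises):

```python
# Toy illustration: a scalar objective function induces a preference relation.

def objective(output: str) -> float:
    # Arbitrary stand-in for a trained model's objective/reward.
    return float(len(set(output)))

def prefers(a: str, b: str) -> bool:
    """The induced relation: a is weakly preferred to b iff it scores at least as high."""
    return objective(a) >= objective(b)

# Because the relation is inherited from the ordering of the real numbers,
# it is automatically complete and transitive -- the VNM-style structure
# comes for free when you start from a utility rather than from preferences.
assert prefers("abc", "aaa")
```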
As for BU, whether a program optimises for the human good depends on what its designers choose to have it optimise for. Optimise for deadly poisons and that may be what you get. (I don't know if anyone has experimented with the compounds that that experiment suggested, although it seems to me quite likely that some military lab somewhere is doing so, if they weren't already.) Optimise for peace and love, and maybe you get something like that, or maybe you end up painting smiley faces onto everything. The AI itself is not feeling or emoting. Its concepts of "welfare", "happiness", or "satisfaction", such as they are, are embodied in the training procedure its programmers used to judge its outputs as desired or undesired.
So we have that
but at the same time
And I can see how, starting from this, you would get that U(⊤)=0. However, I think one of the remaining confusions is how you would go in the other direction: how can you go from the premise that we shift utilities to be 0 for tautologies to the claim that how much we value something comes largely from how unlikely it is?
And then we also have the desirability axiom
U(A∨B) = (P(A)U(A) + P(B)U(B)) / (P(A) + P(B))
for all A and B such that P(A∧B)=0 together with Bayesian probability theory.
What I was talking about in my previous comment goes against the desirability axiom, in the sense that I meant that for X = "Sun with probability p and rain with probability (1−p)" there could, in the more general case, be subjects who prefer certain outcomes proportionally more (or less) than usual, such that U(X) ≠ pU(Sun) + (1−p)U(Rain) for some probabilities p. As that equality derives directly from the desirability axiom, it was wrong of me to generalise that far.
But, to get back to the confusion at hand, we need to unpack the tautology axiom a bit. If we say that a proposition ⊤ is a tautology if and only if P(⊤)=1 [1], then we can see that any proposition that is no news to us has zero utils as well.
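Spelling out the step from the tautology convention to "unlikely things carry more utility" (my own unpacking, using only the desirability axiom and U(⊤)=0): for any A with 0 < P(A) < 1 we have ⊤ = A∨¬A and P(A∧¬A) = 0, so

0 = U(⊤) = U(A∨¬A) = P(A)U(A) + P(¬A)U(¬A),

since the denominator P(A) + P(¬A) is 1. Rearranging,

U(A) = −(P(¬A)/P(A))·U(¬A).

So, holding U(¬A) fixed, the utility of A scales with the odds against A: as P(A) → 1 the factor P(¬A)/P(A) goes to 0, and (as long as U(¬A) stays bounded) U(A) goes to 0, while improbable propositions can carry a lot of utility. That seems to be exactly the sense in which value tracks unlikeliness.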
And I think it might be well to keep in mind that learning that, e.g., sun tomorrow is more probable than we once thought does not necessarily make us prefer sun tomorrow less, but the number of utils for sun tomorrow has decreased (in an absolute sense). This fits nicely with the money analogy, because you wouldn't buy something that you expect to get with certainty anyway [2], but this doesn't mean that you prefer it any less compared to some other, worse outcome that you expected at some earlier time. It is just that we've updated from our observations such that the utility function now reflects our current beliefs. If you prefer A to B then this is a fact regardless of the probabilities of those outcomes. When the probabilities change, what is changing is the mapping from propositions to real numbers (the utility function), and it changes only by a shift (and possibly a scaling) by a real number.
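A toy numerical version of that shift (numbers invented purely for illustration, and assuming the update amounts to a pure additive shift that restores U(⊤)=0): suppose P(Sun)=0.5, U(Sun)=2 and U(Rain)=−2, so that 0.5·2 + 0.5·(−2) = 0 as required. Now update to P(Sun)=0.9 and shift every utility by the same constant c so that the tautology is again worth zero: 0.9·(2+c) + 0.1·(−2+c) = 0 gives c = −1.6, hence U(Sun) = 0.4 and U(Rain) = −3.6. Sun is still preferred to rain, but the utils assigned to sun have dropped, simply because sun has become less newsworthy.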
At least, that is the interpretation I have arrived at.
This seems reasonable but non-trivial to prove depending on how we translate between logic and probability. ↩︎
If you do, you either don't actually expect it or have a bad sense of business. ↩︎