"Utilitarianism" has two different, but related meanings. Historically, it generally means
"the morally right action is the action that produces the most good", or as Bentham put it, "the greatest amount of good for the greatest number". Leave aside for the moment that this ignores the tradeoff between how much good and how many people, and exactly what the good is. Bentham and like-minded thinkers mean by "good" things like material well-being, flourishing, "happiness", and so on. They are pointing in a certain direction, even if a bit vaguely. Utilitarianism in this sense is about people, and its conception of the good consists of what humans generally want. It is necessarily expressed in terms of human concepts, because that is what it is about.
The other thing that the word "utilitarianism" has come to be used for is the thing that various theorems prove can be constructed from a preference relation satisfying certain axioms. Von Neumann and Morgenstern are the usual names mentioned, but there are also Savage, Cox, and others. Collectively, these are, as Eliezer has put it, "multiple spotlights all shining on the same core mathematical structure". The theory is independent of any specific preference relation and of what the utility function determined by those preferences comes out to be. (ETA: This use of the word might be specific to the rationalist community; "utility theory" is, I think, the more widely used term. Accordingly, I've replaced "VNMU" by "VNMUT" below.)
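For concreteness, here is a rough statement of the result in its von Neumann–Morgenstern form (the exact axioms and setting vary between the different presentations):

If a preference relation $\succeq$ over lotteries satisfies completeness, transitivity, continuity, and independence, then there is a function $u$ on outcomes such that for all lotteries $L_1$ and $L_2$,
$$L_1 \succeq L_2 \iff \mathbb{E}[u(L_1)] \ge \mathbb{E}[u(L_2)],$$
and $u$ is unique up to positive affine transformations $u \mapsto au + b$ with $a > 0$.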
To distinguish these two concepts I shall call them "Benthamite utilitarianism" and "Von Neumann-Morgenstern utility theory", or BU and VNMUT for short. How do they relate to each other, and what does either have to say about AI?
- BU has a specific notion of the individual good. VNMUT does not. VNMUT is concerned only with the structure of the preference relation, not its content. In VNMUT, the preference relation is anything satisfying the axioms; in BU it is a specific thing, not up for grabs, described by words such as "welfare", "happiness", or "satisfaction".
By analogy: BU is like studying the structure of some particular group, such as the Monster Group, while VNMUT is like group theory, which studies all groups and does not care where they came from or what they are used for.
- VNMUT is made of theorems. BU is not. BU contains no mathematical structure to elucidate what is meant by "the greatest good for the greatest number". The slogan is a rallying call, but leaves many hard decisions to be made.
- Neither BU nor VNMUT has a satisfactory concept of collective good. BU is silent about the tradeoff between the greatest good and the greatest number. There is no generally agreed-on extension of VNMUT that mathematically constructs a collective preference relation or utility function (one obstacle is sketched below). There have been many attempts, on both the practical side (BU) and the theoretical side (VNMUT), but the body of such work does not have the coherence of those "multiple spotlights all shining on the same core mathematical structure". The differing attitudes to the Repugnant Conclusion illustrate the lack of consensus.
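One reason aggregation is not straightforward: a VNM utility function is only determined up to positive affine rescaling, so a naive "sum everyone's utilities" rule can give different collective rankings depending on arbitrary scale choices. A minimal sketch with made-up numbers (purely illustrative):

```python
# Two people's VNM utilities over two outcomes (hypothetical numbers).
alice = {"A": 1.0, "B": 0.0}   # Alice prefers A
bob   = {"A": 0.0, "B": 0.6}   # Bob prefers B

def total_utility(people, outcome):
    """Naive aggregation: sum each person's utility for the outcome."""
    return sum(person[outcome] for person in people)

print(total_utility([alice, bob], "A") > total_utility([alice, bob], "B"))  # True: A "wins"

# Rescaling Bob's utilities (u -> 10*u) represents exactly the same individual
# preferences, yet it flips the "collective" ranking:
bob_rescaled = {k: 10 * v for k, v in bob.items()}
print(total_utility([alice, bob_rescaled], "A") > total_utility([alice, bob_rescaled], "B"))  # False: B "wins"
```

Without some further assumption that fixes a common scale across individuals, the sum is not well-defined, and how to fix it is exactly what the various aggregation proposals disagree about.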
What does either of these have to do with AI?
If a program is trained to produce outputs that maximise some objective function, the value of that objective function is at least similar to a utility in the VNMUT sense, although it is not derived from a preference relation. The utility (objective function) is primitive, and a preference relation can be derived from it: the program "prefers" a higher value to a lower one.
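A minimal sketch of that derivation, with a made-up objective (nothing here is a real training setup):

```python
def objective(output: str) -> float:
    """Stand-in for whatever scalar the training procedure maximises (hypothetical)."""
    return float(len(output))  # toy scoring rule, purely illustrative

def prefers(a: str, b: str) -> bool:
    """Derived preference relation: a is weakly preferred to b iff it scores at least as high."""
    return objective(a) >= objective(b)

print(prefers("a longer output", "short"))  # True under this toy objective
```

Because the relation is read off from real numbers, it automatically satisfies the ordering axioms (completeness and transitivity) that VNMUT starts from.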
As for BU, whether a program optimises for the human good depends on what its designers choose to have it optimise for. Optimise for deadly poisons, and that may be what you get. (I don't know if anyone has experimented with the compounds suggested by that experiment, although it seems to me quite likely that some military lab somewhere is doing so, if they weren't already.) Optimise for peace and love, and maybe you get something like that, or maybe you end up painting smiley faces onto everything. The AI itself is not feeling or emoting. Its concepts of "welfare", "happiness", or "satisfaction", such as they are, are embodied in the training procedure its programmers used to judge its outputs as desired or undesired.
[note: anti-realist non-Utilitarian here; I don't believe "utility" is actually a universal measurable thing, nor that it's comparable across entities (nor across time for any real entity). Consider this my attempt at an ITT on this topic for Utilitarianism]
One possible answer is that it's true that those emotions are pretty core to most people's conception of utility (at least most people I've discussed it with). But this does NOT mean that the emotions ARE the utility; they're just an evolved mechanism that points to utility, and not necessarily the only possible mechanism. Goodhart's Law hits pretty hard if you treat the emotions directly as utility.
Utility itself is an abstraction over the level of satisfaction of goals/preferences about the state of the universe for an entity. Or in some conceptions, the eu-satisfaction of the goals the entity would have if it were fully informed.
>Utility itself is an abstraction over the level of satisfaction of goals/preferences about the state of the universe for an entity.
You can say that a robot toy has a goal of following a light source, or that a thermostat has a goal of keeping the room temperature at a certain setting. But I have yet to hear anyone counting those things towards total utility calculations.
Of course, a counterargument would be "but those are not actual goals, those are the goals of the humans that set them", but in this case you've just hidden all the references to humans inside the word "goal" and are back to square one.