"Utilitarianism" has two different, but related meanings. Historically, it generally means
"the morally right action is the action that produces the most good", or as Bentham put it, "the greatest amount of good for the greatest number". Leave aside for the moment that this ignores the tradeoff between how much good and how many people, and exactly what the good is. Bentham and like-minded thinkers mean by "good" things like material well-being, flourishing, "happiness", and so on. They are pointing in a certain direction, even if a bit vaguely. Utilitarianism in this sense is about people, and its conception of the good consists of what humans generally want. It is necessarily expressed in terms of human concepts, because that is what it is about.
The other thing that the word "utilitarianism" has come to be used for is the thing that various theorems prove can be constructed from a preference relation satisfying certain axioms. Von Neumann and Morgenstern are the usual names mentioned, but there are also Savage, Cox, and others. Collectively, these are, as Eliezer has put it, "multiple spotlights all shining on the same core mathematical structure". The theory is independent of any specific preference relation and of what the utility function determined by those preferences comes out to be. (ETA: This use of the word might be specific to the rationalist community; "utility theory" is, I think, the more widely used term. Accordingly, I've replaced "VNMU" by "VNMUT" below.)
To distinguish these two concepts I shall call them "Benthamite utilitarianism" and "Von Neumann-Morgenstern utility theory", or BU and VNMUT for short. How do they relate to each other, and what does either have to say about AI?
- BU has a specific notion of the individual good. VNMUT does not. VNMUT is concerned only with the structure of the preference relation, not its content. In VNMUT, the preference relation is anything satisfying the axioms; in BU it is a specific thing, not up for grabs, described by words such as "welfare", "happiness", or "satisfaction".
By analogy: BU is like studying the structure of some particular group, such as the Monster Group, while VNMUT is like group theory, which studies all groups and does not care where they came from or what they are used for.
- VNMUT is made of theorems. BU is not. BU contains no mathematical structure to elucidate what is meant by "the greatest good for the greatest number". The slogan is a rallying call, but leaves many hard decisions to be made.
- Neither BU nor VNMUT has a satisfactory concept of collective good. BU is silent about the tradeoff between the greatest good and the greatest number. There is no generally agreed-on extension of VNMUT for mathematically constructing a collective preference relation or utility function. There have been many attempts, on both the practical side (BU) and the theoretical side (VNMUT), but the body of such work does not have the coherence of those "multiple spotlights all shining on the same core mathematical structure". The differing attitudes we observe to the Repugnant Conclusion illustrate the lack of consensus.
What does either of these have to do with AI?
If a program is trained to produce outputs that maximise some objective function, the value of that function is at least similar to a utility in the VNMUT sense, although it is not derived from a preference relation. Here the utility (the objective function) is primitive, and a preference relation can be derived from it: the program "prefers" a higher value to a lower one, as sketched below.
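A minimal sketch of that relationship in Python, with a made-up objective function standing in for whatever score a real training setup would assign:

```python
# Minimal sketch: an objective function treated as a primitive "utility",
# with a preference relation derived from it. The objective and outputs
# here are invented for illustration, not taken from any real system.

def objective(output: float) -> float:
    """Stand-in for whatever score the training procedure assigns."""
    return -(output - 3.0) ** 2  # peaks at output == 3.0

def prefers(a: float, b: float) -> bool:
    """Derived preference relation: a is 'preferred' to b iff it scores higher."""
    return objective(a) > objective(b)

print(prefers(2.9, 1.0))  # True: 2.9 is closer to the optimum
print(prefers(1.0, 2.9))  # False
```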
As for BU, whether a program optimises for the human good depends on what its designers choose to have it optimise. Optimise for deadly poisons and that may be what you get. (I don't know if anyone has experimented with the compounds suggested by that experiment, although it seems to me quite likely that some military lab somewhere is doing so, if they weren't already.) Optimise for peace and love, and maybe you get something like that, or maybe you end up painting smiley faces onto everything. The AI itself is not feeling or emoting. Its concepts of "welfare", "happiness", or "satisfaction", such as they are, are embodied in the training procedure its programmers used to judge its outputs as desired or undesired.
Regarding the time stamp: Yeah, this is the right way to think about it, at least in the case of subjective utility theory, where utilities represent desires and probabilities represent beliefs, and it is also the right way to think about Bayesianism (subjective probability theory). U and P only represent the subjective state of an agent at a particular point in time. They don't say anything about how they should be changed over time. They only say that at any point in time, these functions (the agent's) should satisfy the axioms.
Rules for change over time would need separate assumptions. In Bayesian probability theory this is usually the rule of classical conditionalization or the more general rule of Jeffrey conditionalization. (Bayes' theorem alone doesn't say anything about updating; Bayes' rule = classical conditionalization + Bayes' theorem.)
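For concreteness, the two update rules can be stated as follows (these are the standard formulations, with $E$ the evidence proposition and $P_{\text{new}}$ the post-update distribution):

$$\text{Classical conditionalization: } P_{\text{new}}(A) = P(A \mid E)$$

$$\text{Jeffrey conditionalization: } P_{\text{new}}(A) = P(A \mid E)\,P_{\text{new}}(E) + P(A \mid \neg E)\,P_{\text{new}}(\neg E)$$

Classical conditionalization is the special case of Jeffrey conditionalization in which $P_{\text{new}}(E) = 1$.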
Regarding the utility of $a$: you write the probability part in the sum as $P(\omega|a) - P(\omega)$. But it is actually just $P(\omega|a)$!
To see this, start with the desirability axiom (for mutually exclusive $A$ and $B$):

$$U(A \lor B) = \frac{P(A)U(A) + P(B)U(B)}{P(A) + P(B)}$$

This doesn't tell us how to calculate $U(A)$, only $U(A \lor B)$. But we can write $A$ as the logically equivalent $(A \land B) \lor (A \land \neg B)$. This is a disjunction of mutually exclusive propositions, so we can apply the desirability axiom:

$$U(A) = U((A \land B) \lor (A \land \neg B)) = \frac{P(A \land B)U(A \land B) + P(A \land \neg B)U(A \land \neg B)}{P(A \land B) + P(A \land \neg B)}$$

This is equal to

$$U(A) = \frac{P(A \land B)U(A \land B) + P(A \land \neg B)U(A \land \neg B)}{P(A)}.$$

Since $\frac{P(A \land B)}{P(A)} = P(B|A)$, we have

$$U(A) = P(B|A)U(A \land B) + P(\neg B|A)U(A \land \neg B).$$

Since $A$ was chosen arbitrarily, it can be any proposition whatsoever. And since in Jeffrey's framework we only consider propositions, all actions are also described by propositions, presumably of the form "I now do x". Hence,

$$U(a) = P(B|a)U(a \land B) + P(\neg B|a)U(a \land \neg B)$$

for any $B$.
This proof can also be extended to longer disjunctions of mutually exclusive propositions, not just $B$ and $\neg B$. Hence, for a set $S$ of mutually exclusive propositions $s$:

$$U(a) = \sum_{s \in S} P(s|a)U(a \land s).$$

The set $\Omega$, the "set of all outcomes", is a special case of such an $S$ whose mutually exclusive elements $\omega$ have probabilities summing to 1. One interpretation is to regard each $\omega$ as describing one complete possible world. So,

$$U(a) = \sum_{\omega \in \Omega} P(\omega|a)U(a \land \omega).$$

But of course this holds for any proposition, not just an action $a$. This is the elegant thing about Jeffrey's decision theory which makes it so general: he doesn't need special types of objects (acts, states of the world, outcomes, etc.) and definitions associated with those.
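Here is a toy numerical sketch of that last formula in Python; the actions, worlds, probabilities, and utilities are all invented for illustration:

```python
# Toy sketch of U(a) = sum over worlds w of P(w|a) * U(a ∧ w).
# In Jeffrey's framework an action a is itself a proposition; here we simply
# assign a utility to each (action, world) conjunction directly.
# All numbers are made up.

actions = ["stay", "go"]
worlds = ["rain", "sun"]

# P(w | a): probability of each world given the action.
P_world_given_action = {
    ("stay", "rain"): 0.5, ("stay", "sun"): 0.5,
    ("go",   "rain"): 0.2, ("go",   "sun"): 0.8,
}

# U(a ∧ w): utility of "action a is done and world w obtains".
U_conjunction = {
    ("stay", "rain"):  1.0, ("stay", "sun"): 0.0,
    ("go",   "rain"): -2.0, ("go",   "sun"): 3.0,
}

def U(action: str) -> float:
    """Jeffrey-style utility of an action: sum_w P(w|a) * U(a ∧ w)."""
    return sum(P_world_given_action[(action, w)] * U_conjunction[(action, w)]
               for w in worlds)

for a in actions:
    print(a, U(a))  # stay: 0.5, go: 2.0
```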
Regarding the general formula for $U(A \lor B)$: your suggestion makes sense; I also think it should be expressible in terms of $U(A)$, $U(B)$, and $U(A \land B)$. I think I've got a proof.
Consider $(A \land B) \lor (A \land \neg B) \lor (\neg A \land B) \lor (\neg A \land \neg B) = \top$. The disjuncts are mutually exclusive. By the expected utility hypothesis (which should be provable from the desirability axiom) and by the $U(\top) = 0$ assumption, we have (writing $E(U(X))$ for $P(X)U(X)$):

$$0 = E(U(A \land B)) + E(U(A \land \neg B)) + E(U(\neg A \land B)) + E(U(\neg A \land \neg B)).$$

Then subtract the last term:

$$-E(U(\neg A \land \neg B)) = E(U(A \land B)) + E(U(A \land \neg B)) + E(U(\neg A \land B)).$$

Now since $E(U(A)) + E(U(\neg A)) = 0$ for any $A$, we have $E(U(\neg A)) = -E(U(A))$. Hence $-E(U(\neg A \land \neg B)) = E(U(\neg(\neg A \land \neg B)))$. By De Morgan, $\neg(\neg A \land \neg B) = A \lor B$. Therefore

$$E(U(A \lor B)) = E(U(A \land B)) + E(U(A \land \neg B)) + E(U(\neg A \land B)).$$

Now add $E(U(A \land B))$ to both sides:

$$E(U(A \lor B)) + E(U(A \land B)) = 2E(U(A \land B)) + E(U(A \land \neg B)) + E(U(\neg A \land B)).$$

Notice that $A = (A \land B) \lor (A \land \neg B)$ and $B = (A \land B) \lor (\neg A \land B)$. Therefore we can write

$$E(U(A \lor B)) + E(U(A \land B)) = E(U(A)) + E(U(B)).$$

Now subtract $E(U(A \land B))$, and we have

$$E(U(A \lor B)) = E(U(A)) + E(U(B)) - E(U(A \land B)),$$

which is equal to

$$P(A \lor B)U(A \lor B) = P(A)U(A) + P(B)U(B) - P(A \land B)U(A \land B).$$

So we have

$$U(A \lor B) = \frac{P(A)U(A) + P(B)U(B) - P(A \land B)U(A \land B)}{P(A \lor B)},$$

and hence our theorem

$$U(A \lor B) = \frac{P(A)U(A) + P(B)U(B) - P(A \land B)U(A \land B)}{P(A) + P(B) - P(A \land B)},$$

which we can also write as

$$U(A \lor B) = P(A|A \lor B)U(A) + P(B|A \lor B)U(B) - P(A \land B|A \lor B)U(A \land B).$$

Success!
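As a quick sanity check (not part of the proof), here is a small Python verification of the final formula on a made-up four-world probability space, with world utilities normalised so that $U(\top) = 0$:

```python
# Numerical check of
#   U(A∨B) = [P(A)U(A) + P(B)U(B) - P(A∧B)U(A∧B)] / [P(A) + P(B) - P(A∧B)]
# on an invented four-world space. Propositions are represented as sets of worlds.

worlds = ["AB", "Ab", "aB", "ab"]   # A∧B, A∧¬B, ¬A∧B, ¬A∧¬B
p = {"AB": 0.1, "Ab": 0.2, "aB": 0.3, "ab": 0.4}   # arbitrary probabilities
u = {"AB": 5.0, "Ab": 1.0, "aB": -2.0, "ab": 0.0}  # arbitrary world utilities

# Normalise so that U(⊤) = Σ p(w) u(w) = 0, as assumed in the derivation.
mean_u = sum(p[w] * u[w] for w in worlds)
u = {w: u[w] - mean_u for w in worlds}

def P(X): return sum(p[w] for w in X)
def U(X): return sum(p[w] * u[w] for w in X) / P(X)

A, B = {"AB", "Ab"}, {"AB", "aB"}
lhs = U(A | B)                      # U(A∨B) computed directly
rhs = (P(A) * U(A) + P(B) * U(B) - P(A & B) * U(A & B)) / (P(A) + P(B) - P(A & B))
print(lhs, rhs)                     # the two values agree
assert abs(lhs - rhs) < 1e-12
```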
Okay, now with $U(A \lor B)$ solved, what about the definition of $U(A|B)$? I think I've got it:

$$U(A|B) := U(A \land B) - U(B).$$

This correctly predicts that $U(A|A) = 0$. And it immediately leads to the plausible consequence $U(A \land B) = U(A|B) + U(B)$. I don't know how to further check whether this is the right definition, but I'm pretty sure it is.