"Whoever saves a single life, it is as if he had saved the whole world."
—The Talmud, Sanhedrin 4:5
That was the epigraph Eliezer used on a perfectly nice post reminding us to shut up and multiply when valuing human lives, rather than relying on the (roughly) logarithmic amount of warm fuzzies we'd receive. Implicit in the expected utility calculation is the idea that the value of human lives scales linearly: indeed, Eliezer explicitly says, "I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable."
However, in a comment on Wei Dai's brilliant recent post comparing boredom and altruism, Vladimir Nesov points out that "you can value lives sublinearly" and still make an expected utility calculation rather than relying on warm-fuzzy intuition. This got me thinking about just what the functional form of U(N), where N is the number of living persons, might be.
Attacking from the high end (the "marginal" calculation), it seems to me that the utility of human lives is actually superlinear to a modest degree[1]; that is, U(N+1) - U(N) > U(N) - U(N-1). As an example, consider a parent and young child: if you allow one of them to die, not only do you end that life, but you make the survivor significantly worse off. This generalizes: the marginal person (on average) produces positive net value to society (through being an employee, friend, spouse, etc.) in addition to accruing their own utilons, and economies of scale dictate that adding another person allows a little more specialization and hence a little more efficiency. That is, the larger the pool of potential co-workers/friends/spouses, the pickier everyone can be, and the better the matches they're likely to end up with. Steven Landsburg (in Fair Play) uses a version of this argument to conclude that children have positive externalities, and that people therefore have fewer children, on average, than would be optimal.
In societies with readily available birth control, that is. And naturally, in societies which are insufficiently technological for each marginal person to make a contribution, however indirect, to (e.g.) the food output, it's quite easy for the utility of lives to be sublinear. This is the classical Malthusian problem, and it is still very much with us in the poorer areas of the world. (In fact, I was recently informed by a humor website that the Black Death had some very positive effects for medieval Europe.)
Now let's consider the other end of the problem (the "inductive" calculation). As an example let's assume that humanity has been mostly supplanted by AIs or some alien species. I would certainly prefer to have at least one human still alive: such a person could represent humanity (and by extension, me), carry on our culture, values and perspective on the universe, and generally push for our agenda. Adding a second human seems far less important—but still quite important, since social interactions (with other humans) are such a vital component of humanity. So adding a third person would be less important still, and so on. A sublinear utility function.
So are the marginal calculation and the inductive calculation inconsistent? I don't think so: it's perfectly possible to have a utility function whose first derivative is complicated and non-monotonic. The two calculations are simply presenting two different terms of the function, which are dominant in different regimes. Moreover, the linear approximation is probably good enough for most ordinary circumstances; let's just remember that it is an approximation.
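As a toy illustration of "different terms dominant in different regimes" (the functional form and parameters here are my own invention, purely for illustration): a sublinear (logarithmic) term that dominates when N is small, plus a mildly superlinear power term that dominates when N is large, gives decreasing marginal value at the low end and increasing marginal value at the high end.

```python
import math

# Hypothetical utility of N human lives (illustrative parameters only):
# a sublinear log term dominates for small N (the "inductive" regime),
# a mildly superlinear power term dominates for large N (the "marginal" regime).
def U(n, a=10.0, b=0.001, p=1.1):
    return a * math.log(n) + b * n**p

def marginal(n):
    """Value of the (n+1)th life: U(n+1) - U(n)."""
    return U(n + 1) - U(n)

# At small N, each additional life is worth less than the last...
assert marginal(2) > marginal(10)
# ...while at large N, each additional life is worth slightly more.
assert marginal(10**6) > marginal(10**5)
```

The crossover point depends entirely on the made-up constants, of course; the point is only that a single smooth function can exhibit both behaviors.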
1. Note that in these arguments I'm averaging over people's "ability to create utility" (not to mention their "capacity to experience utility").
This isn't clear. Preferences of any actual human seem to form a directed graph, but it's incomplete and can contain cycles. Any way to transform it into a complete acyclic graph (any pair of situations comparable, no preference loops) must differ from the original graph somewhere. Different algorithms will destroy different facets of actual human preference, but there's certainly no algorithm that can preserve all of it; that much we can consider already proven beyond reasonable doubt. It's not obvious to me that there's a single, well-defined, canonical way to perform this surgery.
And it's not at all obvious that going from a single human to an aggregate of all humanity will mitigate the problem (see Torture vs Specks). That's just too many leaps of faith.
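The cycle point above can be made concrete with a toy sketch (the graph and names are my own illustration, not from the comment): given a preference cycle A > B > C > A, every candidate total order must contradict at least one edge, and different orders sacrifice different edges.

```python
from itertools import permutations

# Toy cyclic preference graph: (x, y) means "x is preferred to y".
prefs = {("A", "B"), ("B", "C"), ("C", "A")}

def violations(order, edges):
    """Return the preference edges contradicted by a candidate total order."""
    rank = {x: i for i, x in enumerate(order)}
    return {(x, y) for (x, y) in edges if rank[x] > rank[y]}

# No total order satisfies all three preferences...
assert all(violations(order, prefs) for order in permutations("ABC"))

# ...and different orders destroy different facets of the original graph.
assert violations(("A", "B", "C"), prefs) == {("C", "A")}
assert violations(("B", "C", "A"), prefs) == {("A", "B")}
```

Any "surgery" that linearizes the graph must pick which edge to break, and nothing in the graph itself singles out a canonical choice.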
I agree/upvoted your point. Human preferences are cyclic. I'd go further and say that without at least an acyclic preference graph, it is not possible to optimise a decision at all; the very thought seems meaningless.
Assuming one can establish coherent preferences, the question of whether one should optimise for expected utility encounters a further complication: many human preferences refer to our actions and not to outcomes. An agent could in fact decide to optimise for making 'Right' choices and to hell with the consequences. They could cho...