"Whoever saves a single life, it is as if he had saved the whole world."
—The Talmud, Sanhedrin 4:5
That was the epigraph Eliezer used on a perfectly nice post reminding us to shut up and multiply when valuing human lives, rather than relying on the (roughly) logarithmic amount of warm fuzzies we'd receive. Implicit in the expected utility calculation is the idea that the value of human lives scales linearly: indeed, Eliezer explicitly says, "I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable."
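A quick way to see the size of the gap between the two valuations is to tabulate them side by side. The sketch below is my own illustration, not anything from Eliezer's post; the `linear_value` and `warm_fuzzies` functions, and the `log1p` stand-in for the roughly logarithmic fuzzies, are assumptions made purely for illustration.

```python
# Illustrative sketch (not from the original post): contrast a linear valuation
# of lives saved with a roughly logarithmic "warm fuzzies" response.
# The log1p stand-in for warm fuzzies is an assumption made for illustration.
import math

def linear_value(lives_saved, value_per_life=1.0):
    """Shut up and multiply: each additional life counts the same."""
    return value_per_life * lives_saved

def warm_fuzzies(lives_saved):
    """Roughly logarithmic emotional response to the number of lives saved."""
    return math.log1p(lives_saved)

for n in (1, 10, 1_000, 100_000):
    print(f"{n:>7} lives: linear value {linear_value(n):>9.1f}, "
          f"warm fuzzies {warm_fuzzies(n):.2f}")
```

Going from one life to a hundred thousand multiplies the linear value by a hundred thousand, but the felt response by only about seventeen, which is exactly why the advice is to multiply rather than consult the fuzzies.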
However, in a comment on Wei Dai's brilliant recent post comparing boredom and altruism, Vladimir Nesov points out that "you can value lives sublinearly" and still make an expected utility calculation rather than relying on warm-fuzzy intuition. This got me thinking about just what the functional form of U(N), the utility of N living persons, might be.
Attacking from the high end (the "marginal" calculation), it seems to me that the utility of human lives is actually superlinear to a modest degree[1]; that is, U(N+1) - U(N) > U(N) - U(N-1). As an example, consider a parent and young child. If you allow one of them to die, not only do you end that life, but you make the other one significantly worse off. And this generalizes: the marginal person (on average) produces positive net value to society (through being an employee, friend, spouse, etc.) in addition to accruing their own utilons, and economies of scale dictate that adding another person allows a little more specialization and hence a little more efficiency. That is, the larger the pool of potential co-workers/friends/spouses is, the pickier everyone can be, and the better matches they're likely to end up with. Steven Landsburg (in Fair Play) uses a version of this argument to conclude that children have positive externalities, and therefore that people on average have fewer children than would be optimal.
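To make the claimed inequality concrete, here is a toy check; this is a sketch of my own, and the mildly convex form N^1.05 is an arbitrary assumption rather than anything argued for above. For such a function the discrete marginal value U(N+1) - U(N) does exceed U(N) - U(N-1) at every population size.

```python
# Toy check that a mildly convex utility function has increasing marginal value,
# i.e. U(N+1) - U(N) > U(N) - U(N-1). The exponent 1.05 is an arbitrary assumption.
def utility(n, exponent=1.05):
    return n ** exponent

for n in (10, 1_000, 100_000):
    gain_ahead = utility(n + 1) - utility(n)
    gain_behind = utility(n) - utility(n - 1)
    print(n, gain_ahead > gain_behind)  # True for every n when exponent > 1
```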
In societies with readily available birth control, that is. And naturally, in societies which are insufficiently technological for each marginal person to make a contribution, however indirect, to (e.g.) the food output, it's quite easy for the utility of lives to be sublinear. This is the classical Malthusian problem, and it is still very much with us in the poorer areas of the world. (In fact, I was recently informed by a humor website that the Black Death had some very positive effects on medieval Europe.)
Now let's consider the other end of the problem (the "inductive" calculation). As an example, let's assume that humanity has been mostly supplanted by AIs or some alien species. I would certainly prefer to have at least one human still alive: such a person could represent humanity (and by extension, me), carry on our culture, values, and perspective on the universe, and generally push for our agenda. Adding a second human seems far less important, though still quite important, since social interaction with other humans is such a vital component of being human. Adding a third person would be less important still, and so on: a sublinear utility function.
So are the marginal calculation and the inductive calculation inconsistent? I don't think so: it's perfectly possible to have a utility function whose first derivative is complicated and non-monotonic. The two calculations are simply picking out two different terms of the function, which dominate in different regimes. Moreover, the linear approximation is probably good enough for most ordinary circumstances; let's just remember that it is an approximation.
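As a sanity check on that claim, here is one toy functional form with both terms. It is purely my own illustration: the logarithmic and power-law pieces and the constants a = 10, b = 1, exponent = 1.05 are assumptions, not anything derived above. Its discrete second difference is negative when few humans remain and positive once the population is large, matching the inductive and marginal arguments respectively.

```python
# A candidate U(N) whose first derivative is non-monotonic: concave (sublinear)
# for very small N, mildly convex (superlinear) for large N, and close to linear
# in between. The functional form and constants are arbitrary assumptions.
import math

def utility(n, a=10.0, b=1.0, exponent=1.05):
    return a * math.log1p(n) + b * n ** exponent

def second_difference(n):
    """Discrete analogue of U''(N): [U(N+1) - U(N)] - [U(N) - U(N-1)]."""
    return (utility(n + 1) - utility(n)) - (utility(n) - utility(n - 1))

for n in (2, 10, 100, 1_000, 10_000):
    regime = "sublinear (concave)" if second_difference(n) < 0 else "superlinear (convex)"
    print(f"N = {n:>6}: {regime}")
```

The sign flips between N = 100 and N = 1,000 in this run, but the location of the crossover is an artifact of the made-up constants; the point is only that a single well-behaved function can exhibit both regimes.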
[1] Note that in these arguments I'm averaging over "ability to create utility" (not to mention "capacity to experience utility").
The project of moving morality from brains into tools is the same project as moving arithmetic from brains into calculators: you are more likely to get a correct answer, and you become able to answer questions that are orders of magnitude more difficult. If the state of the tool is such that the intuitive answer is better, then one should embrace intuitive answers (for now). The goal is to eventually get a framework that is actually better than intuitive answers in at least some nontrivial area of applicability (or to work toward that goal while it remains out of reach).
The problem with "moral codes" is that they are mostly insane, overconfidently treating rather confused raw material as useful answers. Trying to finally get it right is not the same as welcoming insanity, although the risk is always there.
You say: It's possible to specify a utility function such that, if we feed it to a strong optimization process, the result will be good.
I say: Yeah? Why do you think so? What little evidence we currently have isn't on your side.