For instance, it seems that there is only a small difference between the happiness of richer nations and poorer nations, while the former consume many more resources than the latter. Hence to increase utility we should simply kill off all the rich, and let the poor multiply to take their place (continually bumping off any of the poor who get too rich).
This empirical claim seems ludicrously wrong, which I find distracting from the ethical claims. Most people in rich countries (except for those unable or unwilling to work or produce kids who will) are increasing the rate of technological advance by creating demand for improved versions of products, paying taxes, contributing to above-average local political cultures, and so on. Such advance dominates resource consumption in affecting the welfare of the global poor (and long-term welfare of future people). They make charitable donations or buy products that enrich people like Bill Gates and Warren Buffett who make highly effective donations, and pay taxes for international aid.
The scientists and farmers use thousands of products and infrastructure provided by the rest of society, and this neglects industry, resource extract...
There is no natural scale on which to compare utility functions. [...] Unless your theory comes with a particular [interpersonal utility comparison] method, the only way of summing these utilities is to make an essentially arbitrary choice for each individual before summing. Thus standard total utilitarianism is an arbitrary sum of ill-defined, non-natural objects.
This, in my opinion, is by itself a decisive argument against utilitarianism. Without these ghostly "utilities" that are supposed to be measurable and comparable interpersonally, the whole concept doesn't even begin to make sense. And yet the problem is routinely and nonchalantly ignored, even here, where people pride themselves on fearless and consistent reductionism.
Note that the problem is much more fundamental than just the mathematical difficulties and counter-intuitive implications of formal utilitarian theories. Even if there were no such problems, it would still be the case that the whole theory rests on an entirely imaginary foundation. Ultimately, it's a system that postulates some metaphysical entities and a categorical moral imperative stated in terms of the supposed state of these entities...
In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and create/give birth to another being of comparable happiness.
I stopped reading here. To me, "total utilitarianism" means maximizing the sum of the values of individual lives. There's nothing forcing a total utilitarian to value a life by adding the happiness experienced in each moment of the life, without further regard to how the moments fit together (e.g. whether they fulfill someone's age-old hopes).
In general, people seem to mean different things by "utilitarianism", so any criticism needs to spell out what version of utilitarianism it's attacking, and acknowledge that the particular version of utilitarianism may not include everyone who self-identifies as a utilitarian.
A utility function does not compel total (or average) utilitarianism
Does anyone actually think this? Thinking that utility functions are the right way to talk about rationality !=> utilitarianism. Or any moral theory, as far as I can tell. I don't think I've seen anyone on LW actually arguing that implication, although I think most would affirm the antecedent.
...There is a seemingly sound argument for the repugnant conclusion, which goes some way towards making total utilitarianism plausible. It goes like this... If all these steps increase the quality of the outcome...
What seems to be overlooked in most discussions about total hedonistic utilitarianism is that the proponents often have a specific (Parfitean) view about personal identity, which leads to either empty or open individualism. Based on that, they may hold that it is no more rational to care about one's own future self than it is to care about any other future self. "Killing" a being would then just be failing to let a new moment of consciousness come into existence. And any notion of "preferences" would no longer really make sense, except instrumentally.
A smaller critique of total utilitarianism:
In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and create/give birth to another being of comparable happiness (or preference satisfaction or welfare).
You can just finish there.
(In case the "sufficient cause to reject total utilitarianism" isn't clear: I don't like murder. Total utilitarianism advocates it in all sorts of scenarios that I would not. Therefore, total utilitarianism is Evil.)
Well, one reason is that there are models that make use of something like total utilitarianism to great effect. Classical economic theory, for instance, models everyone as perfectly rational expected utility maximisers.
I think you actually slightly understate the case against utilitarianism. Yes, classical economics uses expected utility maximisers - but it prefers to deal with Pareto improvements (or Kaldor-Hicks improvements) rather than try to do interpersonal utility comparisons.
Total utilitarianism is defined as maximising the sum of everyone's individual utility function.
That seems misleading. Most of the time "total utilitarianism" refers to what should actually be called "hedonistic total utilitarianism". And what is maximised there is the surplus of happiness over suffering (positive hedonic states over negative ones), which isn't necessarily synonymous with individual utility functions.
There are three different parameters for the various kinds of utilitarianism: It can either be total or average or pr...
Upvoted, but as someone who, without quite being a total utilitarian, at least hopes someone might be able to rescue total utilitarianism, I don't find much to disagree with here. Points 1, 4, 5, and 6 are arguments against certain claims that total utilitarianism should be obviously true, but not arguments that it doesn't happen to be true.
Point 2 states that total utilitarianism won't magically implement itself and requires "technology" rather than philosophy; that is, people have to come up with specific contingent techniques of estimating ut...
Here's how I see this issue (from a philosophical point of view):
Moral value is, in the most general form, a function of the state of a structure, for lack of a better word. The structure may be just 10 neurons in isolation, for which the moral worth may well be exactly zero, or it may be 7 billion blobs of about 10^11 neurons who communicate with each other, or it may be a lot of data on a hard drive, representing a stored upload.
The moral value of two interconnected structures, in general, does not equal the sum of moral value of each structure (example: whole ...
In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and create/give birth to another being of comparable happiness (or preference satisfaction or welfare).
Just wanted to note that this is too strong a statement. There is no requirement for the 1:1 ratio in "total utilitarianism". You end up with the "repugnant conclusion" to Parfit's "mere addition" argument as long as this ratio is finite (known as "birth-death asymmetry"). For example, one may argue ...
You know, I've felt that examining the dust speck vs torture dilemma or stuff like that, finding a way to derive an intuitively false conclusion from intuitively true premises, and thereby concluding that the conclusion must be true after all (rather than that there's some kind of flaw in the proof you can't see yet) is analogous to seeing a proof that 0 equals 1 or that a hamburger is better than eternal happiness or that no feather is dark, not seeing the mistake in the proof straight away, and thereby concluding that the conclusion must be true. Does anyone else feel the same?
Sure.
But it's not like continuing to endorse my intuitions in the absence of any justification for them, on the assumption that all arguments that run counter to my intuitions, however solid they may seem, must be wrong because my intuitions say so, is noticeably more admirable.
When my intuitions point in one direction and my reason points in another, my preference is to endorse neither direction until I've thought through the problem more carefully. What I find often happens is that on careful thought, my whole understanding of the problem tends to alter, after which I may end up rejecting both of those directions.
Yes. The trouble with "shut up and multiply" - beyond assuming that humans have a utility function at all - is assuming that utility works like conventional arithmetic and that you can in fact multiply.
There's also measuring and shut-up-and-multiplying the wrong thing: e.g., seeing people willing to pay about the same in total to save 2,000 birds or 20,000 birds and claiming this constitutes "scope insensitivity." The error is assuming this means that people are scope-insensitive, rather than realising that people aren't buying saved birds at all, but are paying what they're willing to pay for warm fuzzies in general - a constant amount.
The attraction of utilitarianism is that calculating actions would be so much simpler if utility functions existed and their output could be added with the same sort of rules as conventional arithmetic. This does not, however, constitute non-negligible evidence that any of the required assumptions hold.
If arithmetical utilitarianism works so well, it should work in weird territory too.
Note the bank robbery thread below. Someone claims that "the utilitarian math" shows that robbing banks and donating to charity would have the best consequences. But they don't do any math or look up basic statistics to do a Fermi calculation. A few minutes of effort shows that bank robbery actually pays much worse than working as a bank teller over the course of a career (including jail time, etc).
In Giving What We Can there are several people who donate half their income (or all income above a Western middle class standard of living) to highly efficient charities helping people in the developing world. They expect to donate millions of dollars over their careers, and to have large effects on others through their examples and reputations, both as individuals and via their impact on organizations like Giving What We Can. They do try to actually work things out, and basic calculations easily show that running around stealing organs or robbing banks would have terrible consequences, thanks to strong empirical regularities:
Crime mostly doesn't pay. Bank robbers, drug dealers, and the like ma
Because, if you do the utilitarian math, robbing banks and giving the proceeds to charity is still a good deal
Bank robbery is actually unprofitable. Even setting aside reputation (personal and for one's ethos), "what if others reasoned similarly," the negative consequences of the robbery, and so forth, you'd generate more expected income working an honest job. This isn't a coincidence. Bank robbery hurts banks, insurers, and ultimately bank customers, and so they are willing to pay to make it unprofitable.
According to a study by British researchers Barry Reilly, Neil Rickman and Robert Witt written up in this month's issue of the journal Significance, the average take from a U.S. bank robbery is $4,330. To put that in perspective, PayScale.com says bank tellers can earn as much as $28,205 annually. So, a bank robber would have to knock over more than six banks in a year, facing increasing risk with each robbery, to match the salary of the tellers he's holding up.
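A quick back-of-the-envelope check of those figures (a minimal sketch; the two numbers are the ones quoted above, and jail time and arrest risk are ignored, which only makes robbery look worse):

```python
# Fermi check of the figures quoted above.
average_take_per_robbery = 4_330   # USD, average haul from a U.S. bank robbery
annual_teller_salary = 28_205      # USD, upper-end bank teller salary

robberies_to_match_salary = annual_teller_salary / average_take_per_robbery
print(f"Robberies per year needed to match a teller's salary: "
      f"{robberies_to_match_salary:.1f}")   # ~6.5, i.e. "more than six banks"
```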
Another problem with the repugnant conclusion is economic: it assumes that the cost of creating and maintaining additional barely-worth-living people is negligibly small.
The problem here seems to be that the theories don't take all the things we value into account. It's therefore less certain whether their functions actually match our morals. If you calculate utility using only some of your utility values, you're not going to get the correct result. If you're trying to sum the set {1,2,3,4} but you only use 1, 2 and 4 in the calculation, you're going to get the wrong answer. Outside of special cases like "multiply each item by zero", it doesn't matter whether you add, subtract or divide, the answer will still be wrong...
It's a good and thoughtful post.
Going through the iteration, there will come a point when the human world is going to lose its last anime, its last opera, its last copy of the Lord of the Rings, its last mathematics, its last online discussion board, its last football game - anything that might cause more-than-appropriate enjoyment. At that stage, would you be entirely sure that the loss was worthwhile, in exchange for a weakly defined "more equal" society?
I wonder if it makes sense to model a separate variable in the global utility function f...
For instance, it seems that there is only a small difference between the happiness of richer nations and poorer nations
What is happiness? If happiness is the "utility" that people maximise (is it?), and the richer are only slightly happier than the poorer (cite?), why is it that when people have the opportunity to vote with their feet, people in poor nations flock to richer nations whenever they can, and do not want to return?
In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and create/give birth to another being of comparable happiness. In fact, if one can kill a billion people to create a billion and one, one is morally compelled to do so.
I dare say that no self-professed "total utilitarian" actually aliefs this.
A more general problem with utilitarianisms, including those that evade the critique in that article:
Suppose we have a computer running a brain sim (along with a VR environment). The brain sim works as follows: given the current state, the next state is calculated (using multiple CPUs in parallel); the current state is read-only, the next state is write-only. Think arrays of synaptic values. After all of the next state is calculated, the arrays are switched and the old state data is written over. This is a reductionist model of 'living' that is rather easy to think...
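For concreteness, here is a minimal sketch of the double-buffered update scheme described above; the state representation and the update rule are hypothetical toy stand-ins, not a claim about how an actual brain sim would work:

```python
# Toy double-buffered update: the current state is only read, the next state
# is only written, and the buffers are swapped after each full step (at which
# point the old state data gets overwritten).
def step(current, next_value):
    next_state = [0.0] * len(current)        # write-only buffer for this step
    for i in range(len(current)):            # each index could run on its own CPU
        next_state[i] = next_value(current, i)
    return next_state                        # swap: this becomes the new current

# Hypothetical toy rule: each unit relaxes towards the mean of its neighbours.
def toy_rule(state, i):
    left, right = state[i - 1], state[(i + 1) % len(state)]
    return 0.5 * state[i] + 0.25 * (left + right)

state = [1.0, 0.0, 0.0, 0.0]
for _ in range(3):
    state = step(state, toy_rule)
print(state)
```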
In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and create/give birth to another being of comparable happiness. In fact, if one can kill a billion people to create a billion and one, one is morally compelled to do so. And this is true for real people, not just thought experiment people - living people with dreams, aspirations, grudges and annoying or endearing quirks.
Keep in mind that the people being brought into existence will be equally real people, with dreams, aspirations, grudges, and annoying or endearing quirks...
An argument that I have met occasionally is that while other ethical theories such as average utilitarianism, birth-death asymmetry, path dependence, preferences for non-loss of culture, etc. may have some validity, total utilitarianism wins as the population increases because the others don't scale in the same way. By the time we reach the trillion trillion trillion mark, total utilitarianism will completely dominate, even if we gave it little weight at the beginning.
I'll admit I haven't encountered this argument before, but to me it looks like a type...
I like that article. I wrote something on another problem with utilitarianism.
Also, by the way, regarding the use of the name of Bayes, you really should thoroughly understand this paper and also get some practice solving belief propagation approximately on not-so-small networks full of loops and cycles (or any roughly isomorphic problem), to form an opinion on self-described Bayesianists.
...And the sum itself is a huge problem. There is no natural scale on which to compare utility functions. Divide one utility function by a billion, multiply the other by e^π, and they are still perfectly valid utility functions. In a study group at the FHI, we've been looking at various ways of combining utility functions - equivalently, of doing interpersonal utility comparisons (IUC). It turns out to be very hard: there seems to be no natural way of doing this, and a lot has also been written about this, concluding little. Unless your theory comes with a particular IUC method...
In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and create/give birth to another being of comparable happiness (or preference satisfaction or welfare).
Other members of society typically fail to approve of murder, and would apply sanctions to the utilitarian - probably hindering them in their pursuit of total utility. So, in practice, a human being pursuing total utilitarianism would simply not act in this way.
Good article! Here are a few related questions:
The problem of comparing different people's utility functions applies to average utilitarianism as well, doesn't it? For instance, if your utility function is U and my utility function is V, then the average could be (U + V)/2: however, utility functions can be rescaled by any positive linear transformation, so let's make mine 1000000 x V. Now the average is U/2 + 500000 x V, which seems totally fair, doesn't it? Is the right solution here to assume that each person's utility has a "best possible" case, and a "
I think I agree with your conclusion. But this:
to increase utility we should simply kill off all the rich, and let the poor multiply to take their place (continually bumping off any of the poor who get too rich).
requires you to assume that the US or "the rich" have no relevant chance of producing vastly happier people in the future. This seems stronger than denying the singularity as such. And it makes targeted killing feel much more attractive to this misanthrope.
Only a slightly relevant question which nevertheless I haven't yet seen addressed: If a utilitarian desires to maximise other people's utilities and the other people are utilitarians themselves, also deriving their utility from the utilities of others (the original utilitarian included), doesn't that make utilitarianism impossible to define? The consensus seems to be that one can't take one's own mental states for argument of one's own utility function. But utilitarians rarely object to plugging others' mental states into their utility functions, so the danger of circularity isn't avoided. Is there some clever solution to this?
Why then is it so popular? Well, one reason is that there are models that make use of something like total utilitarianism to great effect. Classical economic theory, for instance, models everyone as perfectly rational expected utility maximisers.
Surely that is not the reason. Firstly, utilitarianism is not that popular. My theory about why it has any adherents at all is that it is used for signalling purposes. One use of moral systems is to broadcast what a nice person you are. Utilitarianism is a super-unselfish moral system. So, those looking for a niceness superstimulus are attracted. I think this pretty neatly explains the 'utilitarianism' demographics.
A population of TDT agents with different mostly-selfish preferences should end up with actions that closely resemble total utilitarianism for a fixed population, but oppose the adding of people at the subsistence level followed by major redistribution. (Or so it seems to me. And don't ask me what UDT would do.)
I have no interest in defending utilitarianism, but I do have an interest in a total welfare (yes I think such a concept can make sense) of sentient beings. The repugnance of the Repugnant Conclusion, I suggest, is a figment of your lack of imagination. When you imagine a universe with trillions of people whose lives are marginally worth living, you probably imagine people whose lives are a uniform grey, just barely closer to light than darkness. In other words, agonizingly boring lives. But this is unnecessary and prejudicial. Instead, imagine people...
In Austrian economics, using the framework of praxeology, the claim is made that preferences (the rough equivalent of utilities) cannot be mapped to cardinal values, but different states of the world are still well ordered by an individual's preferences, such that one world state can be said to be more or less desirable than another. This makes it impossible to numerically compare the preferences of two individuals except through the pricing/exchange mechanism of economics. E.g. would 1 billion happy people exchange their own death for the existe...
In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and create/give birth to another being of comparable happiness. In fact, if one can kill a billion people to create a billion and one, one is morally compelled to do so. And this is true for real people, not just thought experiment people - living people with dreams, aspirations, grudges and annoying or endearing quirks. To avoid causing extra pain to those left behind, it is better that you kill off whole families and communities, so that no one is left to mourn the dead. In fact the most morally compelling act would be to kill off the whole of the human species, and replace it with a slightly larger population.
We have many real world analogues to this thought experiment. For instance, it seems that there is only a small difference between the happiness of richer nations and poorer nations, while the former consume many more resources than the latter. Hence to increase utility we should simply kill off all the rich, and let the poor multiply to take their place (continually bumping off any of the poor who get too rich). Of course, the rich world also produces most of the farming surplus and the technological innovation, which allow us to support a larger population. So we should aim to kill everyone in the rich world apart from farmers and scientists - and enough support staff to keep these professions running (Carl Shulman correctly points out that we may require most of the rest of the economy as "support staff". Still, it's very likely that we could kill off a significant segment of the population - those with the highest consumption relative to their impact on farming and science - and still "improve" the situation).
Even if it turns out to be problematic to implement in practice, a true total utilitarian should be thinking: "I really, really wish there was a way to do targeted killing of many people in the USA, Europe and Japan, large parts of Asia and Latin America and some parts of Africa - it makes me sick to the stomach to think that I can't do that!" Or maybe: "I really, really wish I could make everyone much poorer without affecting the size of the economy - I wake up at night with nightmares because these people remain above the poverty line!"
I won't belabour the point. I find those actions personally repellent, and I believe that nearly everyone finds them somewhat repellent or at least did so at some point in their past. This doesn't mean that it's the wrong thing to do - after all, the accepted answer to the torture vs dust speck dilemma feels intuitively wrong, at least the first time. It does mean, however, that there must be very strong countervailing arguments to balance out this initial repulsion (maybe even a mathematical theorem). For without that... how to justify all this killing?
Hence for the rest of this post, I'll be arguing that total utilitarianism is built on a foundation of dust, and thus provides no reason to go against your initial intuitive judgement in these problems. The points will be:
A utility function does not compel total (or average) utilitarianism
There are strong reasons to suspect that the best decision process is one that maximises expected utility for a particular utility function. Any process that does not do so leaves itself open to being money-pumped or taken advantage of. This point has been reiterated again and again on Less Wrong, and rightly so.
Your utility function must be over states of the universe - but that's the only restriction. The theorem says nothing further about the content of your utility function. If you prefer a world with a trillion ecstatic super-humans to one with a septillion subsistence farmers - or vice versa - then as long as you maximise your expected utility, the money pumps can't touch you, and the standard Bayesian arguments don't influence you to change your mind. Your values are fully rigorous.
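To make the point concrete, here is a minimal sketch of an expected utility maximiser whose utility function is over world-states; the states, probabilities and utility numbers are invented for illustration. Whatever values the function encodes - a trillion ecstatic super-humans or a septillion subsistence farmers - the decision rule is the same:

```python
# Pick the action with the highest expected utility, for an arbitrary
# utility function over world-states. Nothing here constrains which
# worlds the utility function happens to value.
def best_action(actions, utility):
    """actions: {name: {state: probability}}, utility: {state: number}."""
    def expected_utility(outcome_dist):
        return sum(p * utility[state] for state, p in outcome_dist.items())
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical example: two candidate utility functions over the same states
# rank the same actions differently, and each is internally consistent.
actions = {
    "A": {"large_poor_world": 0.9, "empty_world": 0.1},
    "B": {"small_rich_world": 0.9, "empty_world": 0.1},
}
total_style = {"large_poor_world": 1e15, "small_rich_world": 1e3, "empty_world": 0}
anti_repugnant = {"large_poor_world": 1, "small_rich_world": 100, "empty_world": 0}
print(best_action(actions, total_style))     # "A"
print(best_action(actions, anti_repugnant))  # "B"
```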
For instance, in the torture vs dust speck scenario, average utilitarianism also compels you to take torture, as do a host of other possible utility functions. A lot of arguments around this subject that may implicitly feel like arguments in favour of total utilitarianism turn out to be nothing of the sort. For instance, avoiding scope insensitivity does not compel you to total utilitarianism, and you can perfectly well allow birth-death asymmetries or similar intuitions, while remaining an expected utility maximiser.
Total utilitarianism is neither simple nor elegant, but arbitrary
Total utilitarianism is defined as maximising the sum of everyone's individual utility function. That's a simple definition. But what are these individual utility functions? Do people act like expected utility maximisers? In a word... no. In another word... NO. In yet another word... NO!
So what are these utilities? Are they the utility that the individuals "should have"? According to what and whose criteria? Is it "welfare"? How is that defined? Is it happiness? Again, how is that defined? Is it preferences? On what scale? And what if the individual disagrees with the utility they are supposed to have? What if their revealed preferences are different again?
There are (various different) ways to start resolving these problems, and philosophers have spent a lot of ink and time doing so. The point remains that total utilitarianism cannot claim to be a simple theory, if the objects that it sums over are so poorly and controversially defined.
And the sum itself is a huge problem. There is no natural scale on which to compare utility functions. Divide one utility function by a billion, multiply the other by e^π, and they are still perfectly valid utility functions. In a study group at the FHI, we've been looking at various ways of combining utility functions - equivalently, of doing interpersonal utility comparisons (IUC). It turns out to be very hard: there seems to be no natural way of doing this, and a lot has also been written about this, concluding little. Unless your theory comes with a particular IUC method, the only way of summing these utilities is to make an essentially arbitrary choice for each individual before summing. Thus standard total utilitarianism is an arbitrary sum of ill-defined, non-natural objects.
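A toy illustration of that arbitrariness (all numbers hypothetical): two people, two outcomes, and two equally valid scalings of the second person's utility function yield opposite verdicts about which outcome has the greater total:

```python
# Person 1's and person 2's utilities for outcomes X and Y, each on its own
# arbitrary scale (any positive rescaling gives an equally valid utility function).
u1 = {"X": 10.0, "Y": 0.0}
u2 = {"X": 0.0, "Y": 1.0}

def total(scale2, outcome):
    return u1[outcome] + scale2 * u2[outcome]

for scale2 in (1.0, 100.0):   # both are legitimate rescalings of u2
    better = max(("X", "Y"), key=lambda o: total(scale2, o))
    print(f"with person 2's utility scaled by {scale2}: prefer {better}")
# scale 1.0   -> prefer X
# scale 100.0 -> prefer Y
```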
Why then is it so popular? Well, one reason is that there are models that make use of something like total utilitarianism to great effect. Classical economic theory, for instance, models everyone as perfectly rational expected utility maximisers. It gives good predictions - but it remains a model, with a domain of validity. You wouldn't conclude from that economic model that, say, mental illnesses don't exist. Similarly, modelling each life as having the same value and maximising expected lives saved is sensible and intuitive in many scenarios - but not necessarily all.
Maybe if we had a bit more information about the affected populations, we could use a more sophisticated model, such as one incorporating quality-adjusted life years (QALYs). Or maybe we could let other factors affect our thinking - what if we had to choose between saving a population of 1000 versus a population of 1001, with the same average QALYs, but where the first set contained the entire Awá tribe/culture of 300 people, and the second was made up of representatives from much larger ethnic groups, much more culturally replaceable? Should we let that influence our decision? Well, maybe we should, maybe we shouldn't, but it would be wrong to say "well, I would really like to save the Awá, but the model I settled on earlier won't allow me to, so I'd best follow the model". The models are there precisely to model our moral intuitions (the clue is in the name), not to freeze them.
The repugnant conclusion is at the end of a flimsy chain
There is a seemingly sound argument for the repugnant conclusion, which goes some way towards making total utilitarianism plausible. It goes like this:
If all these steps increase the quality of the outcome (and it seems intuitively that they do), then the end state must be better than the starting state, agreeing with total utilitarianism. So, what could go wrong with this reasoning? Well, as seen before, the term "utility" is very much undefined, as is its scale - hence "egalitarian" is extremely undefined too. So this argument is not mathematically precise; its rigour is illusory. And when you recast the argument in qualitative terms, as you must, it becomes much weaker.
Going through the iteration, there will come a point when the human world is going to lose its last anime, its last opera, its last copy of the Lord of the Rings, its last mathematics, its last online discussion board, its last football game - anything that might cause more-than-appropriate enjoyment. At that stage, would you be entirely sure that the loss was worthwhile, in exchange for a weakly defined "more equal" society? More to the point, would you be sure that when iterating this process billions of times, every redistribution will be an improvement? This is a conjunctive statement, so you have to be nearly entirely certain of every link in the chain if you want to believe the outcome. And, to reiterate, these links cannot be reduced to simple mathematical statements - you have to be certain that each step is qualitatively better than the previous one.
And you also have to be certain that your theory does not allow path dependency. One can take the perfectly valid position that "If there were an existing poorer population, then the right thing to do would be to redistribute wealth, and thus lose the last copy of Akira. However, currently there is no existing poor population, hence I would oppose it coming into being, precisely because it would result in the loss of Akira." You can reject this type of reasoning, and a variety of others that block the repugnant conclusion at some stage of the chain (the Stanford Encyclopedia of Philosophy has a good entry on the Repugnant Conclusion and the arguments surrounding it). But most reasons for doing so already presuppose total utilitarianism. In that case, you cannot use the above as an argument for your theory.
Hypothetical beings have hypothetical (and complicated) things to say to you
There is another major strand of argument for total utilitarianism, which claims that we owe it to non-existent beings to satisfy their preferences, that they would prefer to exist rather than remain non-existent, and hence we should bring them into existence. How does this argument fare?
First of all, it should be emphasised that one is free to accept or reject that argument without any fear of inconsistency. If one maintains that never-existent beings have no relevant preferences, then one will never stumble over a problem. They don't exist, they can't make decisions, they can't contradict anything. In order to raise them to the point where their decisions are relevant, one has to raise them to existence, in reality or in simulation. By the time they can answer "would you like to exist?", they already do, so you are talking about whether or not to kill them, not whether or not to let them exist.
But secondly, it seems that the "non-existent beings" argument is often advanced for the sole purpose of arguing for total utilitarianism, rather than as a defensible position in its own right. Rarely are its implications analysed. What would a proper theory of non-existent beings look like?
Well, for a start the whole happiness/utility/preference problem comes back with extra sting. It's hard enough to make a utility function out of real world people, but how to do so with hypothetical people? Is it an essentially arbitrary process (dependent entirely on "which types of people we think of first"), or is it done properly, teasing out the "choices" and "life experiences" of the hypotheticals? In that last case, if we do it in too much detail, we could argue that we've already created the being in simulation, so it comes back to the death issue.
But imagine that we've somehow extracted a utility function from the preferences of non-existent beings. Apparently, they would prefer to exist rather than not exist. But is this true? There are many people in the world who would prefer not to commit suicide, but would not mind much if external events ended their lives - they cling to life as a habit. Presumably non-existent versions of them "would not mind" remaining non-existent.
Even for those that would prefer to exist, we can ask questions about the intensity of that desire, and how it compares with their other desires. For instance, among these hypothetical beings, some would be mothers of hypothetical infants, leaders of hypothetical religions, inmates of hypothetical prisons, and would only prefer to exist if they could bring/couldn't bring the rest of their hypothetical world with them. But this is ridiculous - we can't bring the hypothetical world with them, they would grow up in ours - so are we only really talking about the preferences of hypothetical babies, or hypothetical (and non-conscious) foetuses?
If we do look at adults, bracketing the issue above, then we get some that would prefer that they not exist, as long as certain others do - or conversely that they not exist, as long as others also do not exist. How should we take that into account? Assuming the universe is infinite, any hypothetical being would exist somewhere. Is mere existence enough, or do we have to have a large measure or density of existence? Do we need them to exist close to us? Are their own preferences relevant - i.e. do we only have a duty to bring into the world those beings that would desire to exist in multiple copies everywhere? Or do we feel these have already "enough existence" and select the under-counted beings? What if very few hypothetical beings are total utilitarians - is that relevant?
On a more personal note, every time we make a decision, we eliminate a particular being. We can no longer be the person who took the other job offer, or read the other book at that time and place. As these differences accumulate, we diverge quite a bit from what we could have been. When we do so, do we feel that we're killing off these extra hypothetical beings? Why not? Should we be compelled to lead double lives, assuming two (or more) completely separate identities, to increase the number of beings in the world? If not, why not?
These are some of the questions that a theory of non-existent beings would have to grapple with, before it can become an "obvious" argument for total utilitarianism.
Moral uncertainty: total utilitarianism doesn't win by default
An argument that I have met occasionally is that while other ethical theories such as average utilitarianism, birth-death asymmetry, path dependence, preferences for non-loss of culture, etc. may have some validity, total utilitarianism wins as the population increases because the others don't scale in the same way. By the time we reach the trillion trillion trillion mark, total utilitarianism will completely dominate, even if we gave it little weight at the beginning.
But this is the wrong way to compare competing moral theories. Just as different people's utilities don't have a common scale, different moral utilities don't have a common scale. For instance, would you say that square-total utilitarianism is certainly wrong? This theory is simply total utilitarianism further multiplied by the population; it would correspond roughly to the number of connections between people. Or what about exponential-square-total utilitarianism? This would correspond roughly to the set of possible connections between people. As long as we think that exponential-square-total utilitarianism is not certainly completely wrong, then the same argument as above would show it quickly dominating as population increases.
Or what about 3^^^3 average utilitarianism - which is simply average utilitarianism, multiplied by 3^^^3? Obviously that example is silly - we know that rescaling shouldn't change anything about the theory. But similarly, dividing total utilitarianism by 3^^^3 shouldn't change anything, so total utilitarianism's scaling advantage is illusory.
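A small sketch of that invariance (toy numbers): multiplying a theory's utility function by any positive constant - 3^^^3 or otherwise - leaves its ranking of options untouched:

```python
# Multiplying a moral theory's utility function by a positive constant leaves
# its ranking of options unchanged, so a theory cannot "win" merely by having
# numbers that grow faster: that growth is an artefact of the chosen scale.
options = {"small_happy_world": (100, 10),       # (average utility, population)
           "huge_poor_world":   (1,   10**15)}

def average_utilitarianism(avg, pop):
    return avg

for constant in (1, 3**3**3):   # a stand-in for "3^^^3"; any positive constant
    ranking = sorted(options,
                     key=lambda o: constant * average_utilitarianism(*options[o]),
                     reverse=True)
    print(constant, ranking)    # the ranking is identical for every constant
```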
As mentioned before, comparing different utility functions is a hard and subtle process. One method that seems to have surprisingly nice properties (to such an extent that I recommend always using it as a first try) is to normalise the lowest attainable utility to zero, the highest attainable utility to one, multiply by the weight you give to the theory, and then add the normalised utilities together.
For instance, assume you equally valued average utilitarianism and total utilitarianism, giving them both weights of one (and you had solved all the definitional problems above). Among the choices you were facing, the worst outcome for both theories is an empty world. The best outcome for average utilitarianism would be ten people with an average "utility" of 100. The best outcome for total utilitarianism would be a quadrillion people with an average "utility" of 1. Then how would either of those compare to ten trillion people with an average utility of 60? Well, the normalised utility of this for the average utilitarian is 0.6, while for the total utilitarian it's also 60/100=0.6, and 0.6+0.6=1.2. This is better than the utility for the small world (1+10^-12) or the large world (0.01+1), so it beats either of the extremal choices.
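Here is the same arithmetic as a short sketch (the worlds and equal weights are exactly those of the example above; since the worst attainable outcome, the empty world, scores zero for both theories, the normalisation reduces to dividing by each theory's best attainable value):

```python
# Normalise each theory to [0, 1] over attainable outcomes (empty world = 0,
# best attainable world = 1), weight both theories equally, and sum.
worlds = {
    "small":      (100, 10),           # (average utility, population)
    "large":      (1,   10**15),       # a quadrillion people
    "compromise": (60,  10 * 10**12),  # ten trillion people
}

def average_util(avg, pop):
    return avg

def total_util(avg, pop):
    return avg * pop

best_avg = max(average_util(*w) for w in worlds.values())
best_total = max(total_util(*w) for w in worlds.values())

for name, (avg, pop) in worlds.items():
    score = (average_util(avg, pop) / best_avg      # weight 1 for average utilitarianism
             + total_util(avg, pop) / best_total)   # weight 1 for total utilitarianism
    print(f"{name}: {score}")
# small      -> 1 + 10^-12
# large      -> 0.01 + 1 = 1.01
# compromise -> 0.6 + 0.6 = 1.2, the best of the three
```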
Extending this method, we can bring in such theories as exponential-square-total utilitarianism (probably with small weights!), without needing to fear that it will swamp all other moral theories. And with this normalisation (or similar ones), even small weights to moral theories such as "culture has some intrinsic value" will often prevent total utilitarianism from walking away with all of the marbles.
(Population) ethics is still hard
What is the conclusion? At Less Wrong, we're used to realising that ethics is hard, that value is fragile, that there is no single easy moral theory to safely program the AI with. But it seemed for a while that population ethics might be different - that there may be natural and easy ways to determine what to do with large groups, even though we couldn't decide what to do with individuals. I've argued strongly here that it's not the case - that population ethics remains hard, that we have to figure out what theory we want to have without access to easy shortcuts.
But in another way it's liberating. To those who are mainly total utilitarians but internally doubt that a world with infinitely many barely happy people surrounded by nothing but "muzak and potatoes" is really among the best of the best - well, you don't have to convince yourself of that. You may choose to believe it, or you may choose not to. No voice in the sky or in the math will force you either way. You can start putting together a moral theory that incorporates all your moral intuitions - those that drove you to total utilitarianism, and those that don't quite fit in that framework.