In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and to create/give birth to another being of comparable happiness (or preference satisfaction or welfare). In fact, if one can kill a billion people to create a billion and one, one is morally compelled to do so. And this is true for real people, not just thought-experiment people - living people with dreams, aspirations, grudges and annoying or endearing quirks. To avoid causing extra pain to those left behind, it is better to kill off whole families and communities, so that no one is left to mourn the dead. In fact, the most morally compelling act would be to kill off the whole of the human species and replace it with a slightly larger population.

We have many real-world analogues to this thought experiment. For instance, it seems that there is only a small difference between the happiness of richer nations and poorer nations, while the first consume many more resources than the second. Hence to increase utility we should simply kill off all the rich, and let the poor multiply to take their place (continually bumping off any of the poor that gets too rich). Of course, the rich world also produces most of the farming surplus and the technological innovation that allow us to support a larger population. So we should aim to kill everyone in the rich world apart from farmers and scientists - and enough support staff to keep these professions running. (Carl Shulman correctly points out that we may require most of the rest of the economy as "support staff". Still, it's very likely that we could kill off a significant segment of the population - those with the highest consumption relative to their impact on farming and science - and still "improve" the situation.)

Even if it turns out to be problematic to implement in practice, a true total utilitarian should be thinking: "I really, really wish there were a way to do targeted killing of many people in the USA, Europe and Japan, large parts of Asia and Latin America and some parts of Africa - it makes me sick to the stomach to think that I can't do that!" Or maybe: "I really, really wish I could make everyone much poorer without affecting the size of the economy - I wake up at night with nightmares because these people remain above the poverty line!"

I won't belabour the point. I find those actions personally repellent, and I believe that nearly everyone finds them somewhat repellent or at least did so at some point in their past. This doesn't mean that it's the wrong thing to do - after all, the accepted answer to the torture vs dust speck dilemma feels intuitively wrong, at least the first time. It does mean, however, that there must be very strong countervailing arguments to balance out this initial repulsion (maybe even a mathematical theorem). For without that... how to justify all this killing?

Hence for the rest of this post, I'll be arguing that total utilitarianism is built on a foundation of dust, and thus provides no reason to go against your initial intuitive judgement in these problems. The points will be:

  1. Bayesianism and the fact that you should follow a utility function in no way compel you towards total utilitarianism. The similarity in names does not mean the concepts are on similarly rigorous foundations.
  2. Total utilitarianism is neither a simple, nor an elegant theory. In fact, it is under-defined and arbitrary.
  3. The most compelling argument for total utilitarianism (basically the one that establishes the repugnant conclusion) is a very long chain of imperfect reasoning, so there is no reason for the conclusion to be solid.
  4. Considering the preferences of non-existent beings does not establish total utilitarianism.
  5. When considering competing moral theories, total utilitarianism does not "win by default" thanks to its large values as the population increases.
  6. Population ethics is hard, just as normal ethics is.

 

A utility function does not compel total (or average) utilitarianism

There are strong reasons to suspect that the best decision process is one that maximises expected utility for a particular utility function. Any process that does not do so leaves itself open to being money-pumped or taken advantage of. This point has been reiterated again and again on Less Wrong, and rightly so.

Your utility function must be over states of the universe - but that's the only restriction. The theorem says nothing further about the content of your utility function. If you prefer a world with a trillion ecstatic super-humans to one with a septillion subsistence farmers - or vice versa - then as long as you maximise your expected utility, the money pumps can't touch you, and the standard Bayesian arguments don't influence you to change your mind. Your values are fully rigorous.
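
For reference, the result being appealed to is roughly the following (a sketch of the von Neumann-Morgenstern theorem; nothing in it is specific to population ethics):

    A \succeq B \iff \mathbb{E}_A[U] \ge \mathbb{E}_B[U], \qquad U \text{ unique only up to } U \mapsto aU + b,\ a > 0.

The theorem fixes the form of the decision rule, not the content of U: total welfare, average welfare, or "the welfare of currently existing people, with a birth-death asymmetry" are all equally admissible.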

For instance, in the torture vs dust speck scenario, average utilitarianism also compels you to choose torture, as do a host of other possible utility functions. A lot of arguments around this subject, which may implicitly feel to be in favour of total utilitarianism, turn out to be nothing of the sort. For instance, avoiding scope insensitivity does not compel you to total utilitarianism, and you can perfectly well allow birth-death asymmetries or similar intuitions while remaining an expected utility maximiser.

 

Total utilitarianism is neither simple nor elegant, but arbitrary

Total utilitarianism is defined as maximising the sum of everyone's individual utility function. That's a simple definition. But what are these individual utility functions? Do people act like expected utility maximisers? In a word... no. In another word... NO. In yet another word... NO!

So what are these utilities? Are they the utility that the individuals "should have"? According to what and whose criteria? Is it "welfare"? How is that defined? Is it happiness? Again, how is that defined? Is it preferences? On what scale? And what if the individual disagrees with the utility they are supposed to have? What if their revealed preferences are different again?

There are (various different) ways to start resolving these problems, and philosophers have spent a lot of ink and time doing so. The point remains that total utilitarianism cannot claim to be a simple theory, if the objects that it sums over are so poorly and controversially defined.

And the sum itself is a huge problem. There is no natural scale on which to compare utility functions. Divide one utility function by a billion, multiply the other by e^π, and they are still perfectly valid utility functions. In a study group at the FHI, we've been looking at various ways of combining utility functions - equivalently, of doing interpersonal utility comparisons (IUC). It turns out to be very hard; there seems to be no natural way of doing this, and a lot has also been written about it, concluding little. Unless your theory comes with a particular IUC method, the only way of summing these utilities is to make an essentially arbitrary choice for each individual before summing. Thus standard total utilitarianism is an arbitrary sum of ill-defined, non-natural objects.
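
In symbols, the problem is that the quantity to be maximised,

    U_{\text{total}} = \sum_i \big( a_i\,u_i + b_i \big), \qquad a_i > 0,

depends on scale factors a_i and offsets b_i that the theory does not supply: each individual utility u_i is only defined up to a positive affine transformation, so every choice of the a_i and b_i yields an equally "valid" total, and different choices rank outcomes differently. (This is a sketch of the difficulty, not of any particular IUC proposal.)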

Why then is it so popular? Well, one reason is that there are models that make use of something like total utilitarianism to great effect. Classical economic theory, for instance, models everyone as perfectly rational expected utility maximisers. It gives good predictions - but it remains a model, with a domain of validity. You wouldn't conclude from that economic model that, say, mental illnesses don't exist. Similarly, modelling each life as having the same value and maximising expected lives saved is sensible and intuitive in many scenarios - but not necessarily all.

Maybe if we had a bit more information about the affected populations, we could use a more sophisticated model, such as one incorporating quality-adjusted life years (QALYs). Or maybe we could let other factors affect our thinking - what if we had to choose between saving a population of 1000 versus a population of 1001, with the same average QALYs, but where the first set contained the entire Awá tribe/culture of 300 people, and the second was made up of representatives from much larger, much more culturally replaceable ethnic groups? Should we let that influence our decision? Well, maybe we should, maybe we shouldn't, but it would be wrong to say "well, I would really like to save the Awá, but the model I settled on earlier won't allow me to, so I'd best follow the model". The models are there precisely to model our moral intuitions (the clue is in the name), not to freeze them.

 

The repugnant conclusion is at the end of a flimsy chain

There is a seemingly sound argument for the repugnant conclusion, which goes some way towards making total utilitarianism plausible. It goes like this:

  1. Start with a population of very happy/utilitied/welfared/preference satisfied people.
  2. Add other people whose lives are worth living, but whose average "utility" is less than that of the initial population.
  3. Redistribute "utility" in an egalitarian way across the whole population, increasing the average a little as you do so (but making sure the top rank have their utility lowered).
  4. Repeat as often as required.
  5. End up with a huge population whose lives are barely worth living.

If all these steps increase the quality of the outcome (and it seems intuitively that they do), then the end state must be better than the starting state, agreeing with total utilitarianism. So, what could go wrong with this reasoning? Well, as seen before, the term "utility" is very much undefined, as is its scale - hence "egalitarian" is extremely undefined too. So this argument is not mathematically precise; its rigour is illusory. And when you recast the argument in qualitative terms, as you must, it becomes much weaker.
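
For concreteness, here is the chain with made-up numbers (purely illustrative - and note that the real argument cannot actually be run on numbers like these, for the reasons just given):

  1. Population A: 10 people at "utility" 100 each (total 1,000; average 100).
  2. Population A+: add 100 extra people at 10 each (total 2,000; average ≈ 18).
  3. Population B: redistribute to 110 people at 19 each (total 2,090; average 19 - slightly up, but the original ten have dropped from 100 to 19).
  4. Iterate: each round adds a poorer group and then "equalises upwards", ending at Z: billions of people at utility 1.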

Going through the iteration, there will come a point when the human world is going to lose its last anime, its last opera, its last copy of the Lord of the Rings, its last mathematics, its last online discussion board, its last football game - anything that might cause more-than-appropriate enjoyment. At that stage, would you be entirely sure that the loss was worthwhile, in exchange for a weakly defined "more equal" society? More to the point, would you be sure that, when iterating this process billions of times, every redistribution will be an improvement? This is a conjunctive statement, so you have to be nearly entirely certain of every link in the chain if you want to believe the outcome. And, to reiterate, these links cannot be reduced to simple mathematical statements - you have to be certain that each step is qualitatively better than the previous one.

And you also have to be certain that your theory does not allow path dependency. One can take the perfectly valid position that "If there were an existing poorer population, then the right thing to do would be to redistribute wealth, and thus lose the last copy of Akira. However, currently there is no existing poor population, hence I would oppose it coming into being, precisely because it would result in the loss of Akira." You can reject this type of reasoning, and a variety of others that block the repugnant conclusion at some stage of the chain (the Stanford Encyclopaedia of Philosophy has a good entry on the Repugnant Conclusion and the arguments surrounding it). But most reasons for doing so already presuppose total utilitarianism. In that case, you cannot use the above as an argument for your theory.

 

Hypothetical beings have hypothetical (and complicated) things to say to you

There is another major strand of argument for total utilitarianism, which claims that we owe it to non-existent beings to satisfy their preferences, that they would prefer to exist rather than remain non-existent, and hence we should bring them into existence. How does this argument fare?

First of all, it should be emphasised that one is free to accept or reject that argument without any fear of inconsistency. If one maintains that never-existent beings have no relevant preferences, then one will never stumble over a problem. They don't exist, they can't make decisions, they can't contradict anything. In order to raise them to the point where their decisions are relevant, one has to raise them to existence, in reality or in simulation. By the time they can answer "would you like to exist?", they already do, so you are talking about whether or not to kill them, not whether or not to let them exist.

But secondly, it seems that the "non-existent beings" argument is often advanced for the sole purpose of arguing for total utilitarianism, rather than as a defensible position in its own right. Rarely are its implications analysed. What would a proper theory of non-existent beings look like?

Well, for a start, the whole happiness/utility/preference problem comes back with extra sting. It's hard enough to make a utility function out of real-world people, but how do we do so with hypothetical people? Is it an essentially arbitrary process (dependent entirely on "which types of people we think of first"), or is it done properly, teasing out the "choices" and "life experiences" of the hypotheticals? In the latter case, if we do it in too much detail, we could argue that we've already created the being in simulation, so it comes back to the death issue.

But imagine that we've somehow extracted a utility function from the preferences of non-existent beings. Apparently, they would prefer to exist rather than not exist. But is this true? There are many people in the world who would prefer not to commit suicide, but would not mind much if external events ended their lives - they cling to life as a habit. Presumably non-existent versions of them "would not mind" remaining non-existent.

Even for those that would prefer to exist, we can ask questions about the intensity of that desire, and how it compares with their other desires. For instance, among these hypothetical beings, some would be mothers of hypothetical infants, leaders of hypothetical religions, or inmates of hypothetical prisons, and would only prefer to exist if they could bring (or couldn't bring) the rest of their hypothetical world with them. But this is ridiculous - we can't bring the hypothetical world with them; they would grow up in ours - so are we only really talking about the preferences of hypothetical babies, or hypothetical (and non-conscious) foetuses?

If we do look at adults, bracketing the issue above, then we get some who would prefer not to exist so long as certain others do - or conversely, who would prefer not to exist so long as certain others also don't. How should we take that into account? Assuming the universe is infinite, any hypothetical being would exist somewhere. Is mere existence enough, or do we have to have a large measure or density of existence? Do we need them to exist close to us? Are their own preferences relevant - i.e. do we only have a duty to bring into the world those beings that would desire to exist in multiple copies everywhere? Or do we feel these already have "enough existence", and select the under-counted beings instead? What if very few hypothetical beings are total utilitarians - is that relevant?

On a more personal note, every time we make a decision, we eliminate a particular being. We can no longer be the person who took the other job offer, or read the other book at that time and place. As these differences accumulate, we diverge quite a bit from what we could have been. When we do so, do we feel that we're killing off these extra hypothetical beings? Why not? Should we be compelled to lead double lives, assuming two (or more) completely separate identities, to increase the number of beings in the world? If not, why not?

These are some of the questions that a theory of non-existent beings would have to grapple with, before it can become an "obvious" argument for total utilitarianism.

 

Moral uncertainty: total utilitarianism doesn't win by default

An argument that I have met occasionally is that while other ethical theories - such as average utilitarianism, birth-death asymmetry, path dependence, a preference for the non-loss of culture, etc. - may have some validity, total utilitarianism wins as the population increases, because the others don't scale in the same way. By the time we reach the trillion trillion trillion mark, total utilitarianism will completely dominate, even if we gave it little weight at the beginning.

But this is the wrong way to compare competing moral theories. Just as different people's utilities don't have a common scale, different moral theories' utilities don't have a common scale. For instance, would you say that square-total utilitarianism is certainly wrong? This theory is simply total utilitarianism further multiplied by the population; it would correspond roughly to the number of connections between people. Or what about exponential-square-total utilitarianism? This would correspond roughly to the set of possible connections between people. As long as we think that exponential-square-total utilitarianism is not certainly completely wrong, the same argument as above would show it quickly dominating as population increases.

Or what about 3^^^3 average utilitarianism - which is simply average utilitarianism, multiplied by 3^^^3? Obviously that example is silly - we know that rescaling shouldn't change anything about the theory. But similarly, dividing total utilitarianism by 3^^^3 shouldn't change anything, so total utilitarianism's scaling advantage is illusory.
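
In symbols, writing n for the population size and ū for the average welfare (and assuming, generously, that the definitional problems above have been solved):

    U_{\text{avg}} = \bar{u}, \qquad U_{\text{total}} = n\,\bar{u}, \qquad U_{\text{sq-total}} = n^2\,\bar{u}, \qquad \arg\max_x\, c\,U(x) = \arg\max_x U(x) \ \text{for any } c > 0.

The last identity is why rescaling by 3^^^3 changes nothing; the first three are why "it grows faster with n" is true of many theories besides total utilitarianism.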

As mentioned before, comparing different utility functions is a hard and subtle process. One method that seems to have surprisingly nice properties (to such an extent that I recommend always using it as a first try) is to normalise the lowest attainable utility to zero and the highest attainable utility to one, multiply by the weight you give to the theory, and then add the normalised utilities together.

For instance, assume you equally valued average utilitarianism and total utilitarianism, giving them both weights of one (and that you had solved all the definitional problems above). Among the choices you were facing, the worst outcome for both theories is an empty world. The best outcome for average utilitarianism would be ten people with an average "utility" of 100. The best outcome for total utilitarianism would be a quadrillion people with an average "utility" of 1. Then how would either of those compare to ten trillion people with an average utility of 60? Well, the normalised utility of this for the average utilitarian is 0.6, while for the total utilitarian it's also 60/100 = 0.6, and 0.6 + 0.6 = 1.2. This is better than the utility of the small world (1 + 10^-12) or the large world (0.01 + 1 = 1.01), so it beats either of the extremal choices.
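
Here is the bookkeeping as a short sketch (illustrative code only: the options, population figures and "utility" values are the made-up numbers from the example above, and the two lambdas stand in for theories whose real content is, as argued earlier, far from settled):

    # Sketch of the moral-uncertainty normalisation described above:
    # normalise each theory to [0, 1] over the attainable options,
    # weight it, and add the normalised scores together.

    options = {                      # option -> (population, average "utility")
        "empty world":  (0, 0),
        "small world":  (10, 100),
        "large world":  (10**15, 1),
        "medium world": (10**13, 60),
    }

    theories = {                     # theory -> (weight, value function)
        "average utilitarianism": (1.0, lambda n, avg: avg),
        "total utilitarianism":   (1.0, lambda n, avg: n * avg),
    }

    def combined_score(option):
        """Weighted sum of each theory's [0, 1]-normalised score for an option."""
        score = 0.0
        for weight, value in theories.values():
            values = [value(n, avg) for n, avg in options.values()]
            lo, hi = min(values), max(values)
            score += weight * (value(*options[option]) - lo) / (hi - lo)
        return score

    for name in options:
        print(name, combined_score(name))
    # medium world scores 0.6 + 0.6 = 1.2, beating the small world (1 + 10^-12)
    # and the large world (0.01 + 1 = 1.01).

With these numbers the medium world wins, as in the text; adding further theories (say, exponential-square-total utilitarianism with a tiny weight) is a one-line change, and no theory can swamp the others, since each contributes at most its weight.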

Extending this method, we can bring in such theories as exponential-square-total utilitarianism (probably with small weights!) without needing to fear that they will swamp all other moral theories. And with this normalisation (or similar ones), even small weights on moral theories such as "culture has some intrinsic value" will often prevent total utilitarianism from walking away with all of the marbles.

 

(Population) ethics is still hard

What is the conclusion? At Less Wrong, we're used to realising that ethics is hard, that value is fragile, that there is no single easy moral theory to safely program the AI with. But it seemed for a while that population ethics might be different - that there might be natural and easy ways to determine what to do with large groups, even though we couldn't decide what to do with individuals. I've argued strongly here that this is not the case - that population ethics remains hard, and that we have to figure out what theory we want to have without access to easy shortcuts.

But in another way it's liberating. To those who are mainly total utilitarians but internally doubt that a world with infinitely many barely happy people surrounded by nothing but "muzak and potatoes" is really among the best of the best - well, you don't have to convince yourself of that. You may choose to believe it, or you may choose not to. No voice in the sky or in the math will force you either way. You can start putting together a moral theory that incorporates all your moral intuitions - those that drove you to total utilitarianism, and those that don't quite fit in that framework.

Comments

For instance, it seems that there is only a small difference between the happiness of richer nations and poorer nations, while the first consume many more resources than the second. Hence to increase utility we should simply kill off all the rich, and let the poor multiply to take their place (continually bumping off any of the poor that gets too rich).

This empirical claim seems ludicrously wrong, which I find distracting from the ethical claims. Most people in rich countries (except for those unable or unwilling to work or produce kids who will) are increasing the rate of technological advance by creating demand for improved versions of products, paying taxes, contributing to the above-average local political cultures, and similar. Such advance dominates resource consumption in affecting the welfare of the global poor (and long-term welfare of future people). They make charitable donations or buy products that enrich people like Bill Gates and Warren Buffett who make highly effective donations, and pay taxes for international aid.

The scientists and farmers use thousands of products and infrastructure provided by the rest of society, and this neglects industry, resource extract...

Lukas_Gloor:
Whatever the piece assumes, I don't think it's preference utilitarianism because then the first sentence doesn't make sense: Assuming most people have a preference to go on living, as well as various other preferences for the future, then killing them would violate all these preferences, and simply creating a new, equally happy being would still leave you with less overall utility, because all the unsatisfied preferences count negatively. (Or is there a version of preference utilitarianism where unsatisfied preferences don't count negatively?) The being would have to be substantially happier, or you'd need a lot more beings to make up for the unsatisfied preferences caused by the killing. Unless we're talking about beings that live "in the moment", where their preferences correspond to momentary hedonism. Peter Singer wrote a chapter on killing and replaceability in Practical Ethics. His view is prior-existence, not total preference utilitarianism, but the points on replaceability apply to both.
Stuart_Armstrong:
Will add a link. But I haven't yet seen my particular angle of attack on the repugnant conclusion, and it isn't in the Stanford Encyclopaedia. The existence/non-existence seems to have more study, though.

There is no natural scale on which to compare utility functions. [...] Unless your theory comes with a particular [interpersonal utility comparison] method, the only way of summing these utilities is to make an essentially arbitrary choice for each individual before summing. Thus standard total utilitarianism is an arbitrary sum of ill-defined, non-natural objects.

This, in my opinion, is by itself a decisive argument against utilitarianism. Without these ghostly "utilities" that are supposed to be measurable and comparable interpersonally, the whole concept doesn't even begin to make sense. And yet the problem is routinely ignored, nonchalantly, even here, where people pride themselves on fearless and consistent reductionism.

Note that the problem is much more fundamental than just the mathematical difficulties and counter-intuitive implications of formal utilitarian theories. Even if there were no such problems, it would still be the case that the whole theory rests on an entirely imaginary foundation. Ultimately, it's a system that postulates some metaphysical entities and a categorical moral imperative stated in terms of the supposed state of these entitie...

Jayson_Virissimo:
I asked about this before in the context of one of Julia Galef's posts about utilitarian puzzles and received several responses. What is your evaluation of the responses (personally, I was very underwhelmed)?
Vladimir_M:
The only reasonable attempt at a response in that sub-thread is this comment. I don't think the argument works, though. The problem is not just disagreement between different people's intuitions, but also the fact that humans don't do anything like utility comparisons when it comes to decisions that affect other people. What people do in reality is intuitive folk ethics, which is basically virtue ethics, and has very little concern with utility comparisons. That said, there are indeed some intuitions about utility comparison, but they are far too weak, underspecified, and inconsistent to serve as a basis for extracting an interpersonal utility function, even if we ignore disagreements between people.
Will_Sawin:
Intuitive utilitarian ethics are very helpful in everyday life.
Salemicus:
There is the oft-repeated anecdote of the utilitarian moral philosopher weighing up whether to accept a job at Columbia. It would mean more money, but it would uproot his family, though it might help his career... a familiar kind of moral dilemma. Asking his colleague for advice, he got told "Just maximise total utility." "Come on," he is supposed to have replied, "this is serious!" I struggle to think of any moral dilemma I have faced where utilitarian ethics even provide a practical framework for addressing the problem, let alone a potential answer.
gwern:
Sauce: http://lesswrong.com/lw/890/rationality_quotes_november_2011/5aq7
Will_Newsome:
That anecdote is about a decision theorist, not a moral philosopher. The dilemma you describe is a decision theoretic one, not a moral utilitarian one.
Will_Sawin:
Writing out costs and benefits is a technique that is sometimes helpful.
Salemicus:
Sure, but "costs" and "benefits" are themselves value-laden terms, which depend on the ethical framework you are using. And then comparing the costs and the benefits is itself value-laden. In other words, people using non-utilitarian ethics can get plenty of value out of writing down costs and benefits. And people using utilitarian ethics don't necessarily get much value (doesn't really help the philosopher in the anecdote). This is therefore not an example of how utilitarian ethics are useful.
Will_Sawin:
Writing down costs and benefits is clearly an application of consequentialist ethics, unless things are so muddied that any action might be an example of any ethic. Consequentialist ethics need not be utilitarian, true, but they are usually pretty close to utilitarian. Certainly closer to utilitarianism than to virtue ethics.
Salemicus:
No, because "costs" and "benefits" are value-laden terms. Suppose I am facing a standard moral dilemma; should I give my brother proper funerary rites, even though the city's ruler has forbidden it. So I take your advice and write down costs and benefits. Costs - breaching my duty to obey the law, punishment for me, possible reigniting of the city's civil war. Benefits - upholding my duty to my family, proper funeral rites for my brother, restored honour. By writing this down I haven't committed to any ethical system, all I've done is clarify what's at stake. For example, if I'm a deontologist, perhaps this helps clarify that it comes down to duty to the law versus duty to my family. If I'm a virtue ethicist, perhaps this shows it's about whether I want to be the kind of person who is loyal to their family above tawdry concerns of politics, or the kind of person who is willing to put their city above petty personal concerns. This even works if I'm just an egoist with no ethics; is the suffering of being imprisoned in a cave greater or less than the suffering I'll experience knowing my brother's corpse is being eaten by crows? Ironically, the only person this doesn't help is the utilitarian, because he has absolutely no way of comparing the costs and the benefits - "maximise utility" is a slogan, not a procedure.
Will_Sawin:
What are you arguing here? First you argue that "just maximize utility" is not enough to make a decision. This is of course true, since utilitarianism is not a fully specified theory. There are many different utilitarian systems of ethics, just as there are many different deontological ethics and many different egoist ethics. Second you are arguing that working out the costs and benefits is not an indicator of consequentialism. Perhaps this is not perfectly true, but if you follow these arguments to their conclusion then basically nothing is an indicator of any ethical system. Writing a list of costs and benefits, as these terms are usually understood, focuses one's attention on the consequences of the action rather than the reasons for the action (as the virtue ethicists care about) or the rules mandating or forbidding an action (as the deontologists care about). Yes, the users of different ethical theories can use pretty much any tool to help them decide, but some tools are more useful for some theories because they push your thinking into the directions that theory considers relevant. Are you arguing anything else?
Vladimir_M:
Could you provide some concrete examples?
Will_Sawin:
I am thinking about petty personal disputes, say if one person finds something that another person does annoying. A common gut reaction is to immediately start staking territory about what is just and what is virtuous and so on, while the correct thing to do is focus on concrete benefits and costs of actions. The main reason this is better is not because it maximizes utility but because it minimizes argumentativeness. Another good example is competition for a resource. Sometimes one feels like one deserves a fair share and this is very important, but if you have no special need for it, nor are there significant diminishing marginal returns, then it's really not that big of a deal. In general, intuitive deontological tendencies can be jerks sometimes, and utilitarianism fights that.
A1987dM:
http://lesswrong.com/lw/b4f/sotw_check_consequentialism/
Viliam_Bur:
Thanks for the link, I am very underwhelmed too. If I understand it correctly, one suggestion is equivalent to choosing some X and re-scaling everyone's utility function so that X has value 1. The obvious problem is the arbitrary choice of X, and the fact that in some people's original scale X may have a positive, negative, or zero value. The other suggestion is equivalent to choosing a hypothetical person P with infinite empathy towards all people, and using the utility function of P as the absolute utility. I am not sure about this, but it seems to me that the result depends on P's own preferences, and this cannot be fixed because without preferences there could be no empathy.
private_messaging:
Yes. To be honest, it looks like the local version of reductionism takes 'everything is reducible' in a declarative sense, declaring that the concepts it uses are reducible regardless of their actual reducibility.
David_Gerard:
Greedy reductionism.
private_messaging:
Thanks! That's spot on. It's what I think much of those 'utility functions' here are - the number of paperclips in the universe, too. I haven't seen anything like that reduced to a formal definition of any kind. The way humans actually decide on actions is by evaluating the world-difference that the action causes in a world-model, everything being very partial depending on available time. Probabilities are rarely possible to employ in the world-model because the combinatorial space explodes really hard (also, Bayesian propagation on arbitrary graphs is NP-complete, in a very practical way of being computationally expensive). Hence there isn't some utility function deep inside governing the choices; doing one's best is mostly about putting limited computing time to its best use. Then there's some odd use of abstractions - like, every agent can be represented with a utility function, therefore whatever we say about utilities is relevant. Never mind that this utility function is the trivial one - 1 for doing what the agent chooses, 0 otherwise - and everything just gets tautological.
Ghatanathoah:
I wonder if I am misunderstanding what you are asking, because interpersonal utility comparison seems like an easy thing that people do every day, using our inborn systems for sympathy and empathy. When I am trying to make a decision that involves the conflicting desires of myself and another person; I generally use empathy to put myself in their shoes and try to think about desires that I have that are probably similar to theirs. Then I compare how strong those two desires of mine are and base my decision on that. Now, obviously I don't make all ethical decisions like that, there are many where I just follow common rules of thumb. But I do make some decisions in this fashion, and it seems quite workable, the more fair-minded of my acquaintances don't really complain about it unless they think I've made a mistake. Obviously it has scaling problems when attempting to base any type of utilitarian ethics on it, but I don't think they are insurmountable. Now, of course you could object that this method is unreliable, and ask whether I really know for sure if other people's desires are that similar to mine. But this seems to me to just be a variant of the age-old problem of skepticism and doesn't really deserve any more attention than the possibility that all the people I meet are illusions created by an evil demon. It's infinitesimally possible that everyone I know doesn't really have mental states similar to mine at all, that in fact they are all really robot drones controlled by a non-conscious AI that is basing their behavior on a giant lookup table. But it seems much more likely that other people are conscious human beings with mental states similar to mine that can be modeled and compared via empathy, and that this allows me to compare their utilities. In fact, it's hard to understand how empathy and sympathy could have evolved if it they weren't reasonably good at interpersonal utility comparison. If interpersonal utility comparison was truly impossible then an
Lukas_Gloor:
You mean against preference-utilitarianism. The vast majority of utilitarians I know are hedonistic utilitarians, where this criticism doesn't apply at all. (For some reason LW seems to be totally focused on preference-utilitarianism, as I've noticed by now.) As for the criticism itself: I agree! Preference-utilitarians can come up with sensible estimates and intuitive judgements, but when you actually try to show that in theory there is one right answer, you just find a huge mess.
Jayson_Virissimo:
I agree. I'm fairly confident that, within the next several decades, we will have the technology to accurately measure and sum hedons and that hedonic utilitarianism can escape the conceptual problems inherent in preference utilitarianism. On the other hand, I do not want to maximize (my) hedons (for these kinds of reasons, among others).
CarlShulman:
Err...what? Technology will tell you things about how brains (and computer programs) vary, but not which differences to count as "more pleasure" or "less pleasure." If evaluations of pleasure happen over 10x as many neurons is there 10x as much pleasure? Or is it the causal-functional role pleasure plays in determining the behavior of a body? What if we connect many brains or programs to different sorts of virtual bodies? Probabilistically? A rule to get a cardinal measure of pleasure across brains is going to require almost as much specification as a broader preference measure. Dualists can think of this as guesstimating "psychophysical laws" and physicalists can think of it as seeking reflective equilibrium in our stances towards different physical systems, but it's not going to be "read out" of neuroscience without deciding a bunch of evaluative (or philosophy of mind) questions.
torekp:
Sure, but I don't think we can predict that there will be a lot of room for deciding those philosophy of mind questions whichever way one wants to. One simply has to wait for the research results to come in. With more data to constrain the interpretations, the number and spread of plausible stable reflective equilibria might be very small. I agree with Jayson that it is not mandatory or wise to maximize hedons. And that is because hedons are not the only valuable things. But they do constitute one valuable category. And in seeking them, the total utilitarians are closer to the right approach than the average utilitarians (I will argue in a separate reply).
David_Gerard:
OK, I've got to ask: what's your confidence based in, in detail? It's not clear to me that "sum hedons" even means anything.
Vladimir_M:
Why do you believe that interpersonal comparison of pleasure is straightforward? To me this doesn't seem to be the case.
Lukas_Gloor:
Is intrapersonal comparison possible? Personal boundaries don't matter for hedonistic utilitarianism; they only matter insofar as you may have spatio-temporally connected clusters of hedons (lives). The difficulties in comparison seem to be of an empirical nature, not a fundamental one (unlike the problems with preference-utilitarianism). If we had a good enough theory of consciousness, we could quantitatively describe the possible states of consciousness and their hedonic tones. Or not? One common argument against hedonistic utilitarianism is that there are "different kinds of pleasures", and that they are "incommensurable". But if that were the case, it would be irrational to accept a trade-off of the lowest pleasure of one sort for the highest pleasure of another sort, and no one would actually claim that. So even if pleasures "differ in kind", there'd be an empirical trade-off value based on how pleasant the hedonic states actually are.
Mark_Lu:
Because people are running on similar neural architectures? So all people would likely experience similar pleasure from e.g. some types of food (though not necessarily identical). The more we understand about how different types of pleasure are implemented by the brain, the more precisely we'd be able to tell whether two people are experiencing similar levels/types of pleasure. When we get to brain simulations these might get arbitrarily precise.
Vladimir_M:
You make it sound as if there is some signal or register in the brain whose value represents "pleasure" in a straightforward way. To me it seems much more plausible that "pleasure" reduces to a multitude of variables that can't be aggregated into a single-number index except through some arbitrary convention. This seems to me likely even within a single human mind, let alone when different minds (especially of different species) are compared. That said, I do agree that the foundation of pure hedonic utilitarianism is not as obviously flawed as that of preference utilitarianism. The main problem I see with it is that it implies wireheading as the optimal outcome.
Lukas_Gloor:
Or the utilitronium shockwave, rather. Which doesn't even require minds to wirehead anymore, but simply converts matter into maximally efficient bliss simulations. I used to find this highly counterintuitive, but after thinking about all the absurd implications of valuing preferences instead of actual states of the world, I've come to think of it as a perfectly reasonable thing.
TheOtherDave:
AFAICT, it only does so if we assume that the environment can somehow be relied upon to maintain the wireheading environment optimally even though everyone is wireheading. Failing that assumption, it seems preferable (even under pure hedonic utilitarianism) for some fraction of total experience to be non-wireheading, but instead devoted to maintaining and improving the wireheading environment. (Indeed, it might even be preferable for that fraction to approach 100%, depending on the specifics of the environment..) I suspect that, if that assumption were somehow true, and we somehow knew it was true (I have trouble imagining either scenario, but OK), most humans would willingly wirehead.
shminux:
Hedonistic utilitarianism ("what matters is the aggregate happiness") runs into the same repugnant conclusion.
Lightwave:
But this happens exactly because interpersonal (hedonistic) utility comparison is possible.
shminux:
Right, if you cannot compare utilities, you are safe from the repugnant conclusion. On the other hand, this is not very useful instrumentally, as a functioning society necessarily requires arbitration of individual wants. Thus some utilities must be comparable, even if others might not be. Finding a boundary between the two runs into the standard problem of two nearly identical preferences being qualitatively different.
Lukas_Gloor:
Yes but it doesn't have the problem Vladimir_M described above, and it can bite the bullet in the repugnant conclusion by appealing to personal identity being an illusion. Total hedonistic utilitarianism is quite hard to argue against, actually.
shminux:
As I mentioned in the other reply, I'm not sure how a society of total hedonistic utilitarians would function without running into the issue of nearly identical but incommensurate preferences.
Lukas_Gloor:
Hedonistic utilitarianism is not about preferences at all. It's about maximizing happiness, whatever the reason or substrate for it. The utilitronium shockwave would be the best scenario for total hedonistic utilitarianism.
shminux:
Maybe I misunderstand how total hedonistic utilitarianism works. Don't you ever construct an aggregate utility function?
Lukas_Gloor:
No, nothing of that sort. You just take the surplus of positive hedonic states over negative ones and try to maximize that. Interpersonal boundaries become irrelevant, in fact many hedonistic utilitarians think that the concept of personal identity is an illusion anyway. If you consider utility functions, then that's preference utilitarianism or something else entirely.
shminux:
How is that not an aggregate utility function?
Lukas_Gloor:
Utilons aren't hedons. You have one simple utility function that states you should maximize happiness minus suffering. That's similar to maximizing paperclips, and it avoids the problems discussed above that preference utilitarianism has, namely how interpersonally differing utility functions should be compared to each other.
David_Gerard:
You still seem to be claiming that (a) you can calculate a number for hedons (b) you can do arithmetic on this number. This seems problematic to me for the same reason as doing these things for utilons. How do you actually do (a) or (b)? What is the evidence that this works in practice?
Lukas_Gloor:
I don't claim that I, or anyone else, can do that right now. I'm saying there doesn't seem to be a fundamental reason why that would have to remain impossible forever. Why do you think it will remain impossible forever? As for (b), I don't even see the problem. If (a) works, then you just do simple math after that. In case you're worried about torture and dust specks not working out, check out section VI of this paper. And regarding (a), here's an example that approximates the kind of solutions we seek: In anti-depression drug tests, the groups with the actual drug and the control group have to fill out self-assessments of their subjective experiences, and at the same time their brain activity and behavior is observed. The self-reports correlate with the physical data.
TheOtherDave:
I can't speak for David (or, well, I can't speak for that David), but for my own part, I'm willing to accept for the sake of argument that the happiness/suffering/whatever of individual minds is intersubjectively commensurable, just like I'm willing to accept for the sake of argument that people have "terminal values" which express what they really value, or that there exist "utilons" that are consistently evaluated across all situations, or a variety of other claims, despite having no evidence that any such things actually exist. I'm also willing to assume spherical cows, frictionless pulleys, and perfect vacuums for the sake of argument. But the thing about accepting a claim for the sake of argument is that the argument I'm accepting it for the sake of has to have some payoff that makes accepting it worthwhile. As far as I can tell, the only payoff here is that it lets us conclude "hedonic utilitarianism is better than all other moral philosophies." To me, that payoff doesn't seem worth the bullet you're biting by assuming the existence of intersubjectively commensurable hedons. If someone were to demonstrate a scanning device whose output could be used to calculate a "hedonic score" for a given brain across a wide range of real-world brains and brainstates without first being calibrated against that brain's reference class, and that hedonic score could be used to reliably predict the self-reports of that brain's happiness in a given moment, I would be surprised and would change my mind about both the degree of variation of cognitive experience and the viability of intersubjectively commensurable hedons. If you're claiming this has actually been demonstrated, I'd love to see the study; everything I've ever read about has been significantly narrower than that. If you're merely claiming that it's in principle possible that we live in a world where this could be demonstrated, I agree that it's in principle possible, but see no particular evidence to support the c
David_Gerard:
Well, yes. The main attraction of utilitarianism appears to be that it makes the calculation of what to do easier. But its assumptions appear ungrounded.
David_Gerard:
But what makes you think you can just do simple math on the results? And which simple math - addition, adding the logarithms, taking the average or what? What adds up to normality?
shminux:
Thanks for the link. I still cannot figure out why utilons are not convertible to hedons, and even if they aren't, why isn't a mixed utilon/hedon maximizer susceptible to dutch booking. Maybe I'll look through the logic again.
CarlShulman:
Hedonism doesn't specify what sorts of brain states and physical objects have how much pleasure. There are a bewildering variety of choices to be made in cashing out a rule to classify which systems are how "happy." Just to get started, how much pleasure is there when a computer running simulations of happy human brains is sliced in the ways discussed in this paper?
Lukas_Gloor:
But aren't those empirical difficulties, not fundamental ones? Don't you think there's a fact of the matter that will be discovered if we keep gaining more and more knowledge? Empirical problems can't bring down an ethical theory, but if you can show that there exists a fundamental weighting problem, then that would be valid criticism.
CarlShulman:
What sort of empirical fact would you discover that would resolve that? A detector for happiness radiation? The scenario in that paper is pretty well specified.

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and to create/give birth to another being of comparable happiness.

I stopped reading here. To me, "total utilitarianism" means maximizing the sum of the values of individual lives. There's nothing forcing a total utilitarian to value a life by adding the happiness experienced in each moment of the life, without further regard to how the moments fit together (e.g. whether they fulfill someone's age-old hopes).

In general, people seem to mean different things by "utilitarianism", so any criticism needs to spell out what version of utilitarianism it's attacking, and acknowledge that the particular version of utilitarianism may not include everyone who self-identifies as a utilitarian.

Lukas_Gloor:
But isn't the "values of individual lives" preference-utilitarianism (which often comes as prior-existence instead of "total")? I'm confused; it seems like there are several definitions circulating. I haven't encountered this kind of total utilitarianism on the felicifia utilitarianism forum. The quoted conclusion about killing and replacing people is accurate, according to the definition that is familiar to me.
steven0461:
Not unless the value of a life is proportional to the extent to which the person's preferences are satisfied. What would you call the view I mentioned, if not total utilitarianism?
Lukas_Gloor:
Sounds like total preference-utilitarianism, instead of total hedonistic utilitarianism. Would this view imply that it is good to create beings whose preferences are satisfied? If yes, then it's total PU. If no, then it might be prior-existence PU. The original article doesn't specify explicitly whether it means hedonistic or preference utilitarianism, but the example given about killing only works for hedonistic utilitarianism, which is why I assumed that this is what's meant. However, somewhere else in the article it says something that seems more like preference-utilitarianism again. So something doesn't work out here. As a side note, I've actually never encountered a total preference-utilitarian, only prior-existence ones (like Peter Singer). But it's a consistent position.
steven0461:
But it's not preference utilitarianism. In evaluating whether someone leads a good life, I care about whether they're happy, and I care about whether their preferences are satisfied, but those aren't the only things I care about. For example, I might think it's a bad thing if a person lives the same day over and over again, even if it's what the person wants and it makes the person happy. (Of course, it's a small step from there to concluding it's a bad idea when different people have the same experiences, and that sort of value is hard to incorporate into any total utilitarian framework.)
Will_Newsome:
I think you might want to not call your ethical theory utilitarianism. Aquinas' ethics also emphasize the importance of the common welfare and loving thy neighbor as thyself, yet AFAIK no one calls his ethics utilitarian.
steven0461:
I think maybe the purest statement of utilitarianism is that it pursues "the greatest good for the greatest number". The word "for" is important here. Something that improves your quality of life is good for you. Clippy might think (issues of rigid designators in metaethics aside) that paperclips are good without having a concept of whether they're good for anyone, so he's a consequentialist but not a utilitarian. An egoist has a concept of things being good for people, but chooses only those things that are good for himself, not for the greatest number; so an egoist is also a consequentialist but not a utilitarian. But there's a pretty wide range of possible concepts of what's good for an individual, and I think that entire range should be compatible with the term "utilitarian".
steven0461:
It doesn't make sense to me to count maximization of total X as "utilitarianism" if X is pleasure or if X is preference satisfaction but not if X is some other measure of quality of life. It doesn't seem like that would cut reality at the joints. I don't necessarily hold the position I described, but I think most criticisms of it are misguided, and it's natural enough to deserve a short name.
Lukas_Gloor:
I see, interesting. That means you bring in a notion independent of both the person's experiences and preferences. You bring in a particular view on value (e.g. that life shouldn't be repetitious). I'd just call this a consequentialist theory where the exact values would have to be specified in the description, instead of utilitarianism. But that's just semantics, as you said initially, it's important that we specify what exactly we're talking about.

A utility function does not compel total (or average) utilitarianism

Does anyone actually think this? Thinking that utility functions are the right way to talk about rationality !=> utilitarianism. Or any moral theory, as far as I can tell. I don't think I've seen anyone on LW actually arguing that implication, although I think most would affirm the antecedent.

There is a seemingly sound argument for the repugnant conclusion, which goes some way towards making total utilitarianism plausible. It goes like this... If all these steps increase the qualit

...

What seems to be overlooked in most discussions about total hedonistic utilitarianism is that the proponents often have a specific (Parfitean) view about personal identity, which leads to either empty or open individualism. Based on that, they may hold that it is no more rational to care about one's own future self than it is to care about any other future self. "Killing" a being would then just be failing to let a new moment of consciousness come into existence. And any notions of "preferences" would not really make sense anymore, only instrumentally.

Kaj_Sotala:
I'm increasingly coming to hold this view, where the amount and quality of experience-moments is all that matters, and I'm glad to see someone else spell it out.

A smaller critique of total utilitarianism:

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and to create/give birth to another being of comparable happiness (or preference satisfaction or welfare).

You can just finish there.

(In case the "sufficient cause to reject total utilitarianism" isn't clear: I don't like murder. Total utilitarianism advocates it in all sorts of scenarios that I would not. Therefore, total utilitarianism is Evil.)

Stuart_Armstrong:
:-) I kinda did. The rest was just "there are no strong countervailing reasons to reject that intuition".
wedrifid:
Excellent post then. I kind of stopped after the first line so I'll take your word for the rest!
private_messaging:
Agreed completely. This goes for any utilitarianism where the worth of changing from state A to state B is f(B) - f(A). Morality is about transitions; even hedonism is, as happiness is nothing if it is frozen solid.
A1987dM:
I'd take A and B in the equation above to include momentums as well as positions? :-)
private_messaging:
That's a good escape, but only for specific laws of physics... what do you do about a brain sim on a computer? It has multiple CPUs calculating the next state from the current state in parallel, and it doesn't care how the CPUs are physically implemented, but it does care how many experience-steps it has. edit: i.e. I mean, a transition from one happy state to another state that is equally happy is what a moment of being happy is about. Total utilitarianism boils down to assigning zero utility to an update pass on a happy brain sim. It's completely broken. edit: and with simple workarounds, it boils down to zero utility for switching the current/next state arrays, so that you sit in a loop recalculating the same next state from a static current state.

Well, one reason is that there are models that make use of something like total utilitarianism to great effect. Classical economic theory, for instance, models everyone as perfectly rational expected utility maximisers.

I think you actually slightly understate the case against utilitarianism. Yes, classical economics uses expected utility maximisers - but it prefers to deal with Pareto improvements (or Kaldor-Hicks improvements) rather than try to do interpersonal utility comparisons.

Total utilitarianism is defined as maximising the sum of everyone's individual utility function.

That seems misleading. Most of the time "total utilitarianism" refers to what should actually be called "hedonistic total utilitarianism". And what is maximized there is the surplus of happiness over suffering (positive hedonic states over negative ones), which isn't necessarily synonymous with individual utility functions.

There are three different parameters for the various kinds of utilitarianism: It can either be total or average or pr...

endoself:
There exist people who profess that they would choose to be tortured for the rest of their lives with no chance of happiness rather than being killed instantly, so this intuition could be more than theoretically possible. People tend to be surprised by the extent to which intuitions differ.

Upvoted, but as someone who, without quite being a total utilitarian, at least hopes someone might be able to rescue total utilitarianism, I don't find much to disagree with here. Points 1, 4, 5, and 6 are arguments against certain claims that total utilitarianism should be obviously true, but not arguments that it doesn't happen to be true.

Point 2 states that total utilitarianism won't magically implement itself and requires "technology" rather than philosophy; that is, people have to come up with specific contingent techniques of estimating ut... (read more)

9Lukas_Gloor12y
That's Peter Singer's view, prior-existence instead of total. A problem here seems to be that creating a being in intense suffering would be ethically neutral, and if even the slightest preference for doing so exists, and if there were no resource trade-offs with regard to other preferences, then creating that miserable being would be the right thing to do. One can argue that in the first millisecond after creating the miserable being, one would be obliged to kill it, and that, foreseeing this, one ought not to have created it in the first place. But that seems not very elegant. And one could further imagine creating the being somewhere unreachable, where it's impossible to kill it afterwards. One can avoid this conclusion by axiomatically stating that it is bad to bring into existence a being with a "life not worth living". But that still leaves problems: for one thing, it seems ad hoc, and for another, it would then not matter whether one brings a happy child into existence or one with a neutral life, and that again seems highly counterintuitive. The only way to solve this, as I see it, is to count all unsatisfied preferences negatively. You'd end up with negative total preference-utilitarianism, which usually has quite strong reasons against bringing beings into existence. Depending on how much pre-existing beings want to have children, it wouldn't necessarily entail complete anti-natalism, but the overall goal would at some point be a universe without unsatisfied preferences. Or is there another way out?
6Ghatanathoah11y
A potential major problem with this approach has occurred to me, namely, the fact that people tend to have infinite or near-infinite preferences. We always want more. I don't see anything wrong with that, but it does create headaches for the ethical system under discussion. The human race's insatiable desires make negative total preference-utilitarianism vulnerable to an interesting variant of the various problems of infinity in ethics. Once you've created a person, who then dies, it is impossible to do any more harm. There's already an infinite amount of unsatisfied preferences in the world from their existence and death. Creating more people will result in the same total amount of unsatisfied preferences as before: infinity. This would render negative utilitarianism always indifferent to whether one should create more people, which is obviously not what we want. Even if you posit that our preferences are not infinite, but merely very large, this still runs into problems. I think most people, even anti-natalists, would agree that it is sometimes acceptable to create a new person in order to prevent the suffering of existing people. For instance, I think even an antinatalist would be willing to create one person who will live a life with what an upper-class 21st Century American would consider a "normal" amount of suffering, if doing so would prevent 7 billion people from being tortured for 50 years. But if you posit that the new person has a very large, but not infinite, number of preferences (say, a googol), then it's still possible for the badness of creating them to outweigh the torture of all those people. Again, not what we want. Hedonic negative utilitarianism doesn't have this problem, but it's even worse: it implies we should painlessly kill everyone ASAP! Since most antinatalists I know believe death to be a negative thing, rather than a neutral thing, they must be at least partial preference utilitarians. Now, I'm sure that negative utilitarians have some wa
1Mark_Lu12y
Well, don't existing people have a preference that there not be such creatures? You can have preferences about other people, right?
3Lukas_Gloor12y
Sure, existing people tend to have such preferences. But hypothetically it's possible that they didn't, and the mere possibility is enough to bring down an ethical theory if you can show that it would generate absurd results.
0Mark_Lu12y
This might be one reason why Eliezer talks about morality as a fixed computation. P.S. Also, doesn't the being itself have a preference for not-suffering?
0Ghatanathoah12y
One possibility might be phrasing it as "Maximize preference satisfaction for everyone who exists and ever will exist, but not for everyone who could possibly exist." This captures the intuition that it is bad to create people who have low levels of preference satisfaction, even if they don't exist yet and hence can't object to being created, while preserving the belief that existing people have a right to not create new people whose existence would seriously interfere with their desires. It does this without implying anti-natalism. I admit that the phrasing is a little clunky and needs refinement, and I'm sure a clever enough UFAI could find some way to screw it up, but I think it's a big step towards resolving the issues you point out. EDIT: Another possibility that I thought of is setting "creating new worthwhile lives" and "improving already worthwhile lives" as two separate values that have diminishing returns relative to each other. This is still vulnerable to some forms of repugnant-conclusion-type arguments, but it totally eliminates what I think is the most repugnant aspect of the RC - the idea that a Malthusian society might be morally optimal.
0Scott Alexander12y
Thank you. Apparently total utilitarianism really is scary, and I had routed around it by replacing it with something more useable and assuming that was what everyone else meant when they said "total utilitarianism".
2Stuart_Armstrong12y
Yes, yes, much progress can (and will) be made formalising our intuitions. But we don't need to assume ahead of time that the progress will take the form of "better individual utilities and definition of summation" rather than "other ways of doing population ethics". Yes, the act is not morally neutral in preference utilitarianism. In those cases, we'd have to talk about how many people we'd have to create with satisficiable preferences, to compensate for that one death. You might not give credit for creating potential people, but preference total utilitarianism gives credit for satisfying more preferences - and if creating more people is a way of doing this, then it's in favour. This is not preference total utilitarianism. It's something like "satisfying the maximal number of preferences of currently existing people". In fact, it's closer to preference average utilitarianism (satisfy the current majority preference) than to total utilitarianism (probably not exactly that either; maybe a little more path dependency). Most reasons for rejecting the reasoning that blocks the repugnant conclusion pre-suppose total utilitarianism. Without the double negative: most justifications of the repugnant conclusion pre-suppose total utilitarianism.
4Mark_Lu12y
Shouldn't we then just create people with simpler and easier to satisfy preferences so that there's more preference-satisfying in the world?
3Lukas_Gloor12y
Indeed, that's a very counterintuitive conclusion. It's the reason why most preference-utilitarians I know hold a prior-existence view.
1[anonymous]12y
Even in hedonistic utilitarianism, it is an almost misleading simplification. There are crucial differences between killing a person and not birthing a new one: Most importantly, one is seen as breaking the social covenant of non-violence, while the other is not. One disrupts pre-existing social networks, the other does not. One destroys an experienced educated brain, the other does not. Endorsing one causes social distrust and strife in ways the other does not. A better claim might be: It is morally neutral in hedonistic utilitarianism to create a perfect copy of a person and painlessly and unexpectedly destroy the original. It's a more accurate claim, and I personally would accept it.
0Ghatanathoah10y
These are all practical considerations. Most people believe it is wrong in principle to kill someone and replace them with a being of comparable happiness. You don't see people going around saying: "Look at that moderately happy person. It sure is too bad that it's impractical to kill them and replace them with a slightly happier person. The world would be a lot better if that were possible." I also doubt that an aversion to violence is what prevents people from endorsing replacement either. You don't see people going around saying: "Man, I sure wish that person would get killed in a tornado or a car accident. Then I could replace them without breaking any social covenants." I believe that people reject replacement because they see it as a bad consequence, not because of any practical or deontological considerations. I wholeheartedly endorse such a rejection. The reason that that claim seems acceptable is because, under many understandings of how personal identity works, if a copy of someone exists, they aren't really dead. You killed a piece of them, but there's still another piece left alive. As long as your memories, personality, and values continue to exist you still live. The OP makes it clear that what they mean is that total utilitarianism (hedonic and otherwise) maintains that it is morally neutral to kill someone and replace them with a completely different person who has totally different memories, personality, and values, providing the second person is of comparable happiness to the first. I believe any moral theory that produces this result ought to be rejected.

This would deserve to be on the front page.

5A1987dM12y
I agree. ETA: Also, I expected a post with “(small)” in its title to be much shorter. :-)
2Stuart_Armstrong12y
Well, it did start shorter, then more details just added themselves. Nothing to do with me! :-)
2Stuart_Armstrong12y
Cheers, will move it.

Here's how I see this issue (from a philosophical point of view):

Moral value is, in the most general form, a function of the state of a structure, for lack of a better word. The structure may be just 10 neurons in isolation, for which the moral worth may well be exactly zero, or it may be 7 billion blobs of about 10^11 neurons who communicate with each other, or it may be a lot of data on a hard drive, representing a stored upload.

The moral value of two interconnected structures, in general, does not equal the sum of moral value of each structure (example: whole ... (read more)

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness (or preference satisfaction or welfare).

Just wanted to note that this is too strong a statement. There is no requirement for a 1:1 ratio in "total utilitarianism". You end up with the "repugnant conclusion" to Parfit's "mere addition" argument as long as this ratio is finite (known as "birth-death asymmetry"). For example, one may argue ... (read more)

3Stuart_Armstrong12y
I was thinking more of a total asymmetry rather than a ratio. But yes, if you have a finite ratio, then you have the repugnant conclusion (even though it's not total utilitarianism unless the ratio is 1:1).
2Lukas_Gloor12y
Exactly! I've been pointing this out too. If you assume preference utilitarianism, then killing counts as wrong, at least if the beings you kill want to go on living (or even have detailed future plans). So the replacement only works if you increase the number of the new beings, or make them have more satisfied preferences. The rest of the argument still works, but this is important to point out.

You know, I've felt that examining the dust speck vs torture dilemma or stuff like that, finding a way to derive an intuitively false conclusion from intuitively true premises, and thereby concluding that the conclusion must be true after all (rather than that there's some kind of flaw in the proof you can't see yet) is analogous to seeing a proof that 0 equals 1 or that a hamburger is better than eternal happiness or that no feather is dark, not seeing the mistake in the proof straight away, and thereby concluding that the conclusion must be true. Does anyone else feel the same?

Sure.

But it's not like continuing to endorse my intuitions in the absence of any justification for them, on the assumption that all arguments that run counter to my intuitions, however solid they may seem, must be wrong because my intuitions say so, is noticeably more admirable.

When my intuitions point in one direction and my reason points in another, my preference is to endorse neither direction until I've thought through the problem more carefully. What I find often happens is that on careful thought, my whole understanding of the problem tends to alter, after which I may end up rejecting both of those directions.

1private_messaging12y
Well, what you should do is recognize that such arguments are themselves built entirely out of intuitions, and their validity rests on the conjunction of a significant number of often unstated intuitive assumptions. One should not fall for a cargo-cult imitation of logic. There's no fundamental reason why value should be linear in the number of dust specks; it's nothing but an assumption which may be your personal intuition, but it is still an intuition that lacks any justification whatsoever, and insofar as it is an uncommon intuition, it even lacks the "if it was wrong it would be debunked" sort of justification. There's always the Dunning-Kruger effect. People least capable of moral (or any) reasoning should be expected to think themselves most capable.
2MarkusRamikin12y
Yeah, that has always been my main problem with that scenario. There are different ways to sum multiple sources of something. Consider series vs parallel electrical circuits; the total output depends greatly on how you combine the individual voltage sources (or resistors or whatever). When it comes to suffering, well, suffering only exists in consciousness, and each point of consciousness - each mind involved - experiences its own dust speck individually. There is no conscious mind in that scenario who is directly experiencing the totality of the dust specks and suffers accordingly. It is in no way obvious to me that the "right" way to consider the totality of that suffering is to just add it up. Perhaps it is. But unless I missed something, no one arguing for torture so far has actually shown it (as opposed to just assuming it). Suppose we make this about (what starts as) a single person. Suppose that you, yourself, are going to be copied into all that humongous number of copies. And you are given a choice: before that happens, you will be tortured for 50 years. Or you will be unconscious for 50 years, but after copying each of your copies will get a dust speck in the eye. Either way you get copied, that's not part of the choice. After that, whatever your choice, you will be able to continue with your lives. In that case, I don't care about doing the "right" math that will make people call me rational; I care about being the agent who is happily NOT writhing in pain with 50 years more of it ahead of him. EDIT: come to think of it, assume the copying template is taken from you before the 50 years start, so we don't have to consider memories and lasting psychological effects of torture. My answer remains the same: even if in the future I won't remember the torture, I don't want to go through it.
0TheOtherDave12y
As far as I know, TvDS doesn't assume that value is linear in dust specks. As you say, there are different ways to sum multiple sources of something. In particular, there are many ways to sum the experiences of multiple individuals. For example, the whole problem evaporates if I decide that people's suffering only matters to the extent that I personally know those people. In fact, much less ridiculous problems also evaporate... e.g., in that case I also prefer that thousands of people suffer so that I and my friends can live lives of ease, as long as the suffering hordes are sufficiently far away. It is not obvious to me that I prefer that second way of thinking, though.
4David_Gerard12y
It is arguable (in terms of revealed preferences) that first-worlders typically do prefer that. This requires a slightly non-normative meaning of "prefer", but a very useful one.
3TheOtherDave12y
Oh, absolutely. I chose the example with that in mind. I merely assert that "but that leads to thousands of people suffering!" is not a ridiculous moral problem for people (like me) who reveal such preferences to consider, and it's not obvious that a model that causes the problem to evaporate is one that I endorse.
0private_messaging12y
Well, it sure uses a linearity intuition. 3^^^3 is bigger than the number of distinct states; it's far past the point where you are only adding exactly-duplicated dust-speck experiences, so you could reasonably expect the value to flatten out. One can go perverse and proclaim that one treats duplicates the same, but then if there's a button you could press to replace everyone's mind with the mind of the happiest person, you should press it. I think the stupidity of utilitarianism is the belief that morality is about the state, rather than about the dynamic process and the state transition. A simulation of a pinprick slowed down 1,000,000 times is not ultra-long torture. 'Murder' is a form of irreversible state transition. Morality as it exists is about state transitions, not about states.
0TheOtherDave12y
It isn't clear to me what the phrase "exactly-duplicated" is doing there. Is there a reason to believe that each individual dust-speck-in-eye event is exactly like every other? And if so, what difference does that make? (Relatedly, is there a reason to believe that each individual moment of torture is different from all the others? If it turns out that it's not, does that imply something relevant?) In any case, I certainly agree that one could reasonably expect the negative value of suffering to flatten out no matter how much of it there is. It seems unlikely to me that fifty years of torture is anywhere near the asymptote of that curve, though... for example, I would rather be tortured for fifty years than be tortured for seventy years. But even if it somehow is at the asymptotic limit, we could recast the problem with ten years of torture instead, or five years, or five months, or some other value that is no longer at that limit, and the same questions would arise. So, no, I don't think the TvDS problem depends on intuitions about the linear-additive nature of suffering. (Indeed, the more I think about it the less convinced I am that I have such intuitions, as opposed to approaches-a-limit intuitions. This is perhaps because thinking about it has changed my intuitions.)
-6private_messaging12y
-2Mark_Lu12y
"State" doesn't have to mean "frozen state" or something similar, it could mean "state of the world/universe". E.g. "a state of the universe" in which many people are being tortured includes the torture process in it's description. I think this is how it's normally used.
-5private_messaging12y
0TheOtherDave12y
Agreed that all of these sorts of arguments ultimately rest on different intuitions about morality, which sometimes conflict, or seem to conflict. Agreed that value needn't add linearly, and indeed my intuition is that it probably doesn't. It seems clear to me that if I negatively value something happening, I also negatively value it happening more. That is, for any X I don't want to have happen, it seems I would rather have X happen once than have X happen twice. I can't imagine an X where I don't want X to happen and would prefer to have X happen twice rather than once. (Barring silly examples like "the power switch for the torture device gets flipped".)
0APMason12y
Can anyone explain what goes wrong if you say something like, "The marginal utility of my terminal values increases asymptotically, and u(Torture) approaches a much higher asymptote than u(Dust speck)" (or indeed whether it goes wrong at all)?
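(A toy numerical sketch of what such an asymptotic valuation could look like; the aggregation formula, the asymptotes and the per-event magnitudes below are all made-up illustration numbers, not anyone's actual values.)

```python
import math

def bounded_disutility(n_events, per_event, asymptote):
    """Toy aggregation: disutility saturates toward `asymptote` instead of
    growing linearly with the number of events."""
    return asymptote * (1 - math.exp(-n_events * per_event / asymptote))

# Made-up illustration numbers:
SPECK_ASYMPTOTE = 1_000.0        # no quantity of dust specks can total more than this
TORTURE_ASYMPTOTE = 1_000_000.0  # torture approaches a much higher asymptote

n_specks = 10**20                # stands in for "very many"; 3^^^3 won't fit in a float
specks = bounded_disutility(n_specks, per_event=1e-6, asymptote=SPECK_ASYMPTOTE)
torture = bounded_disutility(1, per_event=500_000.0, asymptote=TORTURE_ASYMPTOTE)

print(specks, torture)  # ~1,000 vs ~393,469: the torture stays worse for any number of specks
```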
1Lukas_Gloor12y
That's been done in this paper, section VI, "The Asymptotic Gambit".
0APMason12y
Thank you. I had expected the bottom to drop out of it somehow. EDIT: Although come to think of it I'm not sure the objections presented in that paper are so deadly after all if you take TDT-like considerations into account (i.e. there would not be a difference between "kill 1 person, prevent 1000 mutilations" + "kill 1 person, prevent 1000 mutilations" and "kill 2 people, prevent 2000 mutilations".) Will have to think on it some more.
1wedrifid12y
Nothing, iff that happens to be what your actual preferences are. If your preferences do not happen to be as you describe, but you are instead confused by an inconsistency in your intuitions, then you will make incorrect decisions. The challenge is not to construct a utility function such that you can justify it to others in the face of opposition. The challenge is to work out what your actual preferences are and implement them.
2TheOtherDave12y
Ayup. Also, it may be worth saying explicitly that a lot of the difficulty comes in working out a model of my actual preferences that is internally consistent and can be extended to apply to novel situations. If I give up those constraints, it's easier to come up with propositions that seem to model my preferences, because they approximate particular aspects of my preferences well enough that in certain situations I can't tell the difference. And if I don't ever try to make decisions outside of that narrow band of situations, that can be enough to satisfy me.
0Lukas_Gloor12y
[Edited to separate from quote] But doesn't that beg the question? Don't you have to ask the meta-question "what kinds of preferences are reasonable to have?" Why should we shape ethics the way evolution happened to set up our values? That's why I favor a hedonistic utilitarianism that is about actual states of the world that can in themselves be bad (--> suffering).
1TheOtherDave12y
Note that markup requires a blank line between your quote and the rest of the topic. It does beg a question: specifically, the question of whether I ought to implement my preferences (or some approximation of them) in the first place. If, for example, my preferences are instead irrelevant to what I ought to do, then time spent working out my preferences is time that could better have been spent doing something else. All of that said, it sounds like you're suggesting that suffering is somehow unrelated to the way evolution set up our values. If that is what you're suggesting, then I'm completely at a loss to understand either your model of what suffering is, or how evolution works.
1Lukas_Gloor12y
The fact that suffering feels awful is about the very thing itself, and nothing else. There's no valuing required; no being asks itself "should I dislike this experience?" when it is suffering. It wouldn't be suffering otherwise. My position implies that in a world without suffering (or happiness, if I were not a negative utilitarian), nothing would matter.
1TheOtherDave12y
Depends on what I'm trying to do. If I make that assumption, then it follows that given enough Torture to approach its limit, I choose any number of Dust Specks rather than that amount of Torture. If my goal is to come up with an algorithm that leads to that choice, then I've succeeded. (I think talking about Torture and Dust Specks as terminal values is silly, but it isn't necessary for what I think you're trying to get at.)
3MBlume12y
Nope! Some proofs are better-supported than others.
2Richard_Kennaway12y
Yes. The known unreliability of my own thought processes tempers my confidence in any prima facie absurd conclusion I come to. All the more so when it's a conclusion I didn't come to, but merely followed along with someone else's argument to.
2private_messaging12y
I feel this way. The linear theories are usually nothing but first-order approximations. Also, the very idea of summing individual agents' utilities... that's, frankly, nothing but pseudomathematics. Each agent's utility function can be modified without changing the agent's behaviour in any way. The utility function is a phantom. It isn't defined in a way that lets you add two of them together. You can map the same agent's preferences (whenever they are well-ordered) to an infinite variety of real-valued 'utility functions'.
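(A minimal sketch of that point, with made-up agents and numbers: a positive affine rescaling of one agent's utility function leaves that agent's choices untouched, but changes which option the "total" favours.)

```python
# Hypothetical two-agent, two-option example; all numbers are illustrative.
options = ["A", "B"]

alice = {"A": 1.0, "B": 0.0}   # Alice prefers A
bob   = {"A": 0.0, "B": 0.4}   # Bob prefers B

def best_for(agent):
    return max(options, key=lambda o: agent[o])

def total(agents):
    return {o: sum(agent[o] for agent in agents) for o in options}

print(best_for(alice), best_for(bob))   # A B
print(total([alice, bob]))              # {'A': 1.0, 'B': 0.4} -> A "wins" the sum

# Rescale Bob's utilities by a positive affine transform (x -> 10*x + 3).
# Bob's preference ordering, and hence his behaviour, is exactly the same...
bob_rescaled = {o: 10 * u + 3 for o, u in bob.items()}
print(best_for(bob_rescaled))           # still B

# ...but the sum now ranks the options differently.
print(total([alice, bob_rescaled]))     # {'A': 4.0, 'B': 7.0} -> B "wins" the sum
```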

Yes. The trouble with "shut up and multiply" - beyond assuming that humans have a utility function at all - is assuming that utility works like conventional arithmetic and that you can in fact multiply.

There's also measuring and shut-up-and-multiplying the wrong thing: e.g., seeing people willing to pay about the same in total to save 2,000 birds or 20,000 birds and claiming this constitutes "scope insensitivity." The error is assuming this means that people are scope-insensitive, rather than realising that people aren't buying saved birds at all, but are paying what they're willing to pay for warm fuzzies in general - a constant amount.

The attraction of utilitarianism is that calculating actions would be so much simpler if utility functions existed and their output could be added with the same sort of rules as conventional arithmetic. This does not, however, constitute non-negligible evidence that any of the required assumptions hold.

2Richard_Kennaway12y
It even tends to count against it, by the A+B rule. If items are selected by a high enough combined score on two criteria A and B, then among the selected items, there will tend to be a negative correlation between A and B.
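(A quick simulation of the A+B rule, assuming independent standard-normal scores and an arbitrary selection threshold, just to illustrate the effect.)

```python
import random
import statistics

random.seed(0)
# Two independent scores per item: no correlation in the overall population.
population = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]
# Select items whose combined score A + B clears a threshold.
selected = [(a, b) for a, b in population if a + b > 2.0]

def correlation(pairs):
    xs, ys = zip(*pairs)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

print(correlation(population))  # close to 0
print(correlation(selected))    # clearly negative among the selected items
```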
1[anonymous]12y
I don't know who's making that error. Seems like scope insensitivity and purchasing of warm fuzzies are usually discussed together around here. Anyway, if there's an error here then it isn't about utilitarianism vs something else, but about declared vs revealed preference. The people believe that they care about the birds. They don't act as if they cared about the birds. For those who accept deliberative reasoning as an expression of human values it's a failure of decision-making intuitions and it's called scope insensitivity. For those who believe that true preference is revealed through behavior it's a failure of reflection. None of those positions seems inconsistent with utilitarianism. In fact it might be easier to be a total utilitarian if you go all the way and conclude that humans really care only about power and sex. Just give everybody nymphomania and megalomania, prohibit birth control and watch that utility counter go. ;)
0David_Gerard12y
An explanatory reply from the downvoter would be useful. I'd like to think I could learn.
-2private_messaging12y
I don't think it's even linearly combinable. Suppose there were 4 copies of me in total, one pair doing some identical thing, the other pair doing 2 different things. The second pair is worth more. When I see someone go linear on morals, that strikes me as evidence of a poverty of moral values and/or a poverty of the mathematical language they have available. Then there's the consequentialism. The consequences are hard to track - you've got to model the worlds resulting from an uncertain initial state. Really, really computationally expensive. Everything is going to use heuristics, even Jupiter brains. Well, "willing to pay for warm fuzzies" is a bad way to put it IMO. There's a limited amount of money available in the first place; caring about birds rather than warm fuzzies doesn't make you a billionaire.
0A1987dM12y
The figures people would pay to save 2,000, 20,000, or 200,000 birds were $80, $78 and $88 respectively, which oughtn't to be so much that the utility of money for most WEIRD people would be significantly non-linear. (A much stronger effect IMO could be people taking -- possibly subconsciously -- the “2000” or the “20,000” as evidence about the total population of that bird species.)
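(For concreteness, the per-bird willingness to pay implied by those figures:)

```python
# Implied willingness to pay per bird, from the figures quoted above.
for birds, dollars in [(2_000, 80), (20_000, 78), (200_000, 88)]:
    print(f"{birds:>7} birds: ${dollars} total -> {100 * dollars / birds:.4f} cents per bird")
# 2,000 birds:   4.00  cents per bird
# 20,000 birds:  0.39  cents per bird
# 200,000 birds: 0.044 cents per bird
```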
-2[anonymous]12y
Utilitarians don't have to sum different utility functions. A utilitarian has a utility function that happens to be defined as a sum of intermediate values assigned to each individual. Those intermediate values are also (confusingly) referred to as utility, but they don't come from evaluating any of the infinite variety of 'true' utility functions of every individual. They come from evaluating the total utilitarian's model of individual preference satisfaction (or happiness or whatever). Or at least it seems to me that it should be that way. If I see a simple technical problem that doesn't really affect the spirit of the argument then the best thing to do is to fix the problem and move on. If total utilitarianism really is commonly defined as summing every individual's utility function then that is silly, but it's a problem of confused terminology and not really a strong argument against utilitarianism.
3David_Gerard12y
But the spirit of the argument is ungrounded in anything. What evidence is there that you can do this stuff at all using actual numbers without repeatedly bumping into "don't do non-normative things even if you got that answer from a shut-up-and-multiply"?
-2private_messaging12y
Well, then you can have a model where the model of an individual is sad when the real individual is happy and vice versa, and there would be no problem with that. You've got to ground the symbols somewhere. The model has to be defined to approximate reality for it to make sense, and for the model to approximate reality it has to somehow process the individual's internal state.
-2David_Gerard12y
Yes. The error is that humans aren't good at utilitarianism. private_messaging has given an example elsewhere: the trouble with utilitarians is that they think they are utilitarians. They then use numbers to convince themselves to do something they would otherwise consider evil. The Soviet Union was an attempt to build a Friendly government based on utilitarianism. They quickly reached "shoot someone versus dust specks" and went for shooting people. They weren't that good at lesser utilitarian decisions either, tending to ignore how humans actually behaved in favour of taking their theories and shutting-up-and-multiplying. Then when that didn't work, they did it harder. I'm sure someone objecting to the Soviet Union example as non-negligible evidence can come up with examples that worked out much better, of course.
4CarlShulman12y
See Eliezer's Ethical Injunctions post. Also Bryan Caplan:
7David_Gerard12y
As I have noted, when you've repeatedly emphasised "shut up and multiply", tacking "btw don't do anything weird" on the end strikes me as susceptible to your readers not heeding it, particularly when they really need to. If arithmetical utilitarianism works so well, it would work in weird territory. Caplan does have a cultural point on the Soviet Union example. OTOH, it does seem a bit "no true utilitarian".

If arithmetical utilitarianism works so well, it would work in weird territory.

Note the bank robbery thread below. Someone claims that "the utilitarian math" shows that robbing banks and donating to charity would have the best consequences. But they don't do any math or look up basic statistics to do a Fermi calculation. A few minutes of effort shows that bank robbery actually pays much worse than working as a bank teller over the course of a career (including jail time, etc).

In Giving What We Can there are several people who donate half their income (or all income above a Western middle class standard of living) to highly efficient charities helping people in the developing world. They expect to donate millions of dollars over their careers, and to have large effects on others through their examples and reputations, both as individuals and via their impact on organizations like Giving What We Can. They do try to actually work things out, and basic calculations easily show that running around stealing organs or robbing banks would have terrible consequences, thanks to strong empirical regularities:

  1. Crime mostly doesn't pay. Bank robbers, drug dealers, and the like ma

... (read more)
1cousin_it12y
I'm confused. Your comment paints a picture of a super-efficient police force that infiltrates criminal groups long before they act. But the Internet seems to say that many gangs in the US operate openly for years, control whole neighborhoods, and have their own Wikipedia pages...
7Paul Crowley12y
The gangs do well, and the rare criminals who become successful gang leaders may sometimes do well, but does the average gangster do well?
5CarlShulman12y
* Gang membership still doesn't pay relative to regular jobs
* The police largely know who is in the gangs, and can crack down if this becomes a higher priority
* Terrorism is such a priority, to a degree way out of line with the average historical damage, because of 9/11; many have critiqued the diversion of law enforcement resources to terrorism
* Such levels of gang control are concentrated in poor areas with less police funding, and areas where the police are estranged from the populace, limiting police activity.
* Gang violence is heavily directed at other criminal gangs, reducing the enthusiasm of law enforcement, relative to more photogenic victims
0Decius12y
The other side is that robbing banks at gunpoint isn't the most effective way to redistribute wealth from those who have it to those to whom it should go. I suspect that the most efficient way to do that is government seizure - declare that the privately held assets of the bank now belong to the charities. That doesn't work, because the money isn't value, it's a signifier of value, and rewriting the map does not change the territory - if money is forcibly redistributed too much, it loses too much value and the only way to enforce the tax collection is by using the threat of prison and execution - but the jailors and executioners can only be paid by the taxes. Effectively robbing banks to give the money to charity harms everyone significantly, and fails to be better than doing nothing.
2wedrifid12y
It may have been better if CarlShulman used a different word - perhaps 'Evil' - to represent the 'ethical injunctions' idea. That seems to better represent the whole "deliberately subvert consequentialist reasoning in certain areas due to acknowledgement of corrupted and bounded hardware". 'Weird' seems to be exactly the sort of thing Eliezer might advocate. For example "make yourself into a corpsicle" and "donate to SingInst".
1David_Gerard12y
But, of course, "weird" versus "evil" is not even broadly agreed upon. And "weird" includes many things Eliezer advocates, but I would be very surprised if it did not include things that Eliezer most certainly would not advocate.
1wedrifid12y
Of course it does. For example: dressing up as a penguin and beating people to death with a live fish. But that's largely irrelevant. Rejecting 'weird' as the class of things that must never be done is not the same thing as saying that all things in that class must be done. Instead, weirdness is just ignored.
-4Dolores198412y
I've always felt that post was very suspect. Because, if you do the utilitarian math, robbing banks and giving the money to charity is still a good deal, even if there's a very low chance of it working. Your own welfare simply isn't a factor, given the size of the variables you're playing with. It seems to me that there is a deeper moral reason not to murder organ donors or steal food for the hungry than 'it might end poorly for you.'

Because, if you do the utilitarian math, robbing banks and giving the money to charity is still a good deal

Bank robbery is actually unprofitable. Even setting aside reputation (personal and for one's ethos), "what if others reasoned similarly," the negative consequences of the robbery, and so forth, you'd generate more expected income working an honest job. This isn't a coincidence. Bank robbery hurts banks, insurers, and ultimately bank customers, and so they are willing to pay to make it unprofitable.

According to a study by British researchers Barry Reilly, Neil Rickman and Robert Witt written up in this month's issue of the journal Significance, the average take from a U.S. bank robbery is $4,330. To put that in perspective, PayScale.com says bank tellers can earn as much as $28,205 annually. So, a bank robber would have to knock over more than six banks, facing increasing risk with each robbery, in a year to match the salary of the tellers he's holding up.
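(The back-of-the-envelope version of that comparison, using only the two figures quoted above:)

```python
# Figures quoted from the Reilly/Rickman/Witt study and PayScale.
average_take_per_robbery = 4_330   # USD per U.S. bank robbery, on average
teller_salary = 28_205             # USD per year, upper-end bank teller salary

robberies_to_match_salary = teller_salary / average_take_per_robbery
print(robberies_to_match_salary)   # about 6.5 robberies per year just to match the
                                   # teller's pay, before counting arrest risk and jail time
```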

-1Dolores198412y
That was a somewhat lazy example, I admit, but consider the most inconvenient possible world. Let's say you could expect to take a great deal more from a bank robbery. Would it then be valid utilitarian ethics to rob (indirectly) from the rich (us) to give to the poor?
9CarlShulman12y
My whole point in the comments on this post has been that it's a pernicious practice to use such false examples. They leave erroneous impressions and associations. A world where bank-robbery is super-profitable, so profitable as to outweigh the effects of reputation and the like, is not very coherent. A better example would be something like: "would utilitarians support raising taxes to fund malaria eradication," or "would a utilitarian who somehow inherited swoopo.com (a dollar auction site) shut down the site or use the revenue to save kids from malaria" or "if a utilitarian inherited the throne in a monarchy like Oman (without the consent of the people) would he spend tax revenues on international good causes or return them to the taxpayers?"
9MarkusRamikin12y
Only if you're bad at math. Banks aren't just piggy banks to smash; they perform a useful function in the economy, and disrupting it has consequences. Of course I prefer to defeat bad utilitarian math with better utilitarian math rather than with ethical injunctions. But hey, that's the woe of bounded reason, even without going into the whole corrupted-hardware problem: your model is only so good, and heuristics that serve as warning signals have their place.
1Lukas_Gloor12y
Why would that be an error? It's not a requirement for an ethical theory that Homo sapiens must be good at it. If we notice that humans are bad at it, maybe we should make AI or posthumans that are better at it, if we truly view this as the best ethical theory. Besides, if the outcome of people following utilitarianism is really that bad, then utilitarianism would demand (it gets meta now) that people should follow some other theory that overall has better outcomes (see also Parfit's Reasons and Persons). Another solution is Hare's proposed "Two-Level Utilitarianism". From Wikipedia:
2David_Gerard12y
The error is that it's humans who are attempting to implement the utilitarianism. I'm not talking about hypothetical non-human intelligences, and I don't think they were implied in the context.
2fubarobfusco12y
See also Ends Don't Justify Means (Among Humans): having non-consequentialist rules (e.g. "Thou shalt not murder, even if it seems like a good idea") can be consequentially desirable since we're not capable of being ideal consequentialists.
9David_Gerard12y
Oh, indeed. But when you've repeatedly emphasised "shut up and multiply", tacking "btw don't do anything weird" on the end strikes me as susceptible to your readers not heeding it, particularly when they really need to.
1private_messaging12y
I don't think hypothetical superhumans would be dramatically different in their ability to employ predictive models under uncertainty. If you increase power so that it is to mankind as mankind is to one amoeba, you only double anything that is fundamentally logarithmic. While in many important cases there are faster approximations, it's magical thinking to expect them everywhere; and there are problems where the errors inherently grow exponentially with time even if the model is magically perfect (the butterfly effect). Plus, of course, models of other intelligences rapidly get unethical as you try to improve fidelity (if improving fidelity means emulating people and putting them through the torture and dust-speck experiences to compare values).
-1private_messaging12y
Well, those examples would have a lot of "okay, we can't calculate utility here, so we'll use a principle" and far less faith in direct utilitarianism. With torture and dust specks, see, it arrives at a counterintuitive conclusion, but it is not proof-grade reasoning by any means. Who knows, maybe the correct algorithm for evaluating torture vs dust specks must have BusyBeaver(10) for the torture and BusyBeaver(9) for the dust specks, or something equally outrageously huge (after all, thought, which is what torture screws with, is Turing-complete). 3^^^3 is not a very big number. There are numbers which are big like you wouldn't believe. Edit: also, I think even vastly superhuman entities wouldn't be very good at consequence evaluation, especially from an uncertain start state. In any case, some sort of morality oracle would have to be able to, at the very least, take in the full specs of a human brain and then spit out an understanding of how to trade off the extreme pain of that individual against a dust speck for that individual (a task which may well end up requiring ultra-long computations, BusyBeaver(1E10) style. Forget the puny up-arrow). That's an enormously huge problem which the torture-choosers obviously a) haven't done and b) didn't even comprehend that something like this would be needed. Which brings us to the final point: the utilitarians are the people who haven't the slightest clue what it might take to make a utilitarian decision, but are unaware of that deficiency. Edit: and also, I would likely take a 1/3^^^3 chance of torture over a dust speck. Why? Because a dust speck may result in an accident leading to decades of torturous existence. The dust speck's own value is still not comparable; it only bothers me because it creates the risk. Edit: note, the Busy Beaver reference is just an example. Before you can be additively operating on dust specks and pain, and start doing some utilitarian math there, you have to at least understand how the hell it is that an algorith
5A1987dM12y
IIRC, in the original torture vs specks post EY specified that none of the dust specks would have any long-term consequence.
0private_messaging12y
I know. Just wanted to point out where the personal preference (easily demonstrable when people e.g. neglect to take inconvenient safety measures) for a small chance of torture over a definite dust speck comes from.

Another problem with the repugnant conclusion is economic: it assumes that the cost of creating and maintaining additional barely-worth-living people is negligibly small.

The problem here seems to be that the theories don't take all the things we value into account. It's therefore less certain whether their functions actually match our morals. If you calculate utility using only some of your utility values, you're not going to get the correct result. If you're trying to sum the set {1,2,3,4} but you only use 1, 2 and 4 in the calculation, you're going to get the wrong answer. Outside of special cases like "multiply each item by zero" it doesn't matter whether you add, subtract or divide, the answer will still be wron... (read more)

It's a good and thoughtful post.

Going through the iteration, there will come a point when the human world is going to lose its last anime, its last opera, its last copy of the Lord of the Rings, its last mathematics, its last online discussion board, its last football game - anything that might cause more-than-appropriate enjoyment. At that stage, would you be entirely sure that the loss was worthwhile, in exchange of a weakly defined "more equal" society?

I wonder if it makes sense to model a separate variable in the global utility function f... (read more)

4Nornagest12y
I'll accept the intuition, but culture seems even harder to quantify than individual welfare -- and the latter isn't exactly easy. I'm not sure what we should be summing over even in principle to arrive at a function for cultural utility, and I'm definitely not sure if it's separable from individual welfare. One approach might be to treat cultural artifacts as fractions of identity, an encoding of their creators' thoughts waiting to be run on new hardware. Individually they'd probably have to be considered subsapient (it's hard to imagine any transformation that could produce a thinking being when applied to Lord of the Rings), but they do have the unique quality of being transmissible. That seems to imply a complicated value function based partly on population: a populous world containing Lord of the Rings without its author is probably enriched more than one containing a counterfactual J.R.R. Tolkien that never published a word. I'm not convinced that this added value need be positive, either: consider a world containing one of H.P. Lovecraft's imagined pieces of sanity-destroying literature. Or your own least favorite piece of real-life media, if you're feeling cheeky.
3Lukas_Gloor12y
How about a universe with one planet full of inanimate cultural artifacts of "great artistic value", and, on another planet that's forever unreachable, a few creatures in extreme suffering? If you make the cultural value on the artifact planet high enough, it would seem to justify the suffering on the other planet, and you'd then have to prefer this to an empty universe, or one with insentient plankton. But isn't that absurd? Why should creatures suffer lives not worth living just because somewhere far away there are rocks with fancy symbols on them?
0Mass_Driver12y
Because I like rocks with fancy symbols on them? I'm uncertain about this; maybe sentient experiences are so sacred that they should be lexically privileged over other things that are desirable or undesirable about a Universe. But, basically, I don't have any good reason to prefer that you be happy vs. unhappy -- I just note that I reliably get happy when I see happy humans and/or lizards and/or begonias and/or androids, and I reliably get unhappy when I see unhappy things, so I prefer to fill Universes with happy things, all else being equal. Similarly, I feel happy when I see intricate and beautiful works of culture, and unhappy when I read Twilight. It feels like the same kind of happy as the kind of happy I get from seeing happy people. In both cases, all else being equal, I want to add more of it to the Universe. Am I missing something? What's the weakest part of this argument?
3TheOtherDave12y
So, now I'm curious... if tomorrow you discovered some new thing X you'd never previously experienced, and it turned out that seeing X made you feel happier than anything else (including seeing happy things and intricate works of culture), would you immediately prefer to fill Universes with X?
0Mass_Driver12y
I should clarify that by "fill" I don't mean "tile." I'm operating from the point of view where my species' preferences, let alone my preferences, fill less than 1 part in 100,000 of the resource-rich volume of known space, let alone theoretically available space. If that ever changed, I'd have to think carefully about what things were worth doing on a galactic scale. It's like the difference between decorating your bedroom and laying out the city streets for downtown -- if you like puce, that's a good enough reason to paint your bedroom puce, but you should probably think carefully before you go influencing large or public areas. I would also wonder, if some new thing made me incredibly happy, whether perhaps it was designed to do that by someone or something that isn't very friendly toward me. I would suspect a trap. I'd want to take appropriate precautions to rule out that possibility. With those two disclaimers, though, yes. If I discovered fnord tomorrow and fnord made me indescribably happy, then I'd suddenly want to put a few billion fnords in the Sirius Sector.
2Lukas_Gloor12y
Do you think the preferences of your species matter more than preferences of some other species, e.g. intelligent aliens? I think that couldn't be justified. I'm currently working on a LW article about that.
0Mass_Driver12y
I haven't thought much about it! I look forward to reading your article. My point above was simply that even if my whole species acted like me, there would still be plenty of room left in the Universe for a diversity of goods. Barring a truly epic FOOM, the things humans do in the near future aren't going to directly starve other civilizations out of a chance to get the things they want. That makes me feel better about going after the things I want.
2TheOtherDave12y
(nods) Makes sense. If I offered to, and had the ability to, alter your brain so that something that already existed in vast quantities -- say, hydrogen atoms -- made you indescribably happy, and you had taken appropriate precautions to rule out the possibility that I wasn't very friendly towards you and that this was a trap, would you agree?
1Mass_Driver12y
Sure! That sounds great. Thank you. :-)
1Lukas_Gloor12y
I think it's a category error to see ethics as only being about what one likes (even if that involves some work getting rid of obvious contradictions). In such a case, doing ethics would just be descriptive; it would tell us nothing new, and the outcome would be whatever evolution arbitrarily equipped us with. Surely that's not satisfying! If evolution had equipped us with a strong preference to generate paperclips, should our ethicists then be debating how best to fill the universe with paperclips? Rather, we should be trying to come up with better reasons than mere intuitions and preferences arbitrarily shaped by blind evolution. If there were no suffering and no happiness, I might agree with ethics just being about whatever you like, and I'd add that one might as well change what one likes and do whatever, since nothing then truly mattered. But it's a fact that suffering is intrinsically awful, in the only way something can be, for some first-person point of view. Of pain, one can only want one thing: that it stop. I know this about my pain as certainly as I know anything. And just because some other being's pain is at another spatio-temporal location doesn't change that. If I have to find good reasons for the things I want to do in life, there's nothing that makes even remotely as much sense as trying to minimize suffering. Especially if you add that caring about my future suffering might not be more rational than caring about all future suffering, as some views on personal identity imply.
2Mass_Driver12y
I used to worry about that a lot, and then AndrewCritch explained at minicamp that the statement "I should do X" can mean "I want to want to do X." In other words, I currently prefer to eat industrially raised chicken sometimes. It is a cold hard fact that I will frequently go to a restaurant that primarily serves torture-products, give them some money so that they can torture some more chickens, and then put the dead tortured chicken in my mouth. I wish I didn't prefer to do that. I want to eat Subway footlongs, but I shouldn't eat Subway footlongs. I aspire not to want to eat them in the future. Also check out the Sequences article "Thou Art Godshatter." Basically, we want any number of things that have only the most tenuous ties to evolutionary drives. Evolution may have equipped me with an interest in breasts, but it surely is indifferent to whether the lace on a girlfriend's bra is dyed aquamarine and woven into a series of cardioids or dyed magenta and woven into a series of sinusoidal spirals -- whereas I have a distinct preference. Eliezer explains it better than I do. I'm not sure "intrinsically awful" means anything interesting. I mean, if you define suffering as an experience E had by person P such that P finds E awful, then, sure, suffering is intrinsically awful. But if you don't define suffering that way, then there are at least some beings that won't find a given E awful.
2TheOtherDave12y
(shrug) I agree that suffering is bad. It doesn't follow that the only thing that matters is reducing suffering.
2Lukas_Gloor12y
But suffering is bad no matter your basic preference architecture. It takes the arbitrariness out of ethics when it applies that universally. Suffering is bad (for the first-person point of view experiencing it) in all hypothetical universes. Well, by definition. Culture isn't. Biological complexity isn't. Biodiversity isn't. Even if it's not all that matters, it's a good place to start. And a good way to see whether something else really matters too is to look at whether you'd be willing to trade a huge amount of suffering for whatever else you consider to matter, all else being equal (as I did in the example about the planet full of artifacts).
4TheOtherDave12y
Yes, basically everyone agrees that suffering is bad, and reducing suffering is valuable. Agreed. And as you say, for most people there are things that they'd accept an increase in suffering for, which suggests that there are also other valuable things in the world. The idea of using suffering-reduction as a commensurable common currency for all other values is an intriguing one, though.

For instance, it seems that there is only a small difference between the happiness of richer nations and poorer nations

What is happiness? If happiness is the "utility" that people maximise (is it?), and the richer are only slightly happier than the poorer (cite?), why is it that when people have the opportunity to vote with their feet, people in poor nations flock to richer nations whenever they can, and do not want to return?

1Stuart_Armstrong12y
There's a variety of good literature on the subject (one key component is that people are abysmally bad at estimating their future levels of happiness). There are always uncertainties in defining happiness (as with anything), but there's a clear consensus that whatever is making people move countries, actual happiness levels are not it. (Now, expected happiness levels might be it; or, more simply, people want a lot of things, and happiness is just one of them.)

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness. In fact if one can kill a billion people to create a billion and one, one is morally compelled to do so.

I dare say that no self-professed "total utilitarian" actually aliefs this.

5Lukas_Gloor12y
I know total utilitarians who'd have no problem with that. Imagine simulated minds instead of carbon-based ones. If you can just imagine shutting one simulation off and turning on another one, this can eliminate some of our intuitive aversions to killing and maybe it will make the conclusion less counterintuitive. Personally I'm not a total utilitarian, but I don't think that's a particularly problematic aspect of it. My problem with total hedonistic utilitarianism is the following: Imagine a planet full of beings living in terrible suffering. You have the choice to either euthanize them all (or just make them happy), or let them go on living forever, while also creating a sufficiently huge number of beings with lives barely worth living somewhere else. Now that I find unacceptable. I don't think you do anything good by bringing a happy being into existence.
3Dolores198412y
As someone who plans on uploading eventually, if the technology comes around... no. Still feels like murder.
1Will_Sawin12y
This is problematic. If bringing a happy being into existence doesn't do anything good, and bringing a neutral being into existence doesn't do anything bad, what do you do when you switch a planned neutral being for a planned happy being? For instance, you set aside some money to fund your unborn child's education at the College of Actually Useful Skills.
1Lukas_Gloor12y
Good catch; I'm well aware of that. I didn't say that I think bringing a neutral being into existence is neutral. If the neutral being's life contains suffering, then the suffering counts negatively. Prior-existence views seem not to work without the inconsistency you pointed out. The only consistent alternative to total utilitarianism is, as I see it currently, negative utilitarianism. Which has its own repugnant conclusions (e.g. anti-natalism), but for several reasons I find those easier to accept.
0Stuart_Armstrong12y
As I said, any preferences that can be cast into utility function form are consistent. You seem to be adding extra requirements for this "consistency".
2Lukas_Gloor12y
I should qualify my statement. I was talking only about the common varieties of utilitarianism and I may well have omitted consistent variants that are unpopular or weird (e.g. something like negative average preference-utilitarianism). Basically my point was that "hybrid views" like prior-existence (or "critical level" negative utilitarianism) run into contradictions. Most forms of average utilitarianism aren't contradictory, but they imply an obvious absurdity: A world with one being in maximum suffering would be [edit:] worse than a world with a billion beings in suffering that's just slightly less awful.
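(A toy calculation of that verdict, with made-up welfare numbers standing in for "maximum suffering" and "slightly less awful":)

```python
# Made-up welfare levels: -100 is "maximum suffering", -99 is "slightly less awful".
avg_world_1 = -100 / 1                               # one being in maximum suffering
avg_world_2 = (-99 * 1_000_000_000) / 1_000_000_000  # a billion beings suffering slightly less

print(avg_world_1, avg_world_2)  # -100.0 vs -99.0: average utilitarianism ranks world 2
                                 # higher, despite its vastly greater total suffering
```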
1APMason12y
That last sentence didn't make sense to me when I first looked at this. I think you must mean "worse", not "better".
0Lukas_Gloor12y
Indeed, thanks.
0Stuart_Armstrong12y
I'm still vague on what you mean by "contradictions".
1Lukas_Gloor12y
Not in the formal sense. I meant for instance what Will_Sawin pointed out above: a neutral life (a lot of suffering and a lot of happiness) being equally worthy of creating as a happy one (mainly just happiness, very little suffering). Or for "critical levels" (which also refers to the infamous dust specks), see section VI of this paper, where you get different results depending on how you start aggregating. And Peter Singer's prior-existence view seems to contain a "contradiction" (maybe "absurdity" is better) as well, having to do with replaceability, but that would take me a while to explain. It's not quite a contradiction in the sense that the theory states "do X and not-X", but it's obvious enough that something doesn't add up. I hope that led to some clarification; sorry for my terminology.
0Will_Sawin12y
Ah, I see. Anti-natalism is certainly consistent, though I find it even more repugnant.
-4jefftk12y
Assuming perfection in the methods, ending N lives and replacing them with N+1 equally happy lives doesn't bother me. Death isn't positive or negative except in as much as it removes the chance of future joy/suffering by the one killed and saddens those left behind. With physical humans you won't have perfect methods and any attempt to apply this will end in tragedy. But with AIs (emulated brains or fully artificial) it might well apply.

A more general problem with utilitarianisms including those that evade the critique in that article:

Suppose we have a computer running a brain sim (along with a VR environment). The brain sim works as follows: given the current state, the next state is calculated (using multiple CPUs in parallel); the current state is read-only, the next state is write-only. Think arrays of synaptic values. After all of the next state is calculated, the arrays are switched and the old state data is written over. This is a reductionist model of 'living' that is rather easy to thin... (read more)
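To make the double-buffered model above concrete, here is a minimal sketch of that update loop, assuming the state is just a numeric array and using an arbitrary stand-in update rule; the function names and numbers are illustrative, not taken from the comment.

```python
import numpy as np

def step(current, weights):
    # Stand-in for whatever the parallel CPUs compute from the read-only current state.
    return np.tanh(weights @ current)

def run_sim(initial_state, weights, ticks):
    # Two preallocated buffers; on each tick one is read-only, the other write-only.
    # After the tick they are swapped, so the old state data gets written over on the
    # following tick -- the "switch arrays, overwrite the old state" scheme described above.
    current = np.asarray(initial_state, dtype=float)
    nxt = np.empty_like(current)
    for _ in range(ticks):
        nxt[:] = step(current, weights)  # write-only buffer filled from read-only buffer
        current, nxt = nxt, current      # swap; the old state is overwritten, no backup kept
    return current

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
print(run_sim([0.1, 0.2, 0.3, 0.4], w, ticks=10))
```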

4torekp12y
Why assume that utility is a function of individual states in this model, rather than processes? Can't a utilitarian deny that instantaneous states, considered apart from context, have any utility?
2private_messaging12y
What is "processes" ? What's about not switching state data in above example? (You keep re-calculating same state from previous state; if it's calculation of the next state that is the process then the process is all right) Also, at that point you aren't rescuing utilitarianism, you're going to some sort of virtue ethics where particular changes are virtuous on their own. Bottom line is, if you don't define what is processes then you just plug in something undefined through which our intuitions can pour in and make it look all right even if the concept is still fundamentally flawed. We want to overwrite the old state with the new state. But we would like to preserve old state in a backup if we had unlimited memory. It thus follows that there is a tradeoff decision between worth of old state, worth of new state, and cost of backup. You can proclaim that instantaneous states considered apart from context don't have any utility. Okay you have what ever context you want, now what are the utilities of the states and the backup, so that we can decide when to do the backup? How often to do the backup? Decide on optimal clock rate? etc.
1torekp12y
A process, at a minimum, takes some time (dt > 0). Calculating the next state from previous state would be a process. If you make backups, you could also make additional calculation processes working from those backed-up states. Does that count as "creating more people"? That's a disputed philosophy of mind question on which reasonable utilitarians might differ, just like anyone else. But if they do say that it creates more people, then we just have yet another weird population ethics question. No more and no less a problem for utilitarianism than the standard population ethics questions, as far as I can see. Nothing follows about each individual's life having to have ever-increasing utility lest putting that person in stasis be considered better.
3private_messaging12y
I actually would be very curious about any ideas for how 'utilitarianism' could be rescued from this. Any ideas? I don't believe direct utilitarianism works as a foundation; declaring that intelligence is about maximizing 'utility' just trades one thing (intelligence) that has not been reduced to elementary operations but that we at least have good reasons to believe should be reducible (we are intelligent, and the laws of physics are, in the relevant approximation, computable), for something ("utility") that not only hasn't been shown reducible, but for which we have no good reason to think it is reducible or works on reductionist models (observe how there's suddenly a problem with the utility of life once I consider a mind upload simulated in a very straightforward way; observe how the number of paperclips in the universe is impossible or incredibly difficult to define as a mathematical function). edit: Note: the model-based utility-based agent does not have a real-world utility function, and as such, no matter how awesomely powerful the solver it uses to find maxima of mathematical functions is, it won't ever care if its output gets disconnected from the actuators, unless such a condition was explicitly included in the model; furthermore, it will break itself if the model includes itself and it is allowed to modify the model, once again no matter how powerful its solver is. The utility is defined within a very specific non-reductionist model where e.g. a paperclip is a high-level object, and 'improving' the model (e.g. finding out that a paperclip is in fact made of atoms) breaks the utility measurement (it was never defined how to recognize when those atoms/quarks/whatever novel physics the intelligence came up with constitute a paperclip). This is not a deficiency when it comes to solving practical problems other than 'how do we destroy mankind by accident'.

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness. In fact if one can kill a billion people to create a billion and one, one is morally compelled to do so. And this is true for real people, not just thought experiment people - living people with dreams, aspirations, grudges and annoying or endearing quirks.

Keep in mind that the people being brought into existence will be equally real people, with dreams, aspirations, grudges, and a... (read more)

An argument that I have met occasionally is that while other ethical theories such as average utilitarianism, birth-death asymmetry, path dependence, preferences of non-loss of culture, etc... may have some validity, total utilitarianism wins as the population increases because the others don't scale in the same way. By the time we reach the trillion trillion trillion mark, total utilitarianism will completely dominate, even if we gave it little weight at the beginning.

I'll admit I haven't encountered this argument before, but to me it looks like a type... (read more)

I like that article. I wrote something on another problem with utilitarianism.

Also, by the way, regarding the use of the name of Bayes: you really should thoroughly understand this paper, and also get some practice solving belief propagation approximately on not-so-small networks full of loops and cycles (or any roughly isomorphic problem), to form an opinion on self-described Bayesianists.

And the sum itself is a huge problem. There is no natural scale on which to compare utility functions. Divide one utility function by a billion, multiply the other by eπ, and they are still perfectly valid utility functions. In a study group at the FHI, we've been looking at various ways of combining utility functions - equivalently, of doing interpersonal utility comparisons (IUC). Turns out it's very hard, there seems no natural way of doing this, and a lot has also been written about this, concluding little. Unless your theory comes with a particular I

... (read more)

This one left me wondering - is "population ethics" any different from "politics"?

0Danfly12y
Interesting point, but I would say there are areas of politics that don't really come under "ethics". "What is currently the largest political party in the USA?" is a question about politics and demographics, but I wouldn't call it a question of population ethics. I'd say that you could probably put anything from the subset of "population ethics" under the broad umbrella of "politics", though.

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness (or preference satisfaction or welfare).

Other members of society typically fail to approve of murder, and would apply sanctions to the utilitarian - probably hindering them in their pursuit of total utility. So, in practice, a human being pursuing total utilitarianism would simply not act in this way.

Good article! Here are a few related questions:

  1. The problem of comparing different people's utility functions applies to average utilitarianism as well, doesn't it? For instance, if your utility function is U and my utility function is V, then the average could be (U + V)/2; however, utility functions can be rescaled by any positive linear function, so let's make mine 1000000 x V. Now the average is U/2 + 500000 x V, which seems totally fair, doesn't it? (A small numeric sketch of this rescaling problem follows after this comment.) Is the right solution here to assume that each person's utility has a "best possible" case, and a "

... (read more)
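A small numeric sketch of the rescaling problem in question 1, with made-up outcomes and utility numbers (nothing here is from the original comment): averaging utilities picks out a different "best" outcome depending on an arbitrary positive rescaling of one person's utility function, even though that rescaling leaves the person's own preferences untouched.

```python
# Two outcomes, two people; the utility numbers are invented for illustration.
outcomes = {
    "A": {"you": 10.0, "me": 1.0},
    "B": {"you": 4.0,  "me": 2.0},
}

def social_average(outcome, my_scale=1.0):
    u = outcomes[outcome]
    return (u["you"] + my_scale * u["me"]) / 2  # the (U + V)/2 rule, with V rescaled

for scale in (1.0, 1_000_000.0):
    best = max(outcomes, key=lambda o: social_average(o, scale))
    print(f"rescaling factor {scale:>9.0f}: the 'best' outcome is {best}")

# factor 1        -> A is best (your 10 vs 4 dominates the average)
# factor 1000000  -> B is best (my rescaled 2 vs 1 dominates instead)
```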

I think I agree with your conclusion. But this:

to increase utility we should simply kill off all the rich, and let the poor multiply to take their place (continually bumping off any of the poor that gets too rich).

requires you to assume that the US or "the rich" have no relevant chance of producing vastly happier people in the future. This seems stronger than denying the singularity as such. And it makes targeted killing feel much more attractive to this misanthrope.

Only a slightly relevant question which nevertheless I haven't yet seen addressed: If a utilitarian desires to maximise other people's utilities, and the other people are utilitarians themselves, also deriving their utility from the utilities of others (the original utilitarian included), doesn't that make utilitarianism impossible to define? The consensus seems to be that one can't take one's own mental states as an argument of one's own utility function. But utilitarians rarely object to plugging others' mental states into their utility functions, so the danger of circularity isn't avoided. Is there some clever solution to this?

2novalis12y
No, because utilitarianism does not specify a utilitarian's desires; it specifies what they consider moral. There are lots of things we desire to do that aren't moral, and that we choose not to do because they are not moral.
2prase12y
I believe this doesn't answer my question; I will reformulate the problem in order to remove potentially problematic words and make it more specific: Let the world contain at least two persons, P1 and P2, with utility functions U1 and U2. Both are traditional utilitarians: they value the happiness of others. Assume that U1 is a sum of two terms: H2 + u1(X), where H2 is some measure of the happiness of P2 and u1(X) represents P1's utility unrelated to P2's happiness, X being the state of the rest of the world; similarly U2 = H1 + u2(X). (H1 and H2 are monotonic functions of happiness but not necessarily linear - whatever that would even mean - so having U as a linear function of H is still quite general.) Also, as for most people, the happiness of the model utilitarians is correlated with their utility. Let's again assume that happiness decomposes into sums of independent terms such that H1 = h1(U1) + w1(X), where w1 contains all non-utility sources of happiness and h1(.) is an increasing function; similarly for the second agent. So we have: U1 = h2(U2) + w2(X) + u1(X) and U2 = h1(U1) + w1(X) + u2(X). Whether this does or doesn't have a solution (for U1 and U2) depends on the details of h1, h2, u1, u2, w1, w2 and X. But what I say is that this system of equations is a direct analogue of the forbidden U = h(U) + u(X), i.e. of one's utility function taking itself as an argument.
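As a sketch of how such a mutually recursive system can (or can fail to) pin down U1 and U2, here is a naive fixed-point iteration on the two equations above, with illustrative choices that are not in the comment: h1 and h2 taken to be bounded (tanh) and the X-dependent terms collapsed into constants. Whether a solution exists, and whether this kind of iteration finds it, depends entirely on those choices, which is the dependence the comment points out.

```python
import math

# U1 = h2(U2) + w2(X) + u1(X)
# U2 = h1(U1) + w1(X) + u2(X)
h1 = h2 = math.tanh   # illustrative bounded choice for the happiness-from-utility terms
c1 = 2.0              # stands in for w2(X) + u1(X)
c2 = -1.0             # stands in for w1(X) + u2(X)

U1, U2 = 0.0, 0.0
for _ in range(200):  # naive simultaneous fixed-point iteration
    U1, U2 = h2(U2) + c1, h1(U1) + c2

print(U1, U2)
# Check that the result actually satisfies both equations (it does for these choices;
# with unbounded or steep h the iteration need not settle down at all):
print(abs(U1 - (h2(U2) + c1)) < 1e-9, abs(U2 - (h1(U1) + c2)) < 1e-9)
```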
3endoself12y
This is untrue in general. I would prefer that someone who I am unaware of be happy, but it cannot make me happier since I am unaware of that person. In general, it is important to draw a distinction between the concept of a utility function, which describes decisions being made, and that of a hedonic function, which describes happiness, or, if you are not purely a hedonic utilitarian, whatever functions describe other things that are mentioned in, but not identical to, your utility function.
1prase12y
Yes, I may not know the exact value of my utility since I don't know the value of every argument it takes, and yes, there are consequently changes in utility which aren't accompanied by corresponding changes in happiness, but no, this doesn't mean that utility and happiness aren't correlated. Your comment would be a valid objection to the relevance of my original question only if happiness and utility were strictly isolated and independent of each other, which, for most people, isn't the case. Also, this whole issue could be sidestepped if the utility function of the first agent had the utility of the second agent as an argument directly, without the intermediation of happiness. I am not sure, however, whether standard utilitarianism allows caring about other agents' utilities.
0mwengler12y
There may be many people whose utility you are not aware of, but there are also many people whose utility you are aware of, and whose utility you can affect with your actions. I think prase's points are quite interesting just considering the ones in your awareness/sphere of influence.
0endoself12y
I'm not sure exactly why prase disagrees with me - I can think of many mutually exclusive reasons that it would take a while to write out individually - but since two people have now responded I guess I should ask for clarification. Why is the scenario described impossible?
0novalis12y
Here's another way to look at it: Imagine that everyone starts at time t1 with some level of utility, U[n]. Now, they generate a utility based on their beliefs about the sum of everyone else's utility (at time t1). Then they update by adding some function of that summed (averaged, whatever) utility to their own happiness. Let's assume that function is some variant of the sigmoid function. This is actually probably not too far off from reality. Now we know that the maximum happiness (from the utility of others) that a person can have is one (and the minimum is negative one). And assuming that most people's base level of happiness is somewhat larger than the effect of utility, this is going to be a reasonably stable system. This is a much more reasonable model, since we live in a time-varying world, and our beliefs about that world change over time as we gain more information.
1prase12y
When information propagates fast relative to the rate of change of external conditions, the dynamic model converges to the stable point which would be the solution of the static model - are the models really different in any important aspect? Instability is indeed eliminated by use of sigmoid functions, but then the utility gained from happiness (of others) is bounded. Bounded utility functions solve many problems, the "repugnant conclusion" of the OP included, but some prominent LWers object to their use, pointing out scope insensitivity. (I have personally no problems with bounded utilities.)
0novalis12y
Utility functions need not be bounded, so long as their contribution to happiness is bounded.
0mwengler12y
I think you are on to something brilliant here. The thing that is new to me in your question is the recursive aspect of utilitarianism. If a theory of morality says the moral thing to do is to maximize utility, then clearly maximizing utility is a thing that has utility. From here, in an engineering sense, you'd have at least two different places you could go. A sort of naive place to go would be to try to have each person maximize total utility independently of what others are doing, noting that other people's utility summed up is much larger than one's own utility. Then to a very large extent your behavior will be driven by maximizing other people's utility. In a naive design involving, say, 100 utilitarians, one would be "over-driving" the system by ~100x, if each utilitarian was separately calculating everybody else's utility and trying to maximize it. In some sense, it would be like a feedback system with way too much gain: 99 people all trying to maximize your utility. An alternative place to go would be to say utility is a meta-ethical consideration: an ethical system should have the property that it maximizes total utility. But then from engineering considerations you would expect 1) lots of different rule systems that come close to maximizing utility, and 2) among the simplest and most effective would be to have each agent maximize its own utility under the constraint of rules designed to get rid of anti-synergistic effects and to enhance synergistic effects. So you would expect contract law, anti-fraud law, laws against bad externalities, laws requiring participation in good externalities. But in terms of "feedback," each agent in the system would be actively adjusting to maximize its own utility within the constraints of the rules. This might be called rule-utilitarianism, but really I think it is a hybrid of rule utilitarianism and justified selfishness (Rand's egoism? Economics' "homo economicus" rational utili

Why then is it so popular? Well, one reason is that there are models that make use of something like total utilitarianism to great effect. Classical economic theory, for instance, models everyone as perfectly rational expected utility maximisers.

Surely that is not the reason. Firstly, utilitarianism is not that popular. My theory about why it has any adherents at all is that it is used for signalling purposes. One use of moral systems is to broadcast what a nice person you are. Utilitarianism is a super-unselfish moral system. So, those looking for a niceness superstimulus are attracted. I think this pretty neatly explains the 'utilitarianism' demographics.

-4Ghatanathoah12y
I don't know if you mean to come across this way, but the way you have written this makes it sound like you think utilitarians are cynically pretending to believe in utilitarianism to look good to others, but don't really believe it in their heart of hearts. I don't think this is true in most cases; I think utilitarians are usually sincere, and most failures to live up to their beliefs can be explained by akrasia. If you want a plausible theory as to how natural selection could produce sincere altruism, look at it from a game-theoretic perspective. People who could plausibly signal altruism and trustworthiness would get huge evolutionary gains because they could attract trading partners more easily. One of the more effective ways to signal that you possess a trait is to actually possess it. One of the most effective ways to signal you are altruistic and trustworthy is to actually be altruistic and trustworthy. So it's plausible that humans evolved to be genuinely nice, trustworthy, and altruistic, probably because the evolutionary gains from getting trade partners to trust them outweighed the evolutionary losses from sacrificing for others. Akrasia can be seen as an evolved mechanism that sabotages our altruism in an ego-dystonic way, so that we can truthfully say we're altruists without making maladaptive sacrifices for others. Of course, the fact that our altruistic tendencies may have evolved for genetically selfish reasons gives us zero reason to behave in a selfish fashion today, except possibly as a means to prevent natural selection from removing altruism from existence. We are not our genes.
3CarlShulman12y
If all you mean by "sincere" is not explicitly thinking of something as deceptive, that seems right to me, but if "sincere" is supposed to mean "thoughts and actions can be well-predicted by utilitarianism" I disagree. Utilitarian arguments get selectively invoked and special exceptions made in response to typical moral sentiments, political alignments, personal and tribal loyalties, and so forth. I would say similar things about religious accounts of morality. Many people claim to buy Christian or Muslim or Buddhist ethics, but the explanatory power coming from these, as opposed to other cultural, local, and personal factors, seems limited.
3Ghatanathoah12y
I was focused more on the first meaning of "sincere." I think that utilitarians' abstract "far mode" ethical beliefs and thoughts are generally fairly well predicted by utilitarianism, but their "near mode" behaviors are not. I think that self-deception and akrasia are the main reasons there is such dissonance between their beliefs and behavior. I think a good analogy is belief in probability theory. I believe that doing probability calculations, and paying attention to the calculations of others, is the best way to determine the likelihood of something. Sometimes my behavior reflects this; I don't buy lottery tickets, for instance. But other times it does not. For example, I behave more cautiously when I'm out walking if I have recently read a vivid description of a crime, even if said crime occurred decades ago, or is fictional. I worry more about diseases with creepy symptoms than I do about heart disease. But I think I do sincerely "believe" in probability theory in some sense, even though it doesn't always affect my behavior.
0Strange712y
I agree with the main thrust of the argument, but such signaling would only apply to potential trading partners to whom you make a habit of speaking openly and honestly about your motives, or who are unusually clever and perceptive, or both.
-2timtyler12y
Altruism - at least in biology - normally means taking an inclusive fitness hit for the sake of others - e.g. see the definition in Trivers (1971). Proposing that altruism benefits the donor just means that you aren't talking about genuine altruism at all, but "fake" altruism - i.e. genetic selfishness going under a fancy name. Such "fake" altruism is easy to explain. The puzzle in biology is to do with genuine altruism. So: I am most interested in explaining behaviour. In this case, I think virtue signalling is pretty clearly the best fit. You are talking about conscious motives. These are challenging to investigate experimentally. You can ask people - but self-reporting is notoriously unreliable. Speculations about conscious motives are less interesting to me.
5Ghatanathoah12y
I thought it fairly obvious I was not using the biological definition of altruism. I was using the ethical definition of altruism - taking a self-interest hit for the sake of others' self-interest. It's quite possible for something to increase your inclusive fitness while harming your self-interest; unplanned pregnancy, for instance. I wasn't proposing that altruism benefited the donor. I was proposing that it benefited the donor's genes. That doesn't mean that it is "fake altruism," however, because self-interest and genetic interest are not the same thing. Self-interest refers to the things a person cares about and wants to accomplish, i.e. happiness, pleasure, achievement, love, fun; it doesn't have anything to do with genes. Essentially, what you have argued is: 1. Genuinely caring about other people might cause you to behave in ways that make your genes replicate more frequently. 2. Therefore, you don't really care about other people, you care about your genes. If I understand your argument correctly, it seems like you are committing some kind of reverse anthropomorphism. Instead of ascribing human goals and feelings to nonsentient objects, you are ascribing the metaphorical evolutionary "goals" of nonsentient objects (genes) to the human mind. That isn't right. Humans don't consciously or unconsciously directly act to increase our IGF; we simply engage in behaviors for their own sake that happened to increase our IGF in the ancestral environment.
2A1987dM12y
Relevant
0timtyler12y
So: I am talking about science, while you are talking about moral philosophy. Now that we have got that out of the way, there should be no misunderstanding - though in the rest of your post you seem keen to manufacture one.
0Ghatanathoah12y
I was talking about both. My basic point was that the reason humans evolved to care about morality and moral philosophy in the first place was because doing so made them very trustworthy, which enhanced their IGF by making it easier to obtain allies. My original reply was a request for you to clarify whether you meant that utilitarians are cynically pretending to care about utilitarianism in order to signal niceness, or whether you meant that humans evolved to care about niceness directly and care about utilitarianism because it is exceptionally nice (a "niceness superstimulus" in your words). I wasn't sure which you meant. It's important to make this clear when discussing signalling because otherwise you risk accusing people of being cynical manipulators when you don't really mean to.

A population of TDT agents with different mostly-selfish preferences should end up with actions that closely resemble total utilitarianism for a fixed population, but oppose the adding of people at the subsistence level followed by major redistribution. (Or so it seems to me. And don't ask me what UDT would do.)

I have no interest in defending utilitarianism, but I do have an interest in the total welfare (yes, I think such a concept can make sense) of sentient beings. The repugnance of the Repugnant Conclusion, I suggest, is a figment of your lack of imagination. When you imagine a universe with trillions of people whose lives are marginally worth living, you probably imagine people whose lives are a uniform grey, just barely closer to light than darkness. In other words, agonizingly boring lives. But this is unnecessary and prejudicial. Instead, imagine people... (read more)

In Austrian economics, using the framework of praxeology, the claim is made that preferences (the rough equivalent of utilities) cannot be mapped to cardinal values, but different states of the world are still well ordered by an individual's preferences, such that one world state can be said to be more or less desirable than another world state. This makes it impossible to numerically compare the preferences of two individuals except through the pricing/exchange mechanism of economics. E.g. would 1 billion happy people exchange their own death for the existe... (read more)
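A tiny sketch of the ordinal point made above, with invented numbers: any strictly increasing transformation of one person's "utility" assignment encodes exactly the same preference ordering, yet cross-person sums change with the representation chosen, which is why a purely ordinal framework treats interpersonal numeric comparison as undefined.

```python
# Three world states and one individual's made-up cardinal "utilities" for them.
states = ["X", "Y", "Z"]
u = {"X": 1.0, "Y": 2.0, "Z": 5.0}

# A strictly increasing transformation of u: same ordering, different numbers.
v = {s: 1000 * u[s] ** 3 for s in states}
print(sorted(states, key=u.get) == sorted(states, key=v.get))  # True: identical ranking

# But a second (made-up) individual's numbers summed with u or with v pick out
# different "socially best" states, so the sum carries no preference information.
other = {"X": 6.0, "Y": 3.0, "Z": 0.0}
print(max(states, key=lambda s: u[s] + other[s]))  # -> X under one representation
print(max(states, key=lambda s: v[s] + other[s]))  # -> Z under the other
```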