
A (small) critique of total utilitarianism

Post author: Stuart_Armstrong 26 June 2012 12:36PM

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and to create/give birth to another being of comparable happiness (or preference satisfaction or welfare). In fact, if one can kill a billion people to create a billion and one, one is morally compelled to do so. And this is true for real people, not just thought-experiment people - living people with dreams, aspirations, grudges and annoying or endearing quirks. To avoid causing extra pain to those left behind, it is better that you kill off whole families and communities, so that no one is left to mourn the dead. In fact, the most morally compelling act would be to kill off the whole of the human species, and replace it with a slightly larger population.

We have many real-world analogues to this thought experiment. For instance, it seems that there is only a small difference between the happiness of richer nations and poorer nations, while the former consume many more resources than the latter. Hence to increase utility we should simply kill off all the rich, and let the poor multiply to take their place (continually bumping off any of the poor who get too rich). Of course, the rich world also produces most of the farming surplus and the technological innovation that allow us to support a larger population. So we should aim to kill everyone in the rich world apart from farmers and scientists - and enough support staff to keep these professions running (Carl Shulman correctly points out that we may require most of the rest of the economy as "support staff". Still, it's very likely that we could kill off a significant segment of the population - those with the highest consumption relative to their impact on farming and science - and still "improve" the situation).

Even if it turns out to be problematic to implement in practice, a true total utilitarian should be thinking: "I really, really wish there were a way to do targeted killing of many people in the USA, Europe and Japan, large parts of Asia and Latin America and some parts of Africa - it makes me sick to the stomach to think that I can't do that!" Or maybe: "I really, really wish I could make everyone much poorer without affecting the size of the economy - I wake up at night with nightmares because these people remain above the poverty line!"

I won't belabour the point. I find those actions personally repellent, and I believe that nearly everyone finds them somewhat repellent or at least did so at some point in their past. This doesn't mean that it's the wrong thing to do - after all, the accepted answer to the torture vs dust speck dilemma feels intuitively wrong, at least the first time. It does mean, however, that there must be very strong countervailing arguments to balance out this initial repulsion (maybe even a mathematical theorem). For without that... how to justify all this killing?

Hence for the rest of this post, I'll be arguing that total utilitarianism is built on a foundation of dust, and thus provides no reason to go against your initial intuitive judgement in these problems. The points will be:

  1. Bayesianism and the fact that you should follow a utility function in no way compel you towards total utilitarianism. The similarity in names does not mean the concepts are on similarly rigorous foundations.
  2. Total utilitarianism is neither a simple nor an elegant theory. In fact, it is under-defined and arbitrary.
  3. The most compelling argument for total utilitarianism (basically the one that establishes the repugnant conclusion) is a very long chain of imperfect reasoning, so there is no reason for the conclusion to be solid.
  4. Considering the preferences of non-existent beings does not establish total utilitarianism.
  5. When considering competing moral theories, total utilitarianism does not "win by default" thanks to its large values as the population increases.
  6. Population ethics is hard, just as normal ethics is.

 

A utility function does not compel total (or average) utilitarianism

There are strong reasons to suspect that the best decision process is one that maximises expected utility for a particular utility function. Any process that does not do so leaves itself open to being money-pumped or otherwise taken advantage of. This point has been reiterated again and again on Less Wrong, and rightly so.

Your utility function must be over states of the universe - but that's the only restriction. The theorem says nothing further about the content of your utility function. If you prefer a world with a trillion ecstatic super-humans to one with a septillion subsistence farmers - or vice versa - then as long as you maximise your expected utility, the money pumps can't touch you, and the standard Bayesian arguments don't influence you to change your mind. Your values are fully rigorous.

For instance, in the torture vs dust speck scenario, average utilitarianism also compels you to choose torture, as do a host of other possible utility functions. A lot of arguments around this subject that may implicitly feel like arguments for total utilitarianism turn out to be nothing of the sort. For instance, avoiding scope insensitivity does not compel you to total utilitarianism, and you can perfectly well allow birth-death asymmetries or similar intuitions while remaining an expected utility maximiser.

 

Total utilitarianism is neither simple nor elegant, but arbitrary

Total utilitarianism is defined as maximising the sum of everyone's individual utility function. That's a simple definition. But what are these individual utility functions? Do people act like expected utility maximisers? In a word... no. In another word... NO. In yet another word... NO!

So what are these utilities? Are they the utility that the individuals "should have"? According to what, and whose, criteria? Is it "welfare"? How is that defined? Is it happiness? Again, how is that defined? Is it preferences? On what scale? And what if the individual disagrees with the utility they are supposed to have? What if their revealed preferences are different again?

There are various ways to start resolving these problems, and philosophers have spent a lot of ink and time doing so. The point remains that total utilitarianism cannot claim to be a simple theory if the objects it sums over are so poorly and controversially defined.

And the sum itself is a huge problem. There is no natural scale on which to compare utility functions. Divide one utility function by a billion, multiply the other by e^π, and they are still perfectly valid utility functions. In a study group at the FHI, we've been looking at various ways of combining utility functions - equivalently, of doing interpersonal utility comparisons (IUC). It turns out to be very hard, there seems to be no natural way of doing it, and a lot has been written about this too, concluding little. Unless your theory comes with a particular IUC method, the only way of summing these utilities is to make an essentially arbitrary choice of scale for each individual before summing. Thus standard total utilitarianism is an arbitrary sum of ill-defined, non-natural objects.

Why then is it so popular? Well, one reason is that there are models that make use of something like total utilitarianism to great effect. Classical economic theory, for instance, models everyone as perfectly rational expected utility maximisers. It gives good predictions - but it remains a model, with a domain of validity. You wouldn't conclude from that economic model that, say, mental illnesses don't exist. Similarly, modelling each life as having the same value and maximising expected lives saved is sensible and intuitive in many scenarios - but not necessarily all.

Maybe if we had a bit more information about the affected populations, we could use a more sophisticated model, such as one incorporating quality-adjusted life years (QALYs). Or maybe we could let other factors affect our thinking - what if we had to choose between saving a population of 1000 versus a population of 1001, with the same average QALYs, but where the first set contained the entire Awá tribe/culture of 300 people, and the second was made up of representatives from much larger, much more culturally replaceable ethnic groups? Should we let that influence our decision? Well, maybe we should, maybe we shouldn't, but it would be wrong to say "well, I would really like to save the Awá, but the model I settled on earlier won't allow me to, so I'd best follow the model". The models are there precisely to model our moral intuitions (the clue is in the name), not to freeze them.

 

The repugnant conclusion is at the end of a flimsy chain

There is a seemingly sound argument for the repugnant conclusion, which goes some way towards making total utilitarianism plausible. It goes like this:

  1. Start with a population of very happy/utilitied/welfared/preference satisfied people.
  2. Add other people whose lives are worth living, but whose average "utility" is less than that of the initial population.
  3. Redistribute "utility" in an egalitarian way across the whole population, increasing the average a little as you do so (but making sure the top rank have their utility lowered).
  4. Repeat as often as required.
  5. End up with a huge population whose lives are barely worth living.

If all these steps increase the quality of the outcome (and it seems intuitively that they do), then the end state must be better than the starting state, agreeing with total utilitarianism. So, what could go wrong with this reasoning? Well, as seen before, the term "utility" is very much undefined, as is its scale - hence "egalitarian" is extremely undefined. So this argument is not mathematically precise; its rigour is illusory. And when you recast the argument in qualitative terms, as you must, it becomes much weaker.

Going through the iteration, there will come a point when the human world loses its last anime, its last opera, its last copy of the Lord of the Rings, its last mathematics, its last online discussion board, its last football game - anything that might cause more-than-appropriate enjoyment. At that stage, would you be entirely sure that the loss was worthwhile, in exchange for a weakly defined "more equal" society? More to the point, would you be sure that, when iterating this process billions of times, every redistribution will be an improvement? This is a conjunctive statement, so you have to be nearly entirely certain of every link in the chain if you want to believe the outcome. And, to reiterate, these links cannot be reduced to simple mathematical statements - you have to be certain that each step is qualitatively better than the previous one.
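The fragility of such long conjunctive chains is easy to illustrate numerically. A sketch with purely hypothetical numbers: even if each individual redistribution step is judged an improvement with probability 1 - 10^-9, a billion-step chain leaves only about 37% confidence in the overall conclusion.

```python
import math

# Hypothetical numbers: confidence that any single redistribution
# step really is an improvement, and the number of steps in the chain.
per_step_confidence = 1 - 1e-9
steps = 10**9

# Probability that *every* step of the conjunction holds.
chain_confidence = per_step_confidence ** steps

# (1 - 1/n)^n tends to 1/e, so this is roughly exp(-1), about 0.37
print(chain_confidence)
```

And these steps are qualitative judgements, not mathematical statements, so per-step confidence anywhere near 1 - 10^-9 is wildly optimistic; with any realistic figure, confidence in the conclusion collapses to essentially zero.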

And you also have to be certain that your theory does not allow path dependency. One can take the perfectly valid position that "If there were an existing poorer population, then the right thing to do would be to redistribute wealth, and thus lose the last copy of Akira. However, currently there is no existing poor population, hence I would oppose it coming into being, precisely because it would result in the loss of Akira." You can reject this type of reasoning, and a variety of others that block the repugnant conclusion at some stage of the chain (the Stanford Encyclopedia of Philosophy has a good entry on the Repugnant Conclusion and the arguments surrounding it). But most reasons for doing so already presuppose total utilitarianism. In that case, you cannot use the above as an argument for your theory.

 

Hypothetical beings have hypothetical (and complicated) things to say to you

There is another major strand of argument for total utilitarianism, which claims that we owe it to non-existent beings to satisfy their preferences, that they would prefer to exist rather than remain non-existent, and hence we should bring them into existence. How does this argument fare?

First of all, it should be emphasised that one is free to accept or reject that argument without any fear of inconsistency. If one maintains that never-existent beings have no relevant preferences, then one will never stumble over a problem. They don't exist, they can't make decisions, they can't contradict anything. In order to raise them to the point where their decisions are relevant, one has to raise them to existence, in reality or in simulation. By the time they can answer "would you like to exist?", they already do, so you are talking about whether or not to kill them, not whether or not to let them exist.

But secondly, it seems that the "non-existent beings" argument is often advanced for the sole purpose of arguing for total utilitarianism, rather than as a defensible position in its own right. Rarely are its implications analysed. What would a proper theory of non-existent beings look like?

Well, for a start the whole happiness/utility/preference problem comes back with extra sting. It's hard enough to make a utility function out of real world people, but how to do so with hypothetical people? Is it an essentially arbitrary process (dependent entirely on "which types of people we think of first"), or is it done properly, teasing out the "choices" and "life experiences" of the hypotheticals? In that last case, if we do it in too much detail, we could argue that we've already created the being in simulation, so it comes back to the death issue.

But imagine that we've somehow extracted a utility function from the preferences of non-existent beings. Apparently, they would prefer to exist rather than not exist. But is this true? There are many people in the world who would prefer not to commit suicide, but would not mind much if external events ended their lives - they cling to life as a habit. Presumably non-existent versions of them "would not mind" remaining non-existent.

Even for those that would prefer to exist, we can ask questions about the intensity of that desire, and how it compares with their other desires. For instance, among these hypothetical beings, some would be mothers of hypothetical infants, leaders of hypothetical religions, inmates of hypothetical prisons, and would only prefer to exist if they could bring/couldn't bring the rest of their hypothetical world with them. But this is ridiculous - we can't bring the hypothetical world with them, they would grow up in ours - so are we only really talking about the preferences of hypothetical babies, or hypothetical (and non-conscious) foetuses?

If we do look at adults, bracketing the issue above, then we get some who would prefer not to exist, as long as certain others do - or conversely, who would prefer not to exist unless certain others also don't exist. How should we take that into account? Assuming the universe is infinite, any hypothetical being would exist somewhere. Is mere existence enough, or do we need a large measure or density of existence? Do we need them to exist close to us? Are their own preferences relevant - i.e. do we only have a duty to bring into the world those beings that would desire to exist in multiple copies everywhere? Or do we feel these already have "enough existence", and select the under-counted beings instead? What if very few hypothetical beings are total utilitarians - is that relevant?

On a more personal note, every time we make a decision, we eliminate a particular being. We can no longer be the person who took the other job offer, or who read the other book at that time and place. As these differences accumulate, we diverge quite a bit from what we could have been. When we do so, do we feel that we're killing off these extra hypothetical beings? Why not? Should we be compelled to lead double lives, assuming two (or more) completely separate identities, to increase the number of beings in the world? If not, why not?

These are some of the questions that a theory of non-existent beings would have to grapple with, before it can become an "obvious" argument for total utilitarianism.

 

Moral uncertainty: total utilitarianism doesn't win by default

An argument that I have met occasionally is that while other ethical theories such as average utilitarianism, birth-death asymmetry, path dependence, preferences of non-loss of culture, etc... may have some validity, total utilitarianism wins as the population increases because the others don't scale in the same way. By the time we reach the trillion trillion trillion mark, total utilitarianism will completely dominate, even if we gave it little weight at the beginning.

But this is the wrong way to compare competing moral theories. Just as different people's utilities don't have a common scale, different moral utilities don't have a common scale. For instance, would you say that square-total utilitarianism is certainly wrong? This theory is simply total utilitarianism further multiplied by the population; it would correspond roughly to the number of connections between people. Or what about exponential-square-total utilitarianism? This would correspond roughly to the set of possible connections between people. As long as we think that exponential-square-total utilitarianism is not certainly completely wrong, then the same argument as above would show it quickly dominating as population increases.

Or what about 3^^^3 average utilitarianism - which is simply average utilitarianism, multiplied by 3^^^3? Obviously that example is silly - we know that rescaling shouldn't change anything about the theory. But similarly, dividing total utilitarianism by 3^^^3 shouldn't change anything, so total utilitarianism's scaling advantage is illusory.
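This rescaling invariance can be made concrete: an expected utility maximiser's choice is unchanged by multiplying its utility function by any positive constant. A minimal sketch with made-up lotteries:

```python
# Made-up lotteries: each is a list of (probability, utility) pairs.
lotteries = {
    "A": [(0.5, 10), (0.5, 0)],  # expected utility 5
    "B": [(1.0, 4)],             # expected utility 4
}

def best_choice(lots, scale=1.0):
    """Pick the lottery with highest expected utility, after rescaling
    every utility by the positive constant `scale`."""
    def expected_utility(lottery):
        return sum(p * (scale * u) for p, u in lottery)
    return max(lots, key=lambda name: expected_utility(lots[name]))

# Dividing all utilities by a huge constant (standing in for 3^^^3)
# leaves the choice unchanged.
assert best_choice(lotteries, scale=1.0) == best_choice(lotteries, scale=1e-300)
```

The same invariance holds for multiplication by 3^^^3, which is why a theory's raw numerical scale, on its own, tells you nothing about how much weight it should get.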

As mentioned before, comparing different utility functions is a hard and subtle process. One method that seems to have surprisingly nice properties (to such an extent that I recommend always using it as a first try) is to normalise the lowest attainable utility to zero and the highest attainable utility to one, multiply by the weight you give to the theory, and then add the normalised utilities together.

For instance, assume you equally valued average utilitarianism and total utilitarianism, giving them both weights of one (and that you had solved all the definitional problems above). Among the choices you were facing, the worst outcome for both theories is an empty world. The best outcome for average utilitarianism would be ten people with an average "utility" of 100. The best outcome for total utilitarianism would be a quadrillion people with an average "utility" of 1. Then how would either of those compare to ten trillion people with an average utility of 60? Well, the normalised utility of this outcome for the average utilitarian is 60/100 = 0.6, while for the total utilitarian it's also 0.6, and 0.6 + 0.6 = 1.2. This is better than the utility for the small world (1 + 10^-12) or the large world (0.01 + 1), so it beats either of the extremal choices.
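The arithmetic above can be checked with a short sketch (all numbers are the hypothetical ones from the example):

```python
# Three candidate worlds: (population, average "utility").
worlds = {
    "small": (10, 100),        # best for average utilitarianism
    "large": (10**15, 1),      # best for total utilitarianism
    "middle": (10**13, 60),
}

def average_score(pop, avg):
    return avg

def total_score(pop, avg):
    return pop * avg

# Normalise each theory so the worst attainable outcome (the empty
# world, score 0) maps to 0 and its best attainable outcome maps to 1.
avg_best = max(average_score(*w) for w in worlds.values())
tot_best = max(total_score(*w) for w in worlds.values())

def combined(pop, avg, w_avg=1.0, w_tot=1.0):
    return (w_avg * average_score(pop, avg) / avg_best
            + w_tot * total_score(pop, avg) / tot_best)

scores = {name: combined(*w) for name, w in worlds.items()}
# The middle world scores 0.6 + 0.6 = 1.2, beating both extremes.
```

The design choice doing the work is the normalisation: each theory's score is bounded in [0, 1] before weighting, so no theory can dominate merely by having numerically larger raw values.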

Extending this method, we can bring in such theories as exponential-square-total utilitarianism (probably with small weights!), without needing to fear that it will swamp all other moral theories. And with this normalisation (or similar ones), even small weights to moral theories such as "culture has some intrinsic value" will often prevent total utilitarianism from walking away with all of the marbles.

 

(Population) ethics is still hard

What is the conclusion? At Less Wrong, we're used to realising that ethics is hard, that value is fragile, and that there is no single easy moral theory to safely program an AI with. But it seemed for a while that population ethics might be different - that there might be natural and easy ways to determine what to do with large groups, even though we couldn't decide what to do with individuals. I've argued strongly here that this is not the case - that population ethics remains hard, and that we have to figure out what theory we want without access to easy shortcuts.

But in another way it's liberating. To those who are mainly total utilitarians but internally doubt that a world with infinitely many barely happy people surrounded by nothing but "muzak and potatoes" is really among the best of the best - well, you don't have to convince yourself of that. You may choose to believe it, or you may choose not to. No voice in the sky or in the math will force you either way. You can start putting together a moral theory that incorporates all your moral intuitions - those that drove you to total utilitarianism, and those that don't quite fit in that framework.

Comments (237)

Comment author: CarlShulman 25 June 2012 09:29:38PM 20 points

For instance, it seems that there is only a small difference between the happiness of richer nations and poorer nations, while the first consume many more resources than the second. Hence to increase utility we should simply kill off all the rich, and let the poor multiply to take their place (continually bumping off any of the poor that gets too rich).

This empirical claim seems ludicrously wrong, which I find distracting from the ethical claims. Most people in rich countries (except for those unable or unwilling to work or produce kids who will) are increasing the rate of technological advance by creating demand for improved versions of products, paying taxes, contributing to the above-average local political cultures, and similar. Such advance dominates resource consumption in affecting the welfare of the global poor (and long-term welfare of future people). They make charitable donations or buy products that enrich people like Bill Gates and Warren Buffett who make highly effective donations, and pay taxes for international aid.

The scientists and farmers use thousands of products and infrastructure provided by the rest of society, and this neglects industry, resource extraction, and the many supporting sectors that make productivity in primary and secondary production so high (accountants, financial markets, policing, public health, firefighting...). Even "frivolous" sectors like Hollywood generate a lot of consumer surplus around the world (they watch Hollywood movies in sub-Saharan Africa), and sometimes create net rewards for working harder to afford more luxuries (though sometimes they may encourage leisure too much by a utilitarian standard).

Regarding other points:

fact that you should follow a utility function in no way compel you towards total utilitarianism

Yes, this is silly equivocation exacerbated by the use of similar-sounding words for several concepts, and one does occasionally see people making this error.

interpersonal utility comparisons (IUC)

The whole piece assumes preference utilitarianism, but much of it also applies to hedonistic utilitarianism: you need to make seemingly-arbitrary choices in interpersonal happiness/pleasure comparison as well.

When considering competing moral theories, total utilitarianism does not "win by default" thanks to its large values as the population increases.

I agree.

The most compelling argument for total utilitarianism (basically the one that establishes the repugnant conclusion), is a very long chain of imperfect reasoning, so there is no reason for the conclusion to be solid. Considering the preferences of non-existent beings does not establish total utilitarianism.

Maybe just point to the Stanford Encyclopedia of Philosophy entry and a few standard sources on this? This has been covered very heavily by philosophers, if not ad nauseam.

Comment author: Lukas_Gloor 25 June 2012 10:24:22PM 3 points

Whatever the piece assumes, I don't think it's preference utilitarianism because then the first sentence doesn't make sense:

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and to create/give birth to another being of comparable happiness.

Assuming most people have a preference to go on living, as well as various other preferences for the future, then killing them would violate all these preferences, and simply creating a new, equally happy being would still leave you with less overall utility, because all the unsatisfied preferences count negatively. (Or is there a version of preference utilitarianism where unsatisfied preferences don't count negatively?) The being would have to be substantially happier, or you'd need a lot more beings to make up for the unsatisfied preferences caused by the killing. Unless we're talking about beings that live "in the moment", where their preferences correspond to momentary hedonism.

Peter Singer wrote a chapter on killing and replaceability in Practical Ethics. His view is prior-existence, not total preference utilitarianism, but the points on replaceability apply to both.

Comment author: Stuart_Armstrong 26 June 2012 12:56:59PM 0 points

Maybe just point to the Stanford Encyclopedia of Philosophy entry and a few standard sources on this? This has been covered very heavily by philosophers, if not ad nauseam.

Will add a link. But I haven't yet seen my particular angle of attack on the repugnant conclusion, and it isn't in the Stanford Encyclopedia. The existence/non-existence question seems to have had more study, though.

Comment author: bryjnar 25 June 2012 05:54:11PM *  6 points

A utility function does not compel total (or average) utilitarianism

Does anyone actually think this? Thinking that utility functions are the right way to talk about rationality !=> utilitarianism. Or any moral theory, as far as I can tell. I don't think I've seen anyone on LW actually arguing that implication, although I think most would affirm the antecedent.

There is a seemingly sound argument for the repugnant conclusion, which goes some way towards making total utilitarianism plausible. It goes like this... If all these steps increase the quality of the outcome (and it seems intuitively that they do), then the end state must be better than the starting state, agreeing with total utilitarianism

This is the complete opposite of what I'd understood the point of that argument to be: as I understand it, it's claimed that the final state is clearly not of high utility, and so there is something wrong with total utilitarianism. Which is fine for what you're arguing, but you seem to have taken it a bit the wrong way around.

As for the mathematical rigour, there are some very nice impossibility theorems proved by Arrhenius (example) that make the kind of worries exemplified by the repugnant conclusion a lot more precise. They don't even require the problematic assumptions about utility functions that you point out: they're just about axiology (ranking possible outcomes). So they're actually independent problems for utilitarians.

I think a lot of the reason that utilitarians don't tend to feel terribly worried about the difficulty of interpersonal utility calculations is that we already do them. Every time you decide to let someone else have the last cookie because they'll enjoy it more, you just did a little IUC. Obviously, it's pretty unclear how to scale that up, but it gives a strong feeling that it ought to be possible, somehow.

Comment author: Yvain 28 June 2012 04:19:09AM *  4 points

Upvoted, but as someone who, without quite being a total utilitarian, at least hopes someone might be able to rescue total utilitarianism, I don't find much to disagree with here. Points 1, 4, 5, and 6 are arguments against certain claims that total utilitarianism should be obviously true, but not arguments that it doesn't happen to be true.

Point 2 states that total utilitarianism won't magically implement itself and requires "technology" rather than philosophy; that is, people have to come up with specific contingent techniques of estimating utility, rather than just reading it off via a simple method which can be proven mathematically perfect. But we have some Stone Age utility-comparing technologies like money and the popular vote, and QALYs might be metaphorically a Bronze Age technology. I suppose I take it on faith that there's a lot of room for more advanced technology before we hit mathematical limits.

That leaves the introductory paragraph and Point 3 as the only places I still disagree:

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and to create/give birth to another being of comparable happiness (or preference satisfaction or welfare).

In hedonic utilitarianism, yes. Are you making this claim for preference utilitarianism as well? If so, on what basis? If we don't give credit for creating potential people, isn't most people's preference not to be killed enough to stop preference utilitarians from killing them?

And you also have to be certain that your theory does not allow path dependency. One can take the perfectly valid position that "If there were an existing poorer population, then the right thing to do would be to redistribute wealth, and thus lose the last copy of Akira. However, currently there is no existing poor population, hence I would oppose it coming into being, precisely because it would result in the loss of Akira." You can reject this type of reasoning, and a variety of others that block the repugnant conclusion at some stage of the chain (the Stanford Encyclopedia of Philosophy has a good entry on the Repugnant Conclusion and the arguments surrounding it). But most reasons for doing so already presuppose total utilitarianism. In that case, you cannot use the above as an argument for your theory.

Can you explain this further? If we don't allow potential people to carry weight, and if we are preference rather than hedonic utilitarians, then the only thing we are checking when deciding to create all these new people is whether or not existing people would prefer to do so.

The fact that the repugnant conclusion has "repugnant" right in the name suggests that most people don't want it. Therefore if total utilitarianism is about satisfying the preferences of as many people as much as possible, and it results in a conclusion nobody prefers, that should be a red flag.

If existing people understand the repugnant conclusion, then they will understand that a likely consequence of creating all these people is that the world loses most of its culture and happiness, and when we aggregate their preferences they will vote against doing so.

So I don't see what you mean when you say this reasoning "pre-supposes total utilitarianism". It presupposes people's intuitive moral preference for a happy world full of culture over a just-barely-not-unhappy world without, and it pretends we can solve the aggregation problem, but where's the vicious self-reference?

Comment author: Lukas_Gloor 28 June 2012 03:03:25PM 5 points

If we don't allow potential people to carry weight, and if we are preference rather than hedonic utilitarians, then the only thing we are checking when deciding to create all these new people is whether or not existing people would prefer to do so.

That's Peter Singer's view, prior-existence instead of total. A problem here seems to be that creating a being in intense suffering would be ethically neutral, and if even the slightest preference for doing so exists, and if there were no resource trade-offs in regard to other preferences, then creating that miserable being would be the right thing to do. One can argue that in the first millisecond after creating the miserable being, one would be obliged to kill it, and that, foreseeing this, one ought not have created it in the first place. But that seems not very elegant. And one could further imagine creating the being somewhere unreachable, where it's impossible to kill it afterwards.

One can avoid this conclusion by axiomatically stating that it is bad to bring into existence a being with a "life not worth living". But that still leaves problems: for one thing, it seems ad hoc, and for another, it would then not matter whether one brings a happy child into existence or one with a neutral life, and that again seems highly counterintuitive.

The only way to solve this, as I see it, is to count all unsatisfied preferences negatively. You'd end up with negative total preference-utilitarianism, which usually has quite strong reasons against bringing beings into existence. Depending on how much pre-existing beings want to have children, it wouldn't necessarily entail complete anti-natalism, but the overall goal would at some point be a universe without unsatisfied preferences. Or is there another way out?

Comment author: Yvain 29 June 2012 12:19:07AM 2 points [-]

Thank you. Apparently total utilitarianism really is scary, and I had routed around it by replacing it with something more useable and assuming that was what everyone else meant when they said "total utilitarianism".

Comment author: Ghatanathoah 15 September 2013 06:38:13AM *  1 point [-]

The only way to solve this, as I see it, is to count all unsatisfied preferences negatively. You'd end up with negative total preference-utiltiarianism, which usually has quite strong reasons against bringing beings into existence.

A potential major problem with this approach has occurred to me, namely that people tend to have infinite or near-infinite preferences. We always want more. I don't see anything wrong with that, but it does create headaches for the ethical system under discussion.

The human race's insatiable desires make negative total preference-utilitarianism vulnerable to an interesting variant of the various problems of infinity in ethics. Once you've created a person, who then dies, it is impossible to do any further harm: there is already an infinite amount of unsatisfied preferences in the world from their existence and death. Creating more people will result in the same total amount of unsatisfied preferences as before: infinity. This would render negative utilitarianism always indifferent to whether one should create more people, which is obviously not what we want.
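The saturation argument can be sketched in a few lines. This is a toy model of my own, not from the discussion: it assumes each person's stock of unsatisfied preferences can be modeled as a single (possibly infinite) number.

```python
# Toy model: each list element is one person's quantity of unsatisfied preferences.
def total_unsatisfied(people):
    """Negative total preference-utilitarianism scores a world by this sum
    (lower is better)."""
    return sum(people)

world_a = [float('inf')]              # one person with insatiable desires
world_b = [float('inf')] * 1_000_000  # a million such people

# The sum saturates at infinity, so the theory cannot distinguish the worlds:
print(total_unsatisfied(world_a) == total_unsatisfied(world_b))  # True
```

Under this modeling assumption, any two worlds containing at least one insatiable person score identically, which is the indifference result described above.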

Even if you posit that our preferences are not infinite, but merely very large, this still runs into problems. I think most people, even anti-natalists, would agree that it is sometimes acceptable to create a new person in order to prevent the suffering of existing people. For instance, I think even an antinatalist would be willing to create one person who will live a life with what an upper-class 21st Century American would consider a "normal" amount of suffering, if doing so would prevent 7 billion people from being tortured for 50 years. But if you posit that the new person has a very large, but not infinite amount of preferences (say, a googol) then it's still possible for the badness of creating them to outweigh the torture of all those people. Again, not what we want.

Hedonic negative utilitarianism doesn't have this problem, but it's even worse: it implies we should painlessly kill everyone ASAP! Since most antinatalists I know believe death to be a negative thing, rather than a neutral thing, they must be at least partially preference utilitarians.

Now, I'm sure that negative utilitarians have some way around this problem. There wouldn't be so many passionate advocates for it if it could be killed by a logical conundrum like this. But I can't find any discussion of this problem after doing some searching on the topic. I'm really curious to know what the proposed solution is, and would appreciate it if someone told me.

Comment author: Mark_Lu 28 June 2012 08:30:19PM 1 point [-]

A problem here seems to be that creating a being in intense suffering would be ethically neutral

Well don't existing people have a preference about there not being such creatures? You can have preferences that are about other people, right?

Comment author: Lukas_Gloor 28 June 2012 08:47:12PM 2 points [-]

Sure, existing people tend to have such preferences. But hypothetically it's possible that they didn't, and the mere possibility is enough to bring down an ethical theory if you can show that it would generate absurd results.

Comment author: Mark_Lu 28 June 2012 09:10:44PM *  1 point [-]

This might be one reason why Eliezer talks about morality as a fixed computation.

P.S. Also, doesn't the being itself have a preference for not-suffering?

Comment author: Ghatanathoah 17 September 2012 06:39:29AM *  -1 points [-]

Or is there another way out?

One possibility might be phrasing it as "Maximize preference satisfaction for everyone who exists and ever will exist, but not for everyone who could possibly exist."

This captures the intuition that it is bad to create people who have low levels of preference satisfaction, even if they don't exist yet and hence can't object to being created, while preserving the belief that existing people have a right to not create new people whose existence would seriously interfere with their desires. It does this without implying anti-natalism. I admit that the phrasing is a little clunky and needs refinement, and I'm sure a clever enough UFAI could find some way to screw it up, but I think it's a big step towards resolving the issues you point out.

EDIT: Another possibility that I thought of is setting "creating new worthwhile lives" and "improving already worthwhile lives" as two separate values that have diminishing returns relative to each other. This is still vulnerable to some forms of repugnant-conclusion type arguments, but it totally eliminates what I think is the most repugnant aspect of the RC - the idea that a Malthusian society might be morally optimal.

Comment author: Stuart_Armstrong 28 June 2012 12:10:34PM *  1 point [-]

I suppose I take it on faith that there's a lot of room for more advanced technology before we hit mathematical limits.

Yes, yes, much progress can (and will) be made formalising our intuitions. But we don't need to assume ahead of time that the progress will take the form of "better individual utilities and definition of summation" rather than "other ways of doing population ethics".

In hedonic utilitarianism, yes. Are you making this claim for preference utilitarianism as well? If so, on what basis? If we don't give credit for creating potential people, isn't most people's preference not to be killed enough to stop preference utilitarians from killing them?

Yes, the act is not morally neutral in preference utilitarianism. In those cases, we'd have to talk about how many people we'd have to create with satisfiable preferences to compensate for that one death. You might not give credit for creating potential people, but preference total utilitarianism gives credit for satisfying more preferences - and if creating more people is a way of doing this, then it's in favour.

If existing people understand the repugnant conclusion, then they will understand that a likely consequence of creating all these people is that the world loses most of its culture and happiness, and when we aggregate their preferences they will vote against doing so.

This is not preference total utilitarianism. It's something like "satisfying the maximal number of preferences of currently existing people". In fact, it's closer to preference average utilitarianism (satisfy the current majority preference) than to total utilitarianism (probably not exactly that either; maybe a little more path dependency).

So I don't see what you mean when you say this reasoning "pre-supposes total utilitarianism".

Most reasons for rejecting the reasoning that blocks the repugnant conclusion pre-suppose total utilitarianism. Without the double negative: most justifications of the repugnant conclusion pre-suppose total utilitarianism.

Comment author: Mark_Lu 28 June 2012 12:58:27PM 4 points [-]

preference total utilitarianism gives credit for satisfying more preferences - and if creating more people is a way of doing this, then it's in favour

Shouldn't we then just create people with simpler and easier to satisfy preferences so that there's more preference-satisfying in the world?

Comment author: Lukas_Gloor 28 June 2012 02:48:47PM 0 points [-]

Indeed, that's a very counterintuitive conclusion. It's the reason why most preference-utilitarians I know hold a prior-existence view.

Comment author: [deleted] 09 July 2012 09:57:56PM 0 points [-]

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness (or preference satisfaction or welfare).

In hedonic utilitarianism, yes.

Even in hedonistic utilitarianism, it is an almost misleading simplification. There are crucial differences between killing a person and not birthing a new one: Most importantly, one is seen as breaking the social covenant of non-violence, while the other is not. One disrupts pre-existing social networks, the other does not. One destroys an experienced educated brain, the other does not. Endorsing one causes social distrust and strife in ways the other does not.

A better claim might be: It is morally neutral in hedonistic utilitarianism to create a perfect copy of a person and painlessly and unexpectedly destroy the original. It's a more accurate claim, and I personally would accept it.

Comment author: Ghatanathoah 16 October 2013 08:19:09PM -1 points [-]

Even in hedonistic utilitarianism, it is an almost misleading simplification. There are crucial differences between killing a person and not birthing a new one: Most importantly, one is seen as breaking the social covenant of non-violence, while the other is not. One disrupts pre-existing social networks, the other does not. One destroys an experienced educated brain, the other does not. Endorsing one causes social distrust and strife in ways the other does not.

These are all practical considerations. Most people believe it is wrong in principle to kill someone and replace them with a being of comparable happiness. You don't see people going around saying:

"Look at that moderately happy person. It sure is too bad that it's impractical to kill them and replace them with a slightly happier person. The world would be a lot better if that were possible."

I also doubt that an aversion to violence is what prevents people from endorsing replacement either. You don't see people going around saying:

"Man, I sure wish that person would get killed in a tornado or a car accident. Then I could replace them without breaking any social covenants."

I believe that people reject replacement because they see it as a bad consequence, not because of any practical or deontological considerations. I wholeheartedly endorse such a rejection.

A better claim might be: It is morally neutral in hedonistic utilitarianism to create a perfect copy of a person and painlessly and unexpectedly destroy the original. It's a more accurate claim, and I personally would accept it.

The reason that that claim seems acceptable is because, under many understandings of how personal identity works, if a copy of someone exists, they aren't really dead. You killed a piece of them, but there's still another piece left alive. As long as your memories, personality, and values continue to exist you still live.

The OP makes it clear that what they mean is that total utilitarianism (hedonic and otherwise) maintains that it is morally neutral to kill someone and replace them with a completely different person who has totally different memories, personality, and values, providing the second person is of comparable happiness to the first. I believe any moral theory that produces this result ought to be rejected.

Comment author: orbenn 27 June 2012 05:19:02PM 3 points [-]

The problem here seems to be that the theories don't take all the things we value into account. It's therefore less certain whether their functions actually match our morals. If you calculate utility using only some of your utility values, you're not going to get the correct result. If you're trying to sum the set {1,2,3,4} but you only use 1, 2 and 4 in the calculation, you're going to get the wrong answer. Outside of special cases like "multiply each item by zero", it doesn't matter whether you add, subtract or divide; the answer will still be wrong. For example, the calculations given for total utilitarianism fail to include values for continuity of experience.
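The arithmetic analogy can be made concrete in a trivial sketch (my own, with "continuity of experience" standing in for the dropped value):

```python
values = [1, 2, 3, 4]   # all the things we actually value
used = [1, 2, 4]        # same list with one value (say, continuity of
                        # experience) accidentally left out

print(sum(values))  # 10, the correct total
print(sum(used))    # 7: wrong, and no choice of aggregation over the
                    # incomplete input can recover the missing term
```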

This isn't to say that ethics are easy, but we're going to have a devil of a time testing them with impoverished input.

Comment author: steven0461 25 June 2012 07:29:37PM *  8 points [-]

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness.

I stopped reading here. To me, "total utilitarianism" means maximizing the sum of the values of individual lives. There's nothing forcing a total utilitarian to value a life by adding the happiness experienced in each moment of the life, without further regard to how the moments fit together (e.g. whether they fulfill someone's age-old hopes).

In general, people seem to mean different things by "utilitarianism", so any criticism needs to spell out what version of utilitarianism it's attacking, and acknowledge that the particular version of utilitarianism may not include everyone who self-identifies as a utilitarian.

Comment author: Lukas_Gloor 25 June 2012 08:28:43PM *  -1 points [-]

But isn't the "values of individual lives" preference-utilitarianism (which often comes as prior-existence instead of "total")? I'm confused; it seems like there are several definitions circulating. I haven't encountered this kind of total utilitarianism on the felicifia utilitarianism forum. The quoted conclusion about killing and replacing people is accurate, according to the definition that is familiar to me.

Comment author: steven0461 25 June 2012 08:36:57PM 0 points [-]

But isn't the "values of individual lives" preference-utilitarianism

Not unless the value of a life is proportional to the extent to which the person's preferences are satisfied.

The quoted conclusion about killing people and replacing people is accurate, according the definition that is familiar to me.

What would you call the view I mentioned, if not total utilitarianism?

Comment author: Lukas_Gloor 25 June 2012 08:47:10PM *  -1 points [-]

Sounds like total preference-utilitarianism, instead of total hedonistic utilitarianism. Would this view imply that it is good to create beings whose preferences are satisfied? If yes, then it's total PU. If no, then it might be prior-existence PU. The original article doesn't specify explicitly whether it means hedonistic or preference utilitarianism, but the example given about killing only works for hedonistic utilitarianism, which is why I assumed that this is what's meant. However, somewhere else in the article, it says

Total utilitarianism is defined as maximising the sum of everyone's individual utility function.

And that seems more like preference-utilitarianism again. So something doesn't work out here.

As a side note, I've actually never encountered a total preference-utilitarian, only prior-existence ones (like Peter Singer). But it's a consistent position.

Comment author: steven0461 25 June 2012 09:51:52PM 1 point [-]

But it's not preference utilitarianism. In evaluating whether someone leads a good life, I care about whether they're happy, and I care about whether their preferences are satisfied, but those aren't the only things I care about. For example, I might think it's a bad thing if a person lives the same day over and over again, even if it's what the person wants and it makes the person happy. (Of course, it's a small step from there to concluding it's a bad idea when different people have the same experiences, and that sort of value is hard to incorporate into any total utilitarian framework.)

Comment author: Will_Newsome 27 June 2012 01:53:29AM *  3 points [-]

I think you might want to not call your ethical theory utilitarianism. Aquinas' ethics also emphasize the importance of the common welfare and loving thy neighbor as thyself, yet AFAIK no one calls his ethics utilitarian.

Comment author: steven0461 27 June 2012 04:15:33AM *  3 points [-]

I think maybe the purest statement of utilitarianism is that it pursues "the greatest good for the greatest number". The word "for" is important here. Something that improves your quality of life is good for you. Clippy might think (issues of rigid designators in metaethics aside) that paperclips are good without having a concept of whether they're good for anyone, so he's a consequentialist but not a utilitarian. An egoist has a concept of things being good for people, but chooses only those things that are good for himself, not for the greatest number; so an egoist is also a consequentialist but not a utilitarian. But there's a pretty wide range of possible concepts of what's good for an individual, and I think that entire range should be compatible with the term "utilitarian".

Comment author: steven0461 27 June 2012 02:29:23AM *  2 points [-]

It doesn't make sense to me to count maximization of total X as "utilitarianism" when X is pleasure or preference satisfaction, but not when X is some other measure of quality of life. It doesn't seem like that would cut reality at the joints. I don't necessarily hold the position I described, but I think most criticisms of it are misguided, and it's natural enough to deserve a short name.

Comment author: Lukas_Gloor 25 June 2012 10:35:44PM 0 points [-]

I see, interesting. That means you bring in a notion independent of both the person's experiences and preferences. You bring in a particular view on value (e.g. that life shouldn't be repetitious). I'd just call this a consequentialist theory where the exact values would have to be specified in the description, instead of utilitarianism. But that's just semantics, as you said initially, it's important that we specify what exactly we're talking about.

Comment author: Vladimir_M 26 June 2012 01:30:36AM *  9 points [-]

There is no natural scale on which to compare utility functions. [...] Unless your theory comes with a particular [interpersonal utility comparison] method, the only way of summing these utilities is to do an essentially arbitrary choice for each individual before summing. Thus standard total utilitarianism is an arbitrary sum of ill defined, non-natural objects.

This, in my opinion, is by itself a decisive argument against utilitarianism. Without these ghostly "utilities" that are supposed to be measurable and comparable interpersonally, the whole concept doesn't even begin to make sense. And yet the problem is routinely and nonchalantly ignored, even here, where people pride themselves on fearless and consistent reductionism.

Note that the problem is much more fundamental than just the mathematical difficulties and counter-intuitive implications of formal utilitarian theories. Even if there were no such problems, it would still be the case that the whole theory rests on an entirely imaginary foundation. Ultimately, it's a system that postulates some metaphysical entities and a categorical moral imperative stated in terms of the supposed state of these entities. Why would we privilege that over systems that postulate metaphysical entities and associated categorical imperatives of different kinds, like e.g. traditional religions?

(If someone believes that there is a way these interpersonally comparable utilities could actually be grounded in physical reality, I'd be extremely curious to hear it.)

Comment author: Jayson_Virissimo 26 June 2012 05:21:09AM 3 points [-]

(If someone believes that there is a way these interpersonally comparable utilities could actually be grounded in physical reality, I'd be extremely curious to hear it.)

I asked about this before in the context of one of Julia Galef's posts about utilitarian puzzles and received several responses. What is your evaluation of the responses (personally, I was very underwhelmed)?

Comment author: Vladimir_M 26 June 2012 06:36:50AM *  2 points [-]

The only reasonable attempt at a response in that sub-thread is this comment. I don't think the argument works, though. The problem is not just disagreement between different people's intuitions, but also the fact that humans don't do anything like utility comparisons when it comes to decisions that affect other people. What people do in reality is intuitive folk ethics, which is basically virtue ethics, and has very little concern with utility comparisons.

That said, there are indeed some intuitions about utility comparison, but they are far too weak, underspecified, and inconsistent to serve as a basis for extracting an interpersonal utility function, even if we ignore disagreements between people.

Comment author: Will_Sawin 26 June 2012 10:08:58PM 0 points [-]

Intuitive utilitarian ethics are very helpful in everyday life.

Comment author: Salemicus 29 June 2012 12:52:35AM *  3 points [-]

There is the oft-repeated anecdote of the utilitarian moral philosopher weighing up whether to accept a job at Columbia. He would get more money, but it would uproot his family, though it might help his career... a familiar kind of moral dilemma. Asking his colleague for advice, he got told "Just maximise total utility." "Come on," he is supposed to have replied, "this is serious!"

I struggle to think of any moral dilemma I have faced where utilitarian ethics even provide *a practical framework for addressing the problem*, let alone a potential answer.

Comment author: Will_Newsome 01 July 2012 11:56:49PM 3 points [-]

That anecdote is about a decision theorist, not a moral philosopher. The dilemma you describe is a decision theoretic one, not a moral utilitarian one.

Comment author: gwern 29 June 2012 01:10:26AM 2 points [-]
Comment author: Will_Sawin 29 June 2012 02:52:36AM 0 points [-]

Writing out costs and benefits is a technique that is sometimes helpful.

Comment author: Salemicus 30 June 2012 02:32:16PM 1 point [-]

Sure, but "costs" and "benefits" are themselves value-laden terms, which depend on the ethical framework you are using. And then comparing the costs and the benefits is itself value-laden.

In other words, people using non-utilitarian ethics can get plenty of value out of writing down costs and benefits. And people using utilitarian ethics don't necessarily get much value (doesn't really help the philosopher in the anecdote). This is therefore not an example of how utilitarian ethics are useful.

Comment author: Will_Sawin 30 June 2012 04:45:34PM 0 points [-]

Writing down costs and benefits is clearly an application of consequentialist ethics, unless things are so muddied that any action might be an example of any ethic. Consequentialist ethics need not be utilitarian, true, but they are usually pretty close to utilitarian. Certainly closer to utilitarianism than to virtue ethics.

Comment author: Salemicus 30 June 2012 08:21:26PM 2 points [-]

Writing down costs and benefits is clearly an application of consequentialist ethics.

No, because "costs" and "benefits" are value-laden terms.

Suppose I am facing a standard moral dilemma: should I give my brother proper funerary rites, even though the city's ruler has forbidden it? So I take your advice and write down costs and benefits. Costs - breaching my duty to obey the law, punishment for me, possible reigniting of the city's civil war. Benefits - upholding my duty to my family, proper funeral rites for my brother, restored honour. By writing this down I haven't committed to any ethical system; all I've done is clarify what's at stake. For example, if I'm a deontologist, perhaps this helps clarify that it comes down to duty to the law versus duty to my family. If I'm a virtue ethicist, perhaps this shows it's about whether I want to be the kind of person who is loyal to their family above tawdry concerns of politics, or the kind of person who is willing to put their city above petty personal concerns. This even works if I'm just an egoist with no ethics: is the suffering of being imprisoned in a cave greater or less than the suffering I'll experience knowing my brother's corpse is being eaten by crows?

Ironically, the only person this doesn't help is the utilitarian, because he has absolutely no way of comparing the costs and the benefits - "maximise utility" is a slogan, not a procedure.

Comment author: Will_Sawin 30 June 2012 10:40:05PM 3 points [-]

What are you arguing here? First you argue that "just maximize utility" is not enough to make a decision. This is of course true, since utilitarianism is not a fully specified theory. There are many different utilitarian systems of ethics, just as there are many different deontological ethics and many different egoist ethics.

Second, you argue that working out the costs and benefits is not an indicator of consequentialism. Perhaps this is not perfectly true, but if you follow these arguments to their conclusion, then basically nothing is an indicator of any ethical system. Writing a list of costs and benefits, as these terms are usually understood, focuses one's attention on the consequences of the action rather than the reasons for the action (as the virtue ethicists care about) or the rules mandating or forbidding an action (as the deontologists care about). Yes, the users of different ethical theories can use pretty much any tool to help them decide, but some tools are more useful for some theories, because they push your thinking in the directions that theory considers relevant.

Are you arguing anything else?

Comment author: Vladimir_M 26 June 2012 11:09:46PM 2 points [-]

Could you provide some concrete examples?

Comment author: Will_Sawin 29 June 2012 02:48:49AM 3 points [-]

I am thinking about petty personal disputes, say when one person finds something another person does annoying. A common gut reaction is to immediately start staking out territory about what is just and what is virtuous and so on, while the correct thing to do is to focus on the concrete benefits and costs of actions. The main reason this is better is not that it maximizes utility but that it minimizes argumentativeness.

Another good example is competition for a resource. Sometimes one feels like one deserves a fair share and this is very important, but if you have no special need for it, nor are there significant diminishing marginal returns, then it's really not that big of a deal.

In general, intuitive deontological tendencies can be jerks sometimes, and utilitarianism fights that.

Comment author: army1987 27 June 2012 09:04:34PM 0 points [-]
Comment author: Viliam_Bur 26 June 2012 08:43:42AM 1 point [-]

Thanks for the link, I am very underwhelmed too.

If I understand it correctly, one suggestion is equivalent to choosing some X and re-scaling everyone's utility function so that X has value 1. The obvious problem is the arbitrary choice of X, and the fact that on some people's original scale X may have positive, negative, or zero value.

The other suggestion is equivalent to choosing a hypothetical person P with infinite empathy towards all people, and using P's utility function as the absolute utility. I am not sure about this, but it seems to me that the result depends on P's own preferences, and this cannot be fixed, because without preferences there could be no empathy.
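The failure mode of the first (rescaling) suggestion can be sketched concretely. This is my own toy code, not from the linked thread; `rescale`, `alice`, and `bob` are illustrative names.

```python
def rescale(utility, x):
    """Normalize a person's utility function so reference outcome x has value 1."""
    ux = utility(x)
    if ux == 0:
        # If X is worthless on this person's scale, no valid rescaling exists.
        raise ValueError("X has zero value on this person's scale")
    return lambda outcome: utility(outcome) / ux

alice = lambda o: {"X": 2.0, "Y": 4.0}[o]   # values X positively
bob = lambda o: {"X": -2.0, "Y": 4.0}[o]    # values X negatively

alice_scaled = rescale(alice, "X")
bob_scaled = rescale(bob, "X")

print(alice_scaled("Y"))  # 2.0, as intended
print(bob_scaled("Y"))    # -2.0: dividing by a negative reference value flips
                          # Bob's whole scale, reversing his preference ordering
```

The zero case has no fix at all, and the negative case silently corrupts the aggregation, which is the arbitrariness problem described above.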

Comment author: private_messaging 26 June 2012 08:00:28AM 2 points [-]

And yet the problem is commonly ignored routinely and nonchalantly, even here, where people pride themselves on fearless and consistent reductionism.

Yes. To be honest, it looks like the local version of reductionism takes "everything is reducible" in a declarative sense, declaring that the concepts it uses are reducible regardless of their actual reducibility.

Comment author: David_Gerard 26 June 2012 01:35:04PM 4 points [-]
Comment author: private_messaging 26 June 2012 03:05:37PM *  1 point [-]

Thanks! That's spot on. That's what I think many of those 'utility functions' here are. The number of paperclips in the universe, too. I haven't seen anything like that reduced to a formal definition of any kind.

The way humans actually decide on actions is by evaluating, in a world-model, the difference the action would cause, with everything being very partial depending on available time. Probabilities are rarely possible to employ in the world-model because the combinatorial space explodes very quickly (also, Bayesian propagation on arbitrary graphs is NP-complete, in the very practical sense of being computationally expensive). Hence there isn't some utility function deep inside governing the choices. Doing one's best is mostly about putting limited computing time to the best use.

Then there's some odd use of abstractions, like: every agent can be represented with a utility function, therefore whatever we say about utilities is relevant. Never mind that this utility function is trivial (1 for doing what the agent chooses, 0 otherwise) and everything just gets tautological.
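The tautological representation mentioned here can be written out explicitly (a sketch of my own):

```python
# Any agent trivially "maximizes" the utility function defined after the fact
# to be 1 at whatever it actually chose and 0 elsewhere.
def trivial_utility(chosen_action):
    return lambda action: 1 if action == chosen_action else 0

u = trivial_utility("eat_cake")

# The representation is vacuous: it predicts nothing about the choice,
# because the choice itself was used to define it.
print(max(["eat_cake", "diet"], key=u))  # eat_cake
```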

Comment author: Lukas_Gloor 26 June 2012 01:40:59PM 0 points [-]

This, in my opinion, is by itself a decisive argument against utilitarianism.

You mean against preference-utilitarianism.

The vast majority of utilitarians I know are hedonistic utilitarians, where this criticism doesn't apply at all. (For some reason LW seems to be totally focused on preference-utilitarianism, as I've noticed by now.) As for the criticism itself: I agree! Preference-utilitarians can come up with sensible estimates and intuitive judgements, but when you actually try to show that in theory there is one right answer, you just find a huge mess.

Comment author: Jayson_Virissimo 27 June 2012 03:10:56AM *  5 points [-]

I agree. I'm fairly confident that, within the next several decades, we will have the technology to accurately measure and sum hedons and that hedonic utilitarianism can escape the conceptual problems inherent in preference utilitarianism. On the other hand, I do not want to maximize (my) hedons (for these kinds of reasons, among others).

Comment author: CarlShulman 29 June 2012 05:07:59AM 3 points [-]

we will have the technology to accurately measure and sum hedons

Err...what? Technology will tell you things about how brains (and computer programs) vary, but not which differences to count as "more pleasure" or "less pleasure." If evaluations of pleasure happen over 10x as many neurons, is there 10x as much pleasure? Or is it the causal-functional role pleasure plays in determining the behavior of a body? What if we connect many brains or programs to different sorts of virtual bodies? Probabilistically?

A rule to get a cardinal measure of pleasure across brains is going to require almost as much specification as a broader preference measure. Dualists can think of this as guesstimating "psychophysical laws" and physicalists can think of it as seeking reflective equilibrium in our stances towards different physical systems, but it's not going to be "read out" of neuroscience without deciding a bunch of evaluative (or philosophy of mind) questions.

Comment author: torekp 29 June 2012 11:55:07PM 0 points [-]

but it's not going to be "read out" of neuroscience without deciding a bunch of evaluative (or philosophy of mind) questions.

Sure, but I don't think we can predict that there will be a lot of room for deciding those philosophy of mind questions whichever way one wants to. One simply has to wait for the research results to come in. With more data to constrain the interpretations, the number and spread of plausible stable reflective equilibria might be very small.

I agree with Jayson that it is not mandatory or wise to maximize hedons. And that is because hedons are not the only valuable things. But they do constitute one valuable category. And in seeking them, the total utilitarians are closer to the right approach than the average utilitarians (I will argue in a separate reply).

Comment author: David_Gerard 27 June 2012 10:45:45AM 0 points [-]

I'm fairly confident that, within the next several decades, we will have the technology to accurately measure and sum hedons

OK, I've got to ask: what's your confidence based on, in detail? It's not clear to me that "sum hedons" even means anything.

Comment author: Vladimir_M 27 June 2012 01:09:16AM 0 points [-]

Why do you believe that interpersonal comparison of pleasure is straightforward? To me this doesn't seem to be the case.

Comment author: Lukas_Gloor 27 June 2012 02:50:06AM 2 points [-]

Is intrapersonal comparison possible? Personal boundaries don't matter for hedonistic utilitarianism; they only matter insofar as you may have spatio-temporally connected clusters of hedons (lives). The difficulties in comparison seem to be of an empirical nature, not a fundamental one (unlike the problems with preference-utilitarianism). If we had a good enough theory of consciousness, we could quantitatively describe the possible states of consciousness and their hedonic tones. Or not?

One common argument against hedonistic utilitarianism is that there are "different kinds of pleasures", and that they are "incommensurable". But if that were the case, it would be irrational to accept a trade-off of the lowest pleasure of one sort for the highest pleasure of another sort, and no one would actually claim that. So even if pleasures "differ in kind", there'd be an empirical trade-off value based on how pleasant the hedonic states actually are.

Comment author: Mark_Lu 27 June 2012 09:00:53AM 0 points [-]

Because people are running on similar neural architectures? So all people would likely experience similar pleasure from e.g. some types of food (though not necessarily identical). The more we understand about how different types of pleasure are implemented by the brain, the more precisely we'd be able to tell whether two people are experiencing similar levels/types of pleasure. When we get to brain simulations these might get arbitrarily precise.

Comment author: Vladimir_M 27 June 2012 02:59:03PM 4 points [-]

You make it sound as if there is some signal or register in the brain whose value represents "pleasure" in a straightforward way. To me it seems much more plausible that "pleasure" reduces to a multitude of variables that can't be aggregated into a single-number index except through some arbitrary convention. This seems to me likely even within a single human mind, let alone when different minds (especially of different species) are compared.

That said, I do agree that the foundation of pure hedonic utilitarianism is not as obviously flawed as that of preference utilitarianism. The main problem I see with it is that it implies wireheading as the optimal outcome.

Comment author: Lukas_Gloor 27 June 2012 05:50:35PM 1 point [-]

The main problem I see with it is that it implies wireheading as the optimal outcome.

Or the utilitronium shockwave, rather. Which doesn't even require minds to wirehead anymore, but simply converts matter into maximally efficient bliss simulations. I used to find this highly counterintuitive, but after thinking about all the absurd implications of valuing preferences instead of actual states of the world, I've come to think of it as a perfectly reasonable thing.

Comment author: TheOtherDave 27 June 2012 03:30:53PM 0 points [-]

The main problem I see with it is that it implies wireheading as the optimal outcome.

AFAICT, it only does so if we assume that the environment can somehow be relied upon to maintain the wireheading environment optimally even though everyone is wireheading.

Failing that assumption, it seems preferable (even under pure hedonic utilitarianism) for some fraction of total experience to be non-wireheading, but instead devoted to maintaining and improving the wireheading environment. (Indeed, it might even be preferable for that fraction to approach 100%, depending on the specifics of the environment.)

I suspect that, if that assumption were somehow true, and we somehow knew it was true (I have trouble imagining either scenario, but OK), most humans would willingly wirehead.

Comment author: shminux 26 June 2012 08:36:20PM 0 points [-]

Hedonistic utilitarianism ("what matters is the aggregate happiness") runs into the same repugnant conclusion.

Comment author: Lightwave 26 June 2012 08:48:26PM 0 points [-]

But this happens exactly because interpersonal (hedonistic) utility comparison is possible.

Comment author: shminux 26 June 2012 09:25:11PM 0 points [-]

Right, if you cannot compare utilities, you are safe from the repugnant conclusion.

On the other hand, this is not very useful instrumentally, as a functioning society necessarily requires arbitration of individual wants. Thus some utilities must be comparable, even if others might not be. Finding a boundary between the two runs into the standard problem of two nearly identical preferences being qualitatively different.

Comment author: Ghatanathoah 29 September 2012 06:56:40AM *  0 points [-]

(If someone believes that there is a way how these interpersonally comparable utilities could actually be grounded in physical reality, I'd be extremely curious to hear it.)

I wonder if I am misunderstanding what you are asking, because interpersonal utility comparison seems like an easy thing that people do every day, using our inborn systems for sympathy and empathy.

When I am trying to make a decision that involves the conflicting desires of myself and another person, I generally use empathy to put myself in their shoes and try to think about desires that I have that are probably similar to theirs. Then I compare how strong those two desires of mine are and base my decision on that. Now, obviously I don't make all ethical decisions like that; there are many where I just follow common rules of thumb. But I do make some decisions in this fashion, and it seems quite workable; the more fair-minded of my acquaintances don't really complain about it unless they think I've made a mistake. Obviously it has scaling problems when attempting to base any type of utilitarian ethics on it, but I don't think they are insurmountable.

Now, of course you could object that this method is unreliable, and ask whether I really know for sure if other people's desires are that similar to mine. But this seems to me to just be a variant of the age-old problem of skepticism and doesn't really deserve any more attention than the possibility that all the people I meet are illusions created by an evil demon. It's infinitesimally possible that everyone I know doesn't really have mental states similar to mine at all, that in fact they are all really robot drones controlled by a non-conscious AI that is basing their behavior on a giant lookup table. But it seems much more likely that other people are conscious human beings with mental states similar to mine that can be modeled and compared via empathy, and that this allows me to compare their utilities.

In fact, it's hard to understand how empathy and sympathy could have evolved if they weren't reasonably good at interpersonal utility comparison. If interpersonal utility comparison were truly impossible, then anyone who tried to use empathy to inform their behavior towards others would end up being disastrously wrong at figuring out how to properly treat others, find themselves grievously offending the rest of their tribe, and hence likely have their genes for empathy selected against. It seems like if interpersonal utility comparison were impossible, humans would have never evolved the ability or desire to make decisions based on empathy.

I am also curious as to why you refer to utility as "ghostly." It seems to me that utility is commonly defined as the sum of the various desires and feelings that people have. Desires and feelings are computations and other processes in our brains, which are very solid real physical objects. So it seems like utility is at least as real as software. Of course, it's entirely possible that you are using the word "utility" to refer to a slightly different concept than I am and that is where my confusion is coming from.

Comment author: wedrifid 27 June 2012 01:55:58PM *  6 points [-]

A smaller critique of total utilitarianism:

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness (or preference satisfaction or welfare).

You can just finish there.

(In case the "sufficient cause to reject total utilitarianism" isn't clear: I don't like murder. Total utilitarianism advocates it in all sorts of scenarios that I would not. Therefore, total utilitarianism is Evil.)

Comment author: Stuart_Armstrong 28 June 2012 11:42:49AM *  4 points [-]

You can just finish there.

:-) I kinda did. The rest was just "there are no strong countervailing reasons to reject that intuition".

Comment author: wedrifid 28 June 2012 12:59:52PM 1 point [-]

:-) I kinda did. The rest was just "there are no strong countervailing reasons to reject that intuition".

Excellent post then. I kind of stopped after the first line so I'll take your word for the rest!

Comment author: private_messaging 27 June 2012 08:44:06PM 1 point [-]

Agreed completely. This goes for any utilitarianism where the worth of changing from state A to state B is f(B)-f(A). Morality is about transitions; even hedonism is, as happiness is nothing if it is frozen solid.

Comment author: army1987 27 June 2012 09:02:23PM *  2 points [-]

happiness is nothing if it is frozen solid

I'd take A and B in the equation above to include momenta as well as positions? :-)

Comment author: private_messaging 27 June 2012 09:10:44PM *  1 point [-]

That's a good escape, but only for specific laws of physics... what do you do about a brain sim on a computer? It has multiple CPUs calculating the next state from the current state in parallel, and it doesn't care how the CPU is physically implemented, but it does care how many experience-steps it has. edit: i.e. I mean, a transition from one happy state to another, equally happy state is what a moment of being happy is about. Total utilitarianism boils down to zero utility for an update pass on a happy brain sim. It's completely broken. edit: and with simple workarounds, it boils down to zero utility for switching the current/next state arrays, so that you sit in a loop recalculating the same next state from a static current state.

Comment author: Lukas_Gloor 25 June 2012 08:18:10PM 6 points [-]

What seems to be overlooked in most discussions about total hedonistic utilitarianism is that the proponents often have a specific (Parfitean) view about personal identity, which leads to either empty or open individualism. Based on that, they may hold that it is no more rational to care about one's own future self than it is to care about any other future self. "Killing" a being would then just be failing to let a new moment of consciousness come into existence. And any notion of "preferences" would not really make sense anymore, only instrumentally.

Comment author: Kaj_Sotala 27 June 2012 09:34:28AM 1 point [-]

I'm increasingly coming to hold this view, where the amount and quality of experience-moments is all that matters, and I'm glad to see someone else spell it out.

Comment author: private_messaging 26 June 2012 09:28:32PM *  4 points [-]

Here's how I see this issue (from a philosophical point of view):

Moral value is, in the most general form, a function of the state of a structure, for lack of a better word. The structure may be just 10 neurons in isolation, for which the moral worth may well be exactly zero, or it may be 7 billion blobs of about 10^11 neurons who communicate with each other, or it may be a lot of data on a hard drive, representing a stored upload.

The moral value of two interconnected structures, in general, does not equal the sum of the moral values of each structure (example: a whole brain vs. a piece of brain; a mind on redundant hardware). The moral value of the whole can (in general) be greater or less than the sum of the moral values of the parts. Note that I have not defined anything specific at all here; I just specified very general considerations. We have developed somewhat ad hoc approximations to some sort of ideal moral worth.

edit: Note. The moral worth of an action is in general a function of the state without the action and the state with the action, not necessarily the difference between the moral worth of one state and the moral worth of the other.

The utilitarianism of the N-dustspecks-worse-than-torture variety takes as fundamental and ideal plenty of assumptions, such as that moral worth is distributive: W(a .. b) = W(a) + W(b). We have clear counter-examples to this when the parts are strongly interconnected (e.g. the 2 hemispheres of a brain) or correlated (doubly redundant hardware), though it may hold approximately for people, since they are not strongly interconnected. With very large N, this clearly broken premise is taken to its extreme and then proclaimed normative, while the approximations that aren't linear are proclaimed wrong.
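
To illustrate the non-additivity point with a toy construction of my own (the `value` function and the counting rules are purely illustrative, not a claim about the true valuation):

```python
# A valuation where redundant copies of a structure add nothing: the worth
# of the whole need not equal the sum of the worths of the parts.
def value(structure):
    # placeholder worth of a single structure; stands in for whatever the
    # "ideal" valuation would assign
    return 1

def w_additive(structures):
    # assumes W(a .. b) = W(a) + W(b)
    return sum(value(s) for s in structures)

def w_distinct(structures):
    # counts each distinct structure once, so a mind running on doubly
    # redundant hardware contributes no more than a single copy
    return sum(value(s) for s in set(structures))

minds = ("alice", "alice", "bob")   # "alice" runs on redundant hardware
print(w_additive(minds))            # 3
print(w_distinct(minds))            # 2
```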

Comment author: Lukas_Gloor 25 June 2012 07:55:54PM 4 points [-]

Total utilitarianism is defined as maximising the sum of everyone's individual utility function.

That seems misleading. Most of the time "total utilitarianism" refers to what should actually be called "hedonistic total utilitarianism". And what is maximized there is the surplus of happiness over suffering (positive hedonic states over negative ones), which isn't necessarily synonymous with individual utility functions.

There are three different parameters for the various kinds of utilitarianism: It can either be total or average or prior-existence. Then it can be negative or classical (and in theory also "positive", even though that would be insane, forcing people to accept eternal torture if there's even the slightest chance of a moment of happiness). And then utilitarianism can also be hedonistic or preference. Most common, and the subject of this article, is (classical) total hedonistic utilitarianism. While some combinations make very little sense, a lot of them actually have advocates. (For instance, recently someone published a paper advocating "negative average preference-utilitarianism".)

Comment author: endoself 25 June 2012 08:47:07PM 5 points [-]

and in theory also "positive", even though that would be insane, forcing people to accept eternal torture if there's even the slightest chance of a moment of happiness

There exist people who profess that they would choose to be tortured for the rest of their lives with no chance of happiness rather than being killed instantly, so this intuition could be more than theoretically possible. People tend to be surprised by the extent to which intuitions differ.

Comment author: RichardKennaway 26 June 2012 12:27:40PM *  2 points [-]

For instance, it seems that there is only a small difference between the happiness of richer nations and poorer nations

What is happiness? If happiness is the "utility" that people maximise (is it?), and the richer are only slightly happier than the poorer (cite?), why is it that when people have the opportunity to vote with their feet, people in poor nations flock to richer nations whenever they can, and do not want to return?

Comment author: Stuart_Armstrong 26 June 2012 06:00:51PM 1 point [-]

There's a variety of good literature on the subject (one key component is that people are abysmally bad at estimating their future levels of happiness). There are always uncertainties in defining happiness (as with anything), but there's a clear consensus that whatever is making people move countries, actual happiness levels are not it.

(now, expected happiness levels might be it; or, more simply, that people want a lot of things, and that happiness is just one of them)

Comment author: Nornagest 26 June 2012 04:12:13AM *  2 points [-]

An argument that I have met occasionally is that while other ethical theories such as average utilitarianism, birth-death asymmetry, path dependence, preferences of non-loss of culture, etc... may have some validity, total utilitarianism wins as the population increases because the others don't scale in the same way. By the time we reach the trillion trillion trillion mark, total utilitarianism will completely dominate, even if we gave it little weight at the beginning.

I'll admit I haven't encountered this argument before, but to me it looks like a type error. As you note, average utilitarianism counts something quite different than total utilitarianism; observers might (correctly) note that the latter can spit out much larger numbers than the former under some circumstances, but those values are unrelated abstractions, not something commensurate with each other's or those of other ethical theories absent a quantifying theory of metaethics that we don't have. It's like dividing seven by cucumber. I'd argue that the normalization process you suggest doesn't make much sense either, though; many utilitarianisms don't have well-defined upper bounds (why stop at a quadrillion?), and some don't have well-defined lower (a life not worth living might be counted as a negative contribution).

Insofar as ethical theories are models of our ethical intuitions, I can see an argument for normalizing against people's subjective satisfaction with a world-state, which is almost certainly a finite range and therefore implies some kind of diminishing returns or dynamic rather than static evaluation of state changes. But I can see arguments against this, too; in particular, it doesn't make any sense if you're trying to make a universalizable theory of ethics (which has its own problems, but it has been tried). The hedonic treadmill also raises issues.

Comment author: Larks 25 June 2012 08:08:50PM 2 points [-]

Well, one reason is that there are models that make use of something like total utilitarianism to great effect. Classical economic theory, for instance, models everyone as perfectly rational expected utility maximisers.

I think you actually slightly understate the case against Utilitarianism. Yes, Classical Economics uses expected utility maximisers - but it prefers to deal with Pareto Improvements (or Kaldor-Hicks improvements) than try to do inter-personal utility comparisons.

Comment author: shminux 26 June 2012 08:22:08PM *  3 points [-]

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness (or preference satisfaction or welfare).

Just wanted to note that this is too strong a statement. There is no requirement for the 1:1 ratio in "total utilitarianism". You end up with the "repugnant conclusion" to Parfit's "mere addition" argument as long as this ratio is finite (known as "birth-death asymmetry"). For example, one may argue that killing 1 person to save 5 equally happy people is wrong, because killing is wrong, but as long as there is a ratio they would agree with (or, more generally, an equivalent number of saved people for each number of killed people), the repugnant conclusion argument still goes through.

Comment author: Stuart_Armstrong 27 June 2012 09:12:07AM *  2 points [-]

I was more thinking of a total asymmetry rather than a ratio. But yes, if you have a finite ratio, then you have the repugnant conclusion (even though it's not total utilitarianism unless the ratio is 1:1).

Comment author: Lukas_Gloor 27 June 2012 05:56:22PM *  1 point [-]

Exactly! I've been pointing this out too. If you assume preference utilitarianism, then killing counts as wrong, at least if the beings you kill want to continue living (or have detailed future plans even). So the replacement only works if you increase the number of the new beings, or make them have more satisfied preferences. The rest of the argument still works, but this is important to point out.

Comment author: army1987 26 June 2012 08:06:28AM *  3 points [-]

You know, I've felt that examining the dust speck vs. torture dilemma (or similar problems), finding a way to derive an intuitively false conclusion from intuitively true premises, and thereby concluding that the conclusion must be true after all (rather than that there's some kind of flaw in the proof you can't see yet), is analogous to seeing a proof that 0 equals 1, or that a hamburger is better than eternal happiness, or that no feather is dark, not seeing the mistake in the proof straight away, and thereby concluding that the conclusion must be true. Does anyone else feel the same?

Comment author: TheOtherDave 26 June 2012 01:54:27PM 11 points [-]

Sure.

But it's not like continuing to endorse my intuitions in the absence of any justification for them, on the assumption that all arguments that run counter to my intuitions, however solid they may seem, must be wrong because my intuitions say so, is noticeably more admirable.

When my intuitions point in one direction and my reason points in another, my preference is to endorse neither direction until I've thought through the problem more carefully. What I find often happens is that on careful thought, my whole understanding of the problem tends to alter, after which I may end up rejecting both of those directions.

Comment author: private_messaging 27 June 2012 08:35:25AM *  0 points [-]

Well, what you should do is recognize that such arguments are themselves built entirely out of intuitions, and their validity rests on the conjunction of a significant number of often unstated intuitive assumptions. One should not fall for a cargo cult imitation of logic.

There's no fundamental reason why value should be linear in number of dust specks; it's nothing but an assumption, which may be your personal intuition, but it is still an intuition that lacks any justification whatsoever, and insofar as it is an uncommon intuition, it even lacks the "if it was wrong it would have been debunked" sort of justification. There's always the Dunning-Kruger effect: people least capable of moral (or any) reasoning should be expected to think themselves most capable.

Comment author: MarkusRamikin 27 June 2012 08:56:59AM *  2 points [-]

There's no fundamental reason why value should be linear in number of dust specks

Yeah, that has always been my main problem with that scenario.

There are different ways to sum multiple sources of something. Consider series vs. parallel electrical circuits: the total output depends greatly on how you combine the individual voltage sources (or resistors or whatever).

When it comes to suffering, well, suffering only exists in consciousness, and each point of consciousness - each mind involved - experiences their own dust speck individually. There is no conscious mind in that scenario who directly experiences the totality of the dust specks and suffers accordingly. It is in no way obvious to me that the "right" way to consider the totality of that suffering is to just add it up. Perhaps it is. But unless I missed something, no one arguing for torture so far has actually shown it (as opposed to just assuming it).
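
To make the circuit analogy concrete (a toy sketch of mine; the mapping from suffering to voltage is only an analogy, and the function names are my own):

```python
# Two ways to combine n identical sources, echoing series vs. parallel
# circuits: in series the contributions add; identical ideal sources in
# parallel give no more than one source alone.
def series_total(v, n):
    # series combination: contributions add linearly
    return v * n

def parallel_total(v, n):
    # parallel combination of identical ideal sources: total stays v
    return v

SPECK = 1   # hypothetical disvalue of one dust speck, arbitrary units
print(series_total(SPECK, 10**6))    # 1000000: grows without bound
print(parallel_total(SPECK, 10**6))  # 1: never exceeds one speck's worth
```

Which combination rule applies to minds experiencing specks individually is exactly the unargued assumption.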

Suppose we make this about (what starts as) a single person. Suppose that you, yourself, are going to be copied into all that humongous number of copies. And you are given a choice: before that happens, you will be tortured for 50 years. Or you will be unconscious for 50 years, but after copying each of your copies will get a dust speck in the eye. Either way you get copied, that's not part of the choice. After that, whatever your choice, you will be able to continue with your lives.

In that case, I don't care about doing the "right" math that will make people call me rational, I care about being the agent who is happily NOT writhing in pain with 50 years more of it ahead of him.

EDIT: come to think of it, assume the copying template is taken from you before the 50 years start, so we don't have to consider memories and lasting psychological effects of torture. My answer remains the same, even if in future I won't remember the torture, I don't want to go through it.

Comment author: TheOtherDave 27 June 2012 01:48:51PM 0 points [-]

As far as I know, TvDS doesn't assume that value is linear in dust specks. As you say, there are different ways to sum multiple sources of something. In particular, there are many ways to sum the experiences of multiple individuals.

For example, the whole problem evaporates if I decide that people's suffering only matters to the extent that I personally know those people. In fact, much less ridiculous problems also evaporate... e.g., in that case I also prefer that thousands of people suffer so that I and my friends can live lives of ease, as long as the suffering hordes are sufficiently far away.

It is not obvious to me that I prefer that second way of thinking, though.

Comment author: David_Gerard 27 June 2012 03:27:26PM 2 points [-]

e.g., in that case I also prefer that thousands of people suffer so that I and my friends can live lives of ease, as long as the suffering hordes are sufficiently far away.

It is arguable (in terms of revealed preferences) that first-worlders typically do prefer that. This requires a slightly non-normative meaning of "prefer", but a very useful one.

Comment author: TheOtherDave 27 June 2012 03:34:42PM *  2 points [-]

Oh, absolutely. I chose the example with that in mind.

I merely assert that "but that leads to thousands of people suffering!" is not a ridiculous moral problem for people (like me) who reveal such preferences to consider, and it's not obvious that a model that causes the problem to evaporate is one that I endorse.

Comment author: private_messaging 27 June 2012 03:47:07PM *  0 points [-]

Well, it sure uses the linear intuition. 3^^^3 is bigger than the number of distinct states; it's far past the point where you are only increasing exactly-duplicated dust speck experience, so you could reasonably expect it to flatten out.

One can go perverse and proclaim that one treats duplicates the same, but then if there's a button which you press to replace everyone's mind with the mind of the happiest person, you should press it.

I think the stupidity of utilitarianism is the belief that the morality is about the state, rather than about dynamic process and state transition. A simulation of a pinprick slowed down 1,000,000 times is not ultra-long torture. 'Murder' is a form of irreversible state transition. Morality as it exists is about state transitions, not about states.

Comment author: Mark_Lu 27 June 2012 04:29:11PM -1 points [-]

I think the stupidity of utilitarianism is the belief that the morality is about the state, rather than about dynamic process and state transition.

"State" doesn't have to mean "frozen state" or something similar; it could mean "state of the world/universe". E.g. "a state of the universe" in which many people are being tortured includes the torture process in its description. I think this is how it's normally used.

Comment author: private_messaging 27 June 2012 04:38:20PM *  -1 points [-]

Well, if you coherently take it that the transitions have value, rather than the states, then you arrive at a morality that regulates the transitions the agent should try to make happen, ending up with a morality that is more about means than about ends.

I think it's simply that pain feels like a state rather than a dynamic process, and so utilitarianism treats it as a state, while doing something feels like a dynamic process, so utilitarianism doesn't treat it as a state and is only concerned with the difference in utilities.

Comment author: TheOtherDave 27 June 2012 04:05:21PM 0 points [-]

It isn't clear to me what the phrase "exactly-duplicated" is doing there. Is there a reason to believe that each individual dust-speck-in-eye event is exactly like every other? And if so, what difference does that make? (Relatedly, is there a reason to believe that each individual moment of torture is different from all the others? If it turns out that it's not, does that imply something relevant?)

In any case, I certainly agree that one could reasonably expect the negvalue of suffering to flatten out no matter how much of it there is. It seems unlikely to me that fifty years of torture is anywhere near the asymptote of that curve, though... for example, I would rather be tortured for fifty years than be tortured for seventy years.

But even if it somehow is at the asymptotic limit, we could recast the problem with ten years of torture instead, or five years, or five months, or some other value that is no longer at that limit, and the same questions would arise.

So, no, I don't think the TvDS problem depends on intuitions about the linear-additive nature of suffering. (Indeed, the more I think about it the less convinced I am that I have such intuitions, as opposed to approaches-a-limit intuitions. This is perhaps because thinking about it has changed my intuitions.)

Comment author: private_messaging 27 June 2012 04:19:58PM *  -3 points [-]

I was referring to the linear-additive nature of the so-called dust speck suffering, in the number of people with dust specks.

3^^^3 is far, far larger than the number of distinct mind states of anything human-like. You can only be dust-speck-ing something like 10^(10^20) distinct human-like entities maximum. I recall I posted about that a while back. You shouldn't be multiplying anything by 3^^^3.
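
A toy model of this exhaustion-of-states point (the cap is shrunk to 1000 just to keep it computable; the real bound would be the ~10^(10^20) figure above, and the counting rule is my own illustrative choice):

```python
# If only *distinct* dust-speck experiences count, total disvalue flattens
# out once n exceeds the number of distinct human-like mind states.
DISTINCT_STATES = 1000   # stand-in for the astronomically larger real bound

def linear_disvalue(n, per_speck=1):
    # the linear-additive intuition: keeps growing with n forever
    return n * per_speck

def capped_disvalue(n, per_speck=1):
    # counting each distinct experience once: flattens at the cap
    return min(n, DISTINCT_STATES) * per_speck

print(linear_disvalue(10**9))   # 1000000000
print(capped_disvalue(10**9))   # 1000
```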

TBH, my 'common sense' explanation of why EY chooses to adopt the torture > dust specks stance (I say "chooses" because it is entirely up for grabs here, plus his position is fairly incoherent) is that he seriously believes his work has a non-negligible chance of influencing the lives of an enormous number of people, and subsequently, if he can internalize torture > dust specks, he is free to rationalize any sort of thing he can plausibly do, even if the AI extinction risk does not exist.

Comment author: TheOtherDave 27 June 2012 04:40:29PM *  0 points [-]

[edit: this response was to an earlier version of the above comment, before it was edited. Some of it is no longer especially apposite to the comment as it exists now.]

I was referring to linear-additive nature of dust specks.

Well, I agree that 3^^^3 dust specks don't quite add linearly... long before you reach that ridiculous mass, I expect you get all manner of weird effects that I'm not physicist enough to predict. And I also agree that our intuitions are that dust specks add linearly.

But surely it's not the dust-specks that we care about here, but the suffering? That is, it seems clear to me that if we eliminated all the dust specks from the scenario and replaced them with something that caused an equally negligible amount of suffering, we would not be changing anything that mattered about the scenario.

And, as I said, it's not at all clear to me that I intuit linear addition of suffering (whether it's caused by dust-specks, torture, or something else), or that the scenario depends on assuming linear addition of suffering. It merely depends on assuming that addition of multiple negligible amounts of suffering can lead to an aggregate-suffering result that is commensurable with, and greater than, a single non-negligible amount of suffering.

It's not clear to me that this assumption holds, but the linear-addition objection seems like a red herring to me.

You can only be dust-speck-ing something like 10^(10^20) distinct human-like entities maximum.

Ah, I see.

Yeah, sure, there's only X possible ways for a human to be (whether 10^(10^20) or some other vast number doesn't really matter), and there's only Y possible ways for a dust speck to be, and there's only Z possible ways for a given human to experience a given dust speck in their eye. So, sure, we only have (XYZ) distinct dust-speck-in-eye events, and if (XYZ) << 3^^^3 then there's some duplication. Indeed, there's vast amounts of duplication, given that (3^^^3/(XYZ)) is still a staggeringly huge number.

Agreed.

I'm still curious about what difference that makes.

Comment author: private_messaging 27 June 2012 04:55:12PM *  -3 points [-]

Well, some difference that it should make:

Lead to severe discounting of the 'reasoning method' that arrived at the 3^^^3-dust-specks>torture conclusion without ever coming across the exhaustion-of-states issue, in all fields where it was employed. And to severely discount anything that came from that process previously. If it failed even when it went against intuition, it's even more worthless when it goes along with intuition.

I get the feeling that attempts to 'logically' deliberate on morality from some simple principles like "utility" are similar to trying to recognize cats in pictures by reading an R,G,B number-value array and doing some arithmetic. If someone hasn't got a visual cortex they can't see, even if they do an insane amount of deliberate reasoning.

Comment author: TheOtherDave 27 June 2012 01:25:59PM *  0 points [-]

Agreed that all of these sorts of arguments ultimately rest on different intuitions about morality, which sometimes conflict, or seem to conflict.

Agreed that value needn't add linearly, and indeed my intuition is that it probably doesn't.

It seems clear to me that if I negatively value something happening, I also negatively value it happening more. That is, for any X I don't want to have happen, it seems I would rather have X happen once than have X happen twice. I can't imagine an X where I don't want X to happen and would prefer to have X happen twice rather than once. (Barring silly examples like "the power switch for the torture device gets flipped".)

Comment author: APMason 27 June 2012 01:41:58PM 0 points [-]

Can anyone explain what goes wrong if you say something like, "The marginal utility of my terminal values increases asymptotically, and u(Torture) approaches a much higher asymptote than u(Dust speck)" (or indeed whether it goes wrong at all)?

Comment author: wedrifid 27 June 2012 01:53:52PM 1 point [-]

Can anyone explain what goes wrong if you say something like, "The marginal utility of my terminal values increases asymptotically, and u(Torture) approaches a much higher asymptote than u(Dust speck)" (or indeed whether it goes wrong at all)?

Nothing, iff that happens to be what your actual preferences are. If your preferences do not happen to be as you describe, but instead you are confused by an inconsistency in your intuitions, then you will make incorrect decisions.

The challenge is not to construct a utility function such that you can justify it to others in the face of opposition. The challenge is to work out what your actual preferences are and implement them.

Comment author: TheOtherDave 27 June 2012 02:47:11PM 1 point [-]

The challenge is to work out what your actual preferences are and implement them.

Ayup. Also, it may be worth saying explicitly that a lot of the difficulty comes in working out a model of my actual preferences that is internally consistent and can be extended to apply to novel situations. If I give up those constraints, it's easier to come up with propositions that seem to model my preferences, because they approximate particular aspects of my preferences well enough that in certain situations I can't tell the difference. And if I don't ever try to make decisions outside of that narrow band of situations, that can be enough to satisfy me.

Comment author: Lukas_Gloor 27 June 2012 05:44:45PM *  -1 points [-]

The challenge is to work out what your actual preferences are and implement them.

[Edited to separate from quote] But doesn't that beg the question? Don't you have to ask the meta-question "what kinds of preferences are reasonable to have?" Why should we shape ethics the way evolution happened to set up our values? That's why I favor hedonistic utilitarianism, which is about actual states of the world that can in themselves be bad (--> suffering).

Comment author: TheOtherDave 27 June 2012 06:02:31PM *  1 point [-]

Note that markup requires a blank line between your quote and the rest of the topic.

It does beg a question: specifically, the question of whether I ought to implement my preferences (or some approximation of them) in the first place. If, for example, my preferences are instead irrelevant to what I ought to do, then time spent working out my preferences is time that could better have been spent doing something else.

All of that said, it sounds like you're suggesting that suffering is somehow unrelated to the way evolution set up our values. If that is what you're suggesting, then I'm completely at a loss to understand either your model of what suffering is, or how evolution works.

Comment author: Lukas_Gloor 27 June 2012 06:10:58PM 0 points [-]

The fact that suffering feels awful is about the very thing itself, and nothing else. There's no valuing required, no being asking itself "should I dislike this experience?" when it is suffering. It wouldn't be suffering otherwise.

My position implies that in a world without suffering (or happiness, if I were not a negative utilitarian), nothing would matter.

Comment author: TheOtherDave 27 June 2012 01:53:14PM 1 point [-]

Depends on what I'm trying to do.

If I make that assumption, then it follows that given enough Torture to approach its limit, I choose any number of Dust Specks rather than that amount of Torture.

If my goal is to come up with an algorithm that leads to that choice, then I've succeeded.

(I think talking about Torture and Dust Specks as terminal values is silly, but it isn't necessary for what I think you're trying to get at.)

Comment author: Lukas_Gloor 27 June 2012 02:18:18PM 0 points [-]

That's been done in this paper, section VI, "The Asymptotic Gambit".

Comment author: APMason 27 June 2012 02:29:13PM *  0 points [-]

Thank you. I had expected the bottom to drop out of it somehow.

EDIT: Although come to think of it I'm not sure the objections presented in that paper are so deadly after all if you take TDT-like considerations into account (i.e. there would not be a difference between "kill 1 person, prevent 1000 mutilations" + "kill 1 person, prevent 1000 mutilations" and "kill 2 people, prevent 2000 mutilations"). Will have to think on it some more.

Comment author: MBlume 27 June 2012 06:29:27PM 2 points [-]

Does anyone else feel the same?

Nope! Some proofs are better-supported than others.

Comment author: RichardKennaway 26 June 2012 08:19:35AM 2 points [-]

Yes. The known unreliability of my own thought processes tempers my confidence in any prima facie absurd conclusion I come to. All the more so when it's a conclusion I didn't come to, but merely followed along with someone else's argument to.

Comment author: private_messaging 26 June 2012 08:19:05AM *  2 points [-]

I feel this way. The linear theories are usually nothing but first order approximations.

Also, the very idea of summing individual agents' utilities... that's, frankly, nothing but pseudomathematics. Each agent's utility function can be modified without changing the agent's behaviour in any way. The utility function is a phantom. It isn't defined in such a way that you could add two of them together. You can map the same agent's preferences (whenever they are well-ordered) to an infinite variety of real-valued 'utility functions'.

Comment author: David_Gerard 26 June 2012 12:54:36PM *  7 points [-]

Yes. The trouble with "shut up and multiply" - beyond assuming that humans have a utility function at all - is assuming that utility works like conventional arithmetic and that you can in fact multiply.

There's also measuring and shut-up-and-multiplying the wrong thing: e.g., seeing people willing to pay about the same in total to save 2000 birds or 20,000 birds and claiming this constitutes "scope insensitivity." The error is assuming this means that people are scope-insensitive, rather than realising that people aren't buying saved birds at all, but are paying what they're willing to pay for warm fuzzies in general - a constant amount.

The attraction of utilitarianism is that calculating actions would be so much simpler if utility functions existed and their output could be added with the same sort of rules as conventional arithmetic. This does not, however, constitute non-negligible evidence that any of the required assumptions hold.

Comment author: Gabriel 26 June 2012 05:21:12PM 1 point [-]

There's also measuring and shut-up-and-multiplying the wrong thing: e.g., seeing people willing to pay about the same in total to save 2000 birds or 20,000 birds and claiming this constitutes "scope insensitivity." The error is assuming this means that people are scope-insensitive, rather than realising that people aren't buying saved birds at all, but are paying what they're willing to pay for warm fuzzies in general - a constant amount.

I don't know who's making that error. Seems like scope insensitivity and purchasing of warm fuzzies are usually discussed together around here.

Anyway, if there's an error here then it isn't about utilitarianism vs something else, but about declared vs revealed preference. The people believe that they care about the birds. They don't act as if they cared about the birds. For those who accept deliberative reasoning as an expression of human values it's a failure of decision-making intuitions and it's called scope insensitivity. For those who believe that true preference is revealed through behavior it's a failure of reflection. None of those positions seems inconsistent with utilitarianism. In fact it might be easier to be a total utilitarian if you go all the way and conclude that humans really care only about power and sex. Just give everybody nymphomania and megalomania, prohibit birth control and watch that utility counter go. ;)

Comment author: David_Gerard 26 June 2012 04:57:39PM *  0 points [-]

An explanatory reply from the downvoter would be useful. I'd like to think I could learn.

Comment author: private_messaging 26 June 2012 02:46:43PM *  0 points [-]

I don't think it's even linearly combinable. Suppose there were 4 copies of me total, one pair doing some identical thing, the other pair doing 2 different things. The second pair is worth more. When I see someone go linear on morals, that strikes me as evidence of poverty of moral value and/or poverty of the mathematical language they have available.

Then there's the consequentialism. The consequences are hard to track - you've got to model the worlds resulting from an uncertain initial state. Really, really computationally expensive. Everything is going to use heuristics, even Jupiter brains.

There's also measuring and shut-up-and-multiplying the wrong thing: e.g., seeing people willing to pay about the same in total to save 2000 birds or 20,000 birds and claiming this constitutes "scope insensitivity." The error is assuming this means that people are scope-insensitive, rather than realising that people aren't buying saved birds at all, but are paying what they're willing to pay for warm fuzzies in general - a constant amount.

Well, "willing to pay for warm fuzzies" is a bad way to put it IMO. There's a limited amount of money available in the first place; if you care about birds rather than warm fuzzies, that doesn't make you a billionaire.

Comment author: army1987 28 June 2012 12:46:55PM *  0 points [-]

Well, "willing to pay for warm fuzzies" is a bad way to put it IMO. There's a limited amount of money available in the first place; if you care about birds rather than warm fuzzies, that doesn't make you a billionaire.

The figures people would pay to save 2000, 20,000, or 200,000 birds were $80, $78 and $88 respectively, which oughtn't be so much that the utility of money for most WEIRD people would be significantly non-linear. (A much stronger effect IMO could be people taking -- possibly subconsciously -- the “2000” or the “20,000” as evidence about the total population of that bird species.)

Comment author: RichardKennaway 26 June 2012 10:42:03PM 1 point [-]

This does not, however, constitute non-negligible evidence that any of the required assumptions hold.

It even tends to count against it, by the A+B rule. If items are selected by a high enough combined score on two criteria A and B, then among the selected items, there will tend to be a negative correlation between A and B.
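The A+B selection effect is easy to check with a quick simulation (a hypothetical sketch of my own, not from the thread; all names are illustrative): draw independent scores A and B, keep only items whose combined score clears a threshold, and compare the correlation of A and B before and after selection.

```python
import random

# Berkson-style selection: among items selected for a high combined
# score A + B, A and B tend to be negatively correlated, even though
# they are independent in the full population.
random.seed(0)
population = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]
selected = [(a, b) for a, b in population if a + b > 2.0]

def correlation(pairs):
    """Pearson correlation of a list of (a, b) pairs."""
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    cov = sum((a - ma) * (b - mb) for a, b in pairs) / n
    va = sum((a - ma) ** 2 for a, _ in pairs) / n
    vb = sum((b - mb) ** 2 for _, b in pairs) / n
    return cov / (va * vb) ** 0.5

print(correlation(population))  # near zero: independent in the full population
print(correlation(selected))    # clearly negative among the selected items
```

The threshold and distributions are arbitrary; any high-enough cut on A + B produces the same qualitative result.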

Comment author: Gabriel 26 June 2012 06:22:04PM *  -2 points [-]

Utilitarians don't have to sum different utility functions. A utilitarian has a utility function that happens to be defined as a sum of intermediate values assigned to each individual. Those intermediate values are also (confusingly) referred to as utility, but they don't come from evaluating any of the infinite variety of 'true' utility functions of every individual. They come from evaluating the total utilitarian's model of individual preference satisfaction (or happiness or whatever).

Or at least it seems to me that it should be that way. If I see a simple technical problem that doesn't really affect the spirit of the argument then the best thing to do is to fix the problem and move on. If total utilitarianism really is commonly defined as summing every individual's utility function then that is silly but it's a problem of confused terminology and not really a strong argument against utilitarianism.

Comment author: David_Gerard 26 June 2012 09:39:50PM *  3 points [-]

But the spirit of the argument is ungrounded in anything. What evidence is there that you can do this stuff at all using actual numbers without repeatedly bumping into "don't do non-normative things even if you got that answer from a shut-up-and-multiply"?

Comment author: private_messaging 26 June 2012 08:49:41PM *  0 points [-]

Well, and then you can have a model where the model of the individual is sad when the real individual is happy and vice versa, and there would be no problem with that.

You've got to ground the symbols somewhere. The model has to be defined to approximate reality for it to make sense, and for the model to approximate reality it has to somehow process the individual's internal state.

Comment author: David_Gerard 26 June 2012 11:22:05AM *  0 points [-]

Yes. The error is that humans aren't good at utilitarianism.

private_messaging has given an example elsewhere: the trouble with utilitarians is that they think they are utilitarians. They then use numbers to convince themselves to do something they would otherwise consider evil.

The Soviet Union was an attempt to build a Friendly government based on utilitarianism. They quickly reached "shoot someone versus dust specks" and went for shooting people.

They weren't that good at lesser utilitarian decisions either, tending to ignore how humans actually behaved in favour of taking their theories and shutting-up-and-multiplying. Then when that didn't work, they did it harder.

I'm sure someone objecting to the Soviet Union example as non-negligible evidence can come up with examples that worked out much better, of course.

Comment author: CarlShulman 29 June 2012 05:52:33AM *  4 points [-]

See Eliezer's Ethical Injunctions post.

Also Bryan Caplan:

The key difference between a normal utilitarian and a Leninist: When a normal utilitarian concludes that mass murder would maximize social utility, he checks his work! He goes over his calculations with a fine-tooth comb, hoping to discover a way to implement beneficial policy changes without horrific atrocities. The Leninist, in contrast, reasons backwards from the atrocities that emotionally inspire him to the utilitarian argument that morally justifies his atrocities.

If this seems woefully uncharitable, compare the amount of time a proto-Leninist like Raskolnikov spends lovingly reviewing the mere conceivability of morally justified bloodbaths to the amount of time he spends (a) empirically evaluating the effects of policies or (b) searching for less brutal ways to implement whatever policies he wants. These ratios are typical for the entire Russian radical tradition; it's what they imagined to be "profound." When men like this gained power in Russia, they did precisely what you'd expect: treat mass murder like a panacea. This is the banality of Leninism.

Comment author: David_Gerard 29 June 2012 07:14:47AM *  5 points [-]

As I have noted, when you've repeatedly emphasised "shut up and multiply", tacking "btw don't do anything weird" on the end strikes me as susceptible to your readers not heeding it, particularly when they really need to. If arithmetical utilitarianism works so well, it would work in weird territory.

Caplan does have a cultural point on the Soviet Union example. OTOH, it does seem a bit "no true utilitarian".

Comment author: CarlShulman 29 June 2012 09:10:23AM *  18 points [-]

If arithmetical utilitarianism works so well, it would work in weird territory.

Note the bank robbery thread below. Someone claims that "the utilitarian math" shows that robbing banks and donating to charity would have the best consequences. But they don't do any math or look up basic statistics to do a Fermi calculation. A few minutes of effort shows that bank robbery actually pays much worse than working as a bank teller over the course of a career (including jail time, etc).

In Giving What We Can there are several people who donate half their income (or all income above a Western middle class standard of living) to highly efficient charities helping people in the developing world. They expect to donate millions of dollars over their careers, and to have large effects on others through their examples and reputations, both as individuals and via their impact on organizations like Giving What We Can. They do try to actually work things out, and basic calculations easily show that running around stealing organs or robbing banks would have terrible consequences, thanks to strong empirical regularities:

  1. Crime mostly doesn't pay. Bank robbers, drug dealers, and the like make less than legitimate careers. They also spend a big chunk of time imprisoned, and ruin their employability for the future. Very talented people who might do better than the average criminal can instead go to Wall Street or Silicon Valley and make far more.

  2. Enormous amounts of good can be done through a normal legitimate career. Committing violent crimes or other hated acts closes off such opportunities very rapidly.

  3. Really dedicated do-gooders hope to have most of their influence through example, encouraging others to do good. Becoming a hated criminal, and associating their ethical views with such, should be expected to have huge negative effects by staunching the flow of do-gooders to exploit the vast legitimate opportunities to help people.

  4. If some criminal scheme looks easy and low-risk, consider that law enforcement uses many techniques which are not made public, and are very hard for a lone individual to learn. There are honey-pots, confederates, and so forth. In the market for nuclear materials, most of the buyers and sellers are law enforcement agents trying to capture any real criminal participants. In North America terrorist cells are now regularly infiltrated long before they act, with government informants insinuated into the cell, phone and internet activities monitored, etc.

  5. It is hard to keep a crime secret over time. People feel terrible guilt, and often are caught after they confess to others. In the medium term there is some chance of more effective neuroscience-based lie detectors, which goes still higher long-term.

  6. The broader society, over time, could punish utilitarian villainy by reducing its support for the things utilitarians seek as they are associated with villains, or even by producing utilitarian evils. If animal rights terrorists tried to kill off humanity, it might lead to angry people eating more meat or creating anti-utilitronium (by the terrorists' standards, not so much the broader society, focused on animals, say) in anger. The 9/11 attacks were not good for Osama bin Laden's ambitions of ruling Saudi Arabia.

There are other considerations, but these are enough to dispense with the vast bestiary of supposedly utility-boosting sorts of wrongdoing. Arithmetical utilitarianism does say you should not try to become a crook. But unstable or vicious people (see the Caplan Leninist link) sometimes do like to take the idea of "the end justifies the means" as an excuse to go commit crimes without even trying to work out how the means are related to the end, and to alternatives.

Disclaimer: I do not value total welfare to the exclusion of other ethical and personal concerns. My moral feelings oppose deontological nastiness aside from aggregate welfare. But I am tired of straw-manning "estimating consequences" and "utilitarian math" by giving examples where these aren't used and would have prevented the evil conclusion supposedly attributed to them.

Comment author: cousin_it 18 July 2012 10:39:11AM *  1 point [-]

I'm confused. Your comment paints a picture of a super-efficient police force that infiltrates criminal groups long before they act. But the Internet seems to say that many gangs in the US operate openly for years, control whole neighborhoods, and have their own Wikipedia pages...

Comment author: ciphergoth 18 July 2012 12:55:30PM 4 points [-]

The gangs do well, and the rare criminals who become successful gang leaders may sometimes do well, but does the average gangster do well?

Comment author: CarlShulman 18 July 2012 04:54:52PM 3 points [-]
  • Gang membership still doesn't pay relative to regular jobs
  • The police largely know who is in the gangs, and can crack down if this becomes a higher priority
  • Terrorism is such a priority, to a degree way out of line with the average historical damage, because of 9/11; many have critiqued the diversion of law enforcement resources to terrorism
  • Such levels of gang control are concentrated in poor areas with less police funding, and areas where the police are estranged from the populace, limiting police activity.
  • Gang violence is heavily directed at other criminal gangs, reducing the enthusiasm of law enforcement, relative to more photogenic victims
Comment author: Decius 18 July 2012 06:36:44PM 0 points [-]

The other side is that robbing banks at gunpoint isn't the most effective way to redistribute wealth from those who have it to those to whom it should go.

I suspect that the most efficient way to do that is government seizure: declare that the privately held assets of the bank now belong to the charities. That doesn't work, because the money isn't value, it's a signifier of value, and rewriting the map does not change the territory. If money is forcibly redistributed too much, it loses too much value, and the only way to enforce the tax collection is by the threat of prison and execution - but the jailors and executioners can only be paid by the taxes. Effectively, robbing banks to give the money to charity harms everyone significantly, and fails to be better than doing nothing.

Comment author: wedrifid 29 June 2012 07:40:10AM 2 points [-]

It may have been better if CarlShulman used a different word - perhaps 'Evil' - to represent the 'ethical injunctions' idea. That seems to better represent the whole "deliberately subvert consequentialist reasoning in certain areas due to acknowledgement of corrupted and bounded hardware". 'Weird' seems to be exactly the sort of thing Eliezer might advocate. For example "make yourself into a corpsicle" and "donate to SingInst".

Comment author: David_Gerard 29 June 2012 08:02:27AM 0 points [-]

But, of course, "weird" versus "evil" is not even broadly agreed upon.

And "weird" includes many things Eliezer advocates, but I would be very surprised if it did not include things that Eliezer most certainly would not advocate.

Comment author: wedrifid 29 June 2012 02:10:30PM 2 points [-]

And "weird" includes many things Eliezer advocates, but I would be very surprised if it did not include things that Eliezer most certainly would not advocate.

Of course it does. For example: dressing up as a penguin and beating people to death with a live fish. But that's largely irrelevant. Rejecting 'weird' as the class of things that must never be done is not the same thing as saying that all things in that class must be done. Instead, weirdness is just ignored.

Comment author: Dolores1984 29 June 2012 07:41:50AM -1 points [-]

I've always felt that post was very suspect. Because, if you do the utilitarian math, robbing banks and giving them to charity is still a good deal, even if there's a very low chance of it working. Your own welfare simply doesn't play a factor, given the size of the variables you're playing with. It seems to me that there is a deeper moral reason not to murder organ donors or steal food for the hungry than 'it might end poorly for you.'

Comment author: CarlShulman 29 June 2012 07:54:57AM *  11 points [-]

Because, if you do the utilitarian math, robbing banks and giving them to charity is still a good deal

Bank robbery is actually unprofitable. Even setting aside reputation (personal and for one's ethos), "what if others reasoned similarly," the negative consequences of the robbery, and so forth you'd generate more expected income working an honest job. This isn't a coincidence. Bank robbery hurts banks, insurers, and ultimately bank customers, and so they are willing to pay to make it unprofitable.

According to a study by British researchers Barry Reilly, Neil Rickman and Robert Witt written up in this month's issue of the journal Significance, the average take from a U.S. bank robbery is $4,330. To put that in perspective, PayScale.com says bank tellers can earn as much as $28,205 annually. So, a bank robber would have to knock over more than six banks, facing increasing risk with each robbery, in a year to match the salary of the tellers he's holding up.
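The Fermi arithmetic behind the quote is simple enough to reproduce (the dollar figures are from the quoted study; the break-even framing is just an illustration):

```python
# Figures from the quoted Reilly/Rickman/Witt write-up.
avg_take = 4_330        # average take per U.S. bank robbery, USD
teller_salary = 28_205  # upper-end annual bank teller salary, USD

# Robberies needed per year just to match the teller's salary,
# before accounting for the rising risk of arrest with each job.
break_even = teller_salary / avg_take
print(round(break_even, 1))  # ≈ 6.5 robberies, i.e. "more than six banks"
```

And that break-even point ignores jail time and lost future employability, which push the expected value of robbery further below the honest job.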

Comment author: Dolores1984 29 June 2012 08:33:17AM 0 points [-]

That was a somewhat lazy example, I admit, but consider the most inconvenient possible world. Let's say you could expect to take a great deal more from a bank robbery. Would it then be valid utilitarian ethics to rob (indirectly) from the rich (us) to give to the poor?

Comment author: CarlShulman 29 June 2012 09:25:37AM *  7 points [-]

My whole point in the comments on this post has been that it's a pernicious practice to use such false examples. They leave erroneous impressions and associations. A world where bank-robbery is super-profitable, so profitable as to outweigh the effects of reputation and the like, is not very coherent.

A better example would be something like: "would utilitarians support raising taxes to fund malaria eradication," or "would a utilitarian who somehow inherited swoopo.com (a dollar auction site) shut down the site or use the revenue to save kids from malaria" or "if a utilitarian inherited the throne in a monarchy like Oman (without the consent of the people) would he spend tax revenues on international good causes or return them to the taxpayers?"

Comment author: MarkusRamikin 29 June 2012 07:45:23AM *  6 points [-]

if you do the utilitarian math, robbing banks and giving them to charity is still a good deal

Only if you're bad at math. Banks aren't just piggybanks to smash; they perform a useful function in the economy, and disrupting it has consequences.

Of course I prefer to defeat bad utilitarian math with better utilitarian math rather than with ethical injunctions. But hey, that's the woe of bounded reason, even without going into the whole corrupted hardware problem: your model is only so good, and heuristics that serve as warning signals have their place.

Comment author: Lukas_Gloor 26 June 2012 01:33:34PM 0 points [-]

Yes. The error is that humans aren't good at utilitarianism.

Why would that be an error? It's not a requirement for an ethical theory that Homo sapiens must be good at it. If we notice that humans are bad at it, maybe we should make AI or posthumans that are better at it, if we truly view this as the best ethical theory. Besides, if the outcome of people following utilitarianism is really that bad, then utilitarianism would demand (it gets meta now) that people should follow some other theory that overall has better outcomes (see also Parfit's Reasons and Persons). Another solution is Hare's proposed "Two-Level Utilitarianism". From Wikipedia:

Hare proposed that on a day to day basis, one should think and act like a rule utilitarian and follow a set of intuitive prima facie rules, in order to avoid human error and bias influencing one's decision-making, and thus avoiding the problems that affected act utilitarianism.

Comment author: David_Gerard 26 June 2012 01:38:05PM *  1 point [-]

The error is that it's humans who are attempting to implement the utilitarianism. I'm not talking about hypothetical non-human intelligences, and I don't think they were implied in the context.

Comment author: private_messaging 27 June 2012 08:21:59AM *  2 points [-]

I don't think hypothetical superhumans would be dramatically different in their ability to employ predictive models under uncertainty. If you increase power so that it is to mankind as mankind is to one amoeba, you only double anything that is fundamentally logarithmic. While in many important cases there are faster approximations, it's magical thinking to expect them everywhere; and there are problems where the errors inherently grow exponentially with time even if the model is magically perfect (butterfly effect). Plus, of course, models of other intelligences rapidly get unethical as you try to improve fidelity (if the oracle is emulating people and putting them through the torture and dust-speck experiences to compare values).

Comment author: fubarobfusco 26 June 2012 07:25:38PM 0 points [-]

See also Ends Don't Justify Means (Among Humans): having non-consequentialist rules (e.g. "Thou shalt not murder, even if it seems like a good idea") can be consequentially desirable since we're not capable of being ideal consequentialists.

Comment author: David_Gerard 26 June 2012 09:37:54PM *  7 points [-]

Oh, indeed. But when you've repeatedly emphasised "shut up and multiply", tacking "btw don't do anything weird" on the end strikes me as susceptible to your readers not heeding it, particularly when they really need to.

Comment author: private_messaging 26 June 2012 11:48:44AM *  0 points [-]

Well, those examples would have a lot of "okay we can't calculate utility here, so we'll use a principle" and far less faith in direct utilitarianism.

With the torture and dust specks, see, it arrives at a counterintuitive conclusion, but it is not proof-grade reasoning by any means. Who knows, maybe the correct algorithm for evaluating torture vs dust specks must have BusyBeaver(10) for the torture, and BusyBeaver(9) for the dust specks, or something equally outrageously huge (after all, thought, which is being screwed with by torture, is Turing-complete). The 3^^^3 is not a very big number. There are numbers which are big like you wouldn't believe.

edit: also, I think even vastly superhuman entities wouldn't be very good at consequence evaluation, especially from an uncertain start state. In any case, some sort of morality oracle would have to be able to, at the very least, take in the full specs of a human brain and then spit out the understanding of how to trade off the extreme pain of that individual against a dust speck for that individual (a task which may well end up requiring ultra-long computations, BusyBeaver(1E10) style. Forget the puny up-arrow). That's an enormously huge problem which the torture-choosers obviously a: haven't done and b: didn't even comprehend that something like this would be needed. Which brings us to the final point: the utilitarians are people who haven't the slightest clue what it might take to make a utilitarian decision, but are unaware of that deficiency. edit: and also, I would likely take a 1/3^^^3 chance of torture over a dust speck. Why? Because a dust speck may result in an accident leading to decades of torturous existence. The dust speck's own value is still not comparable; it only bothers me because it creates the risk.

edit: note, the Busy Beaver reference is just an example. Before you can operate additively on dust specks and pain, and start doing some utilitarian math there, you have to at least understand how it is that an algorithm can feel pain, and what pain is, exactly, in reductionist terms.

Comment author: army1987 28 June 2012 12:39:33PM 3 points [-]

and also, I would likely take 1/3^^^3 chance of torture over a dust speck. Why? Because dust speck may result in an accident leading up to decades of torturous existence

IIRC, in the original torture vs specks post EY specified that none of the dust specks would have any long-term consequence.

Comment author: private_messaging 28 June 2012 12:58:24PM 0 points [-]

I know. I just wanted to point out where the personal preference (easily demonstrated when people e.g. neglect to take inconvenient safety measures) for a small chance of torture over a definite dust speck comes from.

Comment author: Kaj_Sotala 25 June 2012 10:56:54PM 3 points [-]

This would deserve to be on the front page.

Comment author: army1987 26 June 2012 03:08:42AM *  4 points [-]

I agree.

ETA: Also, I expected a post with “(small)” in its title to be much shorter. :-)

Comment author: Stuart_Armstrong 26 June 2012 01:36:25PM *  1 point [-]

Well, it did start shorter, then more details just added themselves. Nothing to do with me! :-)

Comment author: Stuart_Armstrong 26 June 2012 12:30:39PM *  1 point [-]

Cheers, will move it.

Comment author: shminux 25 June 2012 05:08:47PM 3 points [-]

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness. In fact if one can kill a billion people to create a billion and one, one is morally compelled to do so.

I dare say that no self-professed "total utilitarian" actually alieves this.

Comment author: Lukas_Gloor 25 June 2012 08:02:38PM *  3 points [-]

I know total utilitarians who'd have no problem with that. Imagine simulated minds instead of carbon-based ones. If you can just imagine shutting one simulation off and turning on another one, this can eliminate some of our intuitive aversions to killing and maybe it will make the conclusion less counterintuitive. Personally I'm not a total utilitarian, but I don't think that's a particularly problematic aspect of it.

My problem with total hedonistic utilitarianism is the following: Imagine a planet full of beings living in terrible suffering. You have the choice to either euthanize them all (or just make them happy), or let them go on living forever, while also creating a sufficiently huge number of beings with lives barely worth living somewhere else. Now that I find unacceptable. I don't think you do anything good by bringing a happy being into existence.

Comment author: Dolores1984 26 June 2012 11:56:46PM 3 points [-]

If you can just imagine shutting one simulation off and turning on another one, this can eliminate some of our intuitive aversions to killing and maybe it will make the conclusion less counterintuitive. Personally I'm not a total utilitarian, but I don't think that's a particularly problematic aspect of it.

As someone who plans on uploading eventually, if the technology comes around... no. Still feels like murder.

Comment author: Will_Sawin 26 June 2012 10:16:18PM 2 points [-]

This is problematic. If bringing a happy being into existence doesn't do anything good, and bringing a neutral being into existence doesn't do anything bad, what do you do when you switch a planned neutral being for a planned happy being? For instance, you set aside some money to fund your unborn child's education at the College of Actually Useful Skills.

Comment author: Lukas_Gloor 26 June 2012 10:36:03PM *  0 points [-]

Good catch, I'm well aware of that. I didn't say that I think bringing a neutral being into existence is neutral. If the neutral being's life contains suffering, then the suffering counts negatively. Prior-existence views seem not to work without the inconsistency you pointed out. The only consistent alternative to total utilitarianism is, as I see it currently, negative utilitarianism, which has its own repugnant conclusions (e.g. anti-natalism), but for several reasons I find those easier to accept.

Comment author: Stuart_Armstrong 27 June 2012 08:52:28AM 1 point [-]

The only consistent alternative to total utiltiarianism is, as I see it currently, negative utilitarianism

As I said, any preferences that can be cast into utility function form are consistent. You seem to be adding extra requirements for this "consistency".

Comment author: Lukas_Gloor 27 June 2012 11:57:18AM *  -1 points [-]

I should qualify my statement. I was talking only about the common varieties of utilitarianism, and I may well have omitted consistent variants that are unpopular or weird (e.g. something like negative average preference-utilitarianism). Basically my point was that "hybrid" views like prior-existence (or "critical level" negative utilitarianism) run into contradictions. Most forms of average utilitarianism aren't contradictory, but they imply an obvious absurdity: a world with one being in maximum suffering would be [edit:] worse than a world with a billion beings in suffering that's just slightly less awful.

Comment author: APMason 27 June 2012 01:07:58PM 1 point [-]

That last sentence didn't make sense to me when I first looked at this. Think you must mean "worse", not "better".

Comment author: Lukas_Gloor 27 June 2012 02:11:47PM -1 points [-]

Indeed, thanks.

Comment author: Stuart_Armstrong 27 June 2012 12:28:29PM 1 point [-]

I'm still vague on what you mean by "contradictions".

Comment author: Lukas_Gloor 27 June 2012 02:10:10PM 0 points [-]

Not in the formal sense. I meant, for instance, what Will_Sawin pointed out above: a neutral life (a lot of suffering and a lot of happiness) being equally worthy of creation as a happy one (mainly just happiness, very little suffering). Or for "critical levels" (which also relates to the infamous dust specks), see section VI of this paper, where you get different results depending on how you start aggregating. And Peter Singer's prior-existence view seems to contain a "contradiction" (maybe "absurdity" is better) as well, having to do with replaceability, but that would take me a while to explain. It's not quite a contradiction in the sense of the theory stating "do X and not-X", but it's obvious enough that something doesn't add up. I hope that led to some clarification; sorry for my terminology.

Comment author: Will_Sawin 26 June 2012 10:38:55PM 1 point [-]

Ah, I see. Anti-natalism is certainly consistent, though I find it even more repugnant.

Comment author: jkaufman 26 June 2012 03:06:48AM 0 points [-]

Assuming perfection in the methods, ending N lives and replacing them with N+1 equally happy lives doesn't bother me. Death isn't positive or negative except inasmuch as it removes the chance of future joy/suffering for the one killed and saddens those left behind.

With physical humans you won't have perfect methods and any attempt to apply this will end in tragedy. But with AIs (emulated brains or fully artificial) it might well apply.

Comment author: private_messaging 28 June 2012 10:40:41AM *  2 points [-]

A more general problem with utilitarianisms including those that evade the critique in that article:

Suppose we have a computer running a brain sim (along with a VR environment). The brain sim works as follows: given the current state, the next state is calculated (using multiple CPUs in parallel); the current state is read-only, the next state is write-only. Think arrays of synaptic values. After all of the next state is calculated, the arrays are switched and the old state data is overwritten. This is a reductionist model of 'living' that is rather easy to think about. Suppose that this being is reasonably happy.
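The double-buffered update described here can be sketched in a few lines; the particular update rule (a clipped weighted sum) is a stand-in chosen purely for illustration:

```python
def step(state, weights):
    # The next state is computed only from the read-only current state;
    # toy rule: weighted sum of current values, clipped to [-1, 1].
    return [max(-1.0, min(1.0, sum(w * s for w, s in zip(row, state))))
            for row in weights]

state = [1.0, 0.0]                     # current state (read-only during a step)
weights = [[0.5, 0.0], [0.0, 0.5]]
for _ in range(2):
    next_state = step(state, weights)  # written into a separate, write-only buffer
    state = next_state                 # buffers swapped: the old state is discarded
```

Every pass through the loop destroys one complete state description to make room for its successor, which is exactly the step the comment asks a utilitarian to assign value to.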

We really want to sacrifice the old state for the sake of the new state. If we are to do so based on maximizing utility (rather than seeing the update as a virtue in its own right), the utility of the new state data has to be greater than the utility of the current state data. The utility has to keep rising with each simulator step. That's clearly not what anyone expects the utility to do. And it clearly creates a lot of problems; e.g. when you have multiple brain sims, face a risk of hardware failure, and may want to erase some sim to use the freed-up memory as a backup for some much older sim (whose utility grew over time to a larger value).

I'm very unconvinced that there even exists any 'utilitarian' solution here. If you want to maximize some metric over experience-moments that ever happen, then you need to keep track of the experience-moments that have already happened, to avoid redoing them (you don't want to be looping sims over some happy moment). And it is still entirely immoral, because you are going to want to destroy everything and create utilitronium.

Comment author: torekp 30 June 2012 12:03:32AM 3 points [-]

Why assume that utility is a function of individual states in this model, rather than processes? Can't a utilitarian deny that instantaneous states, considered apart from context, have any utility?

Comment author: private_messaging 30 June 2012 06:09:47AM *  2 points [-]

What are "processes"? What about not switching the state data in the above example? (You keep recalculating the same state from the previous state; if it is the calculation of the next state that is the process, then the process is all right.)

Also, at that point you aren't rescuing utilitarianism, you're going to some sort of virtue ethics where particular changes are virtuous on their own.

Bottom line is, if you don't define what a process is, then you just plug in something undefined through which our intuitions can pour in and make it look all right, even if the concept is still fundamentally flawed.

We want to overwrite the old state with the new state. But we would like to preserve the old state in a backup if we had unlimited memory. It thus follows that there is a tradeoff between the worth of the old state, the worth of the new state, and the cost of a backup. You can proclaim that instantaneous states considered apart from context don't have any utility. Okay, you have whatever context you want; now what are the utilities of the states and the backup, so that we can decide whether to do the backup? How often to do the backup? How to decide on the optimal clock rate? Etc.

Comment author: torekp 01 July 2012 03:20:07PM 1 point [-]

A process, at a minimum, takes some time (dt > 0). Calculating the next state from previous state would be a process. If you make backups, you could also make additional calculation processes working from those backed-up states. Does that count as "creating more people"? That's a disputed philosophy of mind question on which reasonable utilitarians might differ, just like anyone else. But if they do say that it creates more people, then we just have yet another weird population ethics question. No more and no less a problem for utilitarianism than the standard population ethics questions, as far as I can see. Nothing follows about each individual's life having to have ever-increasing utility lest putting that person in stasis be considered better.

Comment author: private_messaging 29 June 2012 05:39:15PM *  1 point [-]

I actually would be very curious about any ideas as to how 'utilitarianism' could be rescued from this. Any ideas?

I don't believe direct utilitarianism works as a foundation. Declaring that intelligence is about maximizing 'utility' just trades one thing (intelligence) that has not been reduced to elementary operations, but which we at least have good reasons to believe should be reducible (we are intelligent, and the laws of physics are, in the relevant approximation, computable), for something ('utility') that not only hasn't been shown to be reducible, but for which we have no good reason to think it is reducible or works in reductionist models (observe how there's suddenly a problem with the utility of life once I consider a mind upload simulated in a very straightforward way; observe how the number of paperclips in the universe is impossible or incredibly difficult to define as a mathematical function).

edit: Note: the model-based utility-based agent does not have a real-world utility function, and as such, no matter how awesomely powerful the solver it uses to find the maxima of mathematical functions, it won't ever care if its output gets disconnected from the actuators, unless such a condition was explicitly included in the model; furthermore, it will break itself if the model includes itself and it is allowed to modify the model, once again no matter how powerful its solver is. The utility is defined within a very specific non-reductionist model where e.g. a paperclip is a high-level object, and 'improving' the model (e.g. finding out that a paperclip is in fact made of atoms) breaks the utility measurement (it was never defined how to recognize when those atoms/quarks/whatever novel physics the intelligence came up with constitute a paperclip). This is not a deficiency when it comes to solving practical problems other than 'how do we destroy mankind by accident'.

Comment author: timtyler 07 July 2012 10:02:38AM 1 point [-]

Why then is it so popular? Well, one reason is that there are models that make use of something like total utilitarianism to great effect. Classical economic theory, for instance, models everyone as perfectly rational expected utility maximisers.

Surely that is not the reason. Firstly, utilitarianism is not that popular. My theory about why it has any adherents at all is that it is used for signalling purposes. One use of moral systems is to broadcast what a nice person you are. Utilitarianism is a super-unselfish moral system, so those looking for a niceness superstimulus are attracted to it. I think this pretty neatly explains utilitarianism's demographics.

Comment author: hairyfigment 26 June 2012 08:35:06PM 1 point [-]

I think I agree with your conclusion. But this:

to increase utility we should simply kill off all the rich, and let the poor multiply to take their place (continually bumping off any of the poor that gets too rich).

requires you to assume that the US or "the rich" have no relevant chance of producing vastly happier people in the future. This seems stronger than denying the singularity as such. And it makes targeted killing feel much more attractive to this misanthrope.

Comment author: Desrtopa 26 June 2012 06:24:41AM *  1 point [-]

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness. In fact if one can kill a billion people to create a billion and one, one is morally compelled to do so. And this is true for real people, not just thought experiment people - living people with dreams, aspirations, grudges and annoying or endearing quirks.

Keep in mind that the people being brought into existence will be equally real people, with dreams, aspirations, grudges, and annoying or endearing quirks. If the people being killed had any more of what you value overall, then it wouldn't be a utility neutral act.

Imagine that a billion people are annihilated from existence and replaced with exact copies who're indistinguishable in any way. Don't judge a person's plan to execute this (that could entail some sort of mistake); suppose that this simply happens, so that we must judge it purely by its results. Do you think that this would be a bad thing?

If not, then presumably it's not the destruction and replacement you're objecting to in and of itself; you're implicitly assuming a higher utility value for the people who're destroyed than for those who're created, or some chance of an outcome other than perfect replacement of all the people with people of equal utility.

Comment author: private_messaging 25 June 2012 11:30:18PM *  1 point [-]

I like that article. I wrote something on another problem with utilitarianism.

Also, by the way, regarding the use of the name of Bayes: you really should thoroughly understand this paper, and also get some practice solving belief propagation approximately on not-so-small networks full of loops and cycles (or any roughly isomorphic problem), to form an opinion on self-described Bayesianists.

Comment author: orthonormal 28 June 2012 11:28:39PM 1 point [-]

A population of TDT agents with different mostly-selfish preferences should end up with actions that closely resemble total utilitarianism for a fixed population, but oppose the adding of people at the subsistence level followed by major redistribution. (Or so it seems to me. And don't ask me what UDT would do.)

Comment author: Mass_Driver 26 June 2012 11:53:31PM *  1 point [-]

It's a good and thoughtful post.

Going through the iteration, there will come a point when the human world is going to lose its last anime, its last opera, its last copy of the Lord of the Rings, its last mathematics, its last online discussion board, its last football game - anything that might cause more-than-appropriate enjoyment. At that stage, would you be entirely sure that the loss was worthwhile, in exchange of a weakly defined "more equal" society?

I wonder if it makes sense to model a separate variable in the global utility function for "culture." In other words, I think the value I place on a hypothetical society runs something like sigma[U(x)] + U(c), where x is each individual person's individual utility, and c is the overall cultural level.

A society where a million people each enjoy reading the Lord of the Rings but there are no other books would have high sigma[U(x)] and low U(c); a society where a hundred people each enjoy reading a unique book would have low sigma[U(x)] but high U(c).

That would help model the intuition that culture, even in the abstract, is worth trading off against individual happiness. I think I would prefer a Universe in which the Lord of the Rings was encoded on a durable piece of stone but which otherwise had nothing else in it to a Universe in which there was a thriving colony of a few hundred cells of plankton but otherwise nothing else in it, even if there were nobody around to read the stone. Many economists would call that irrational -- but like the OP, I reject the premise that my individual utility function for the state of the world has to break down into other people's individual welfare.

Comment author: Nornagest 27 June 2012 12:25:09AM *  2 points [-]

I'll accept the intuition, but culture seems even harder to quantify than individual welfare -- and the latter isn't exactly easy. I'm not sure what we should be summing over even in principle to arrive at a function for cultural utility, and I'm definitely not sure if it's separable from individual welfare.

One approach might be to treat cultural artifacts as fractions of identity, an encoding of their creators' thoughts waiting to be run on new hardware. Individually they'd probably have to be considered subsapient (it's hard to imagine any transformation that could produce a thinking being when applied to Lord of the Rings), but they do have the unique quality of being transmissible. That seems to imply a complicated value function based partly on population: a populous world containing Lord of the Rings without its author is probably enriched more than one containing a counterfactual J.R.R. Tolkien that never published a word. I'm not convinced that this added value need be positive, either: consider a world containing one of H.P. Lovecraft's imagined pieces of sanity-destroying literature. Or your own least favorite piece of real-life media, if you're feeling cheeky.

Comment author: Lukas_Gloor 27 June 2012 05:35:09PM *  1 point [-]

How about a universe with one planet full of inanimate cultural artifacts of "great artistic value", and, on another planet that's forever unreachable, a few creatures in extreme suffering? If you make the cultural value of the artifact planet high enough, it would seem to justify the suffering on the other planet, and you'd then have to prefer this to an empty universe, or one with insentient plankton. But isn't that absurd? Why should creatures suffer through lives not worth living just because somewhere far away there are rocks with fancy symbols on them?

Comment author: Mass_Driver 28 June 2012 01:00:49AM 1 point [-]

Because I like rocks with fancy symbols on them?

I'm uncertain about this; maybe sentient experiences are so sacred that they should be lexically privileged over other things that are desirable or undesirable about a Universe.

But, basically, I don't have any good reason to prefer that you be happy vs. unhappy -- I just note that I reliably get happy when I see happy humans and/or lizards and/or begonias and/or androids, and I reliably get unhappy when I see unhappy things, so I prefer to fill Universes with happy things, all else being equal.

Similarly, I feel happy when I see intricate and beautiful works of culture, and unhappy when I read Twilight. It feels like the same kind of happy as the kind of happy I get from seeing happy people. In both cases, all else being equal, I want to add more of it to the Universe.

Am I missing something? What's the weakest part of this argument?

Comment author: TheOtherDave 28 June 2012 01:21:43AM 2 points [-]

So, now I'm curious... if tomorrow you discovered some new thing X you'd never previously experienced, and it turned out that seeing X made you feel happier than anything else (including seeing happy things and intricate works of culture), would you immediately prefer to fill Universes with X?

Comment author: Mass_Driver 28 June 2012 04:07:04AM 0 points [-]

I should clarify that by "fill" I don't mean "tile." I'm operating from the point of view where my species' preferences, let alone my preferences, fill less than 1 part in 100,000 of the resource-rich volume of known space, let alone theoretically available space. If that ever changed, I'd have to think carefully about what things were worth doing on a galactic scale. It's like the difference between decorating your bedroom and laying out the city streets for downtown -- if you like puce, that's a good enough reason to paint your bedroom puce, but you should probably think carefully before you go influencing large or public areas.

I would also wonder, if some new thing made me incredibly happy, whether perhaps it was designed to do that by someone or something that isn't very friendly toward me. I would suspect a trap. I'd want to take appropriate precautions to rule out that possibility.

With those two disclaimers, though, yes. If I discovered fnord tomorrow and fnord made me indescribably happy, then I'd suddenly want to put a few billion fnords in the Sirius Sector.

Comment author: TheOtherDave 28 June 2012 01:34:10PM 2 points [-]

(nods) Makes sense.
If I offered to, and had the ability to, alter your brain so that something that already existed in vast quantities -- say, hydrogen atoms -- made you indescribably happy, and you had taken appropriate precautions to rule out the possibility that I was unfriendly towards you and that this was a trap, would you agree?

Comment author: Mass_Driver 29 June 2012 12:02:06AM 0 points [-]

Sure! That sounds great. Thank you. :-)

Comment author: Lukas_Gloor 28 June 2012 02:41:47PM 0 points [-]

I'm operating from the point of view where my species' preferences, let alone my preferences, fill less than 1 part in 100,000 of the resource-rich volume of known space, let alone theoretically available space.

Do you think the preferences of your species matter more than preferences of some other species, e.g. intelligent aliens? I think that couldn't be justified. I'm currently working on a LW article about that.

Comment author: Mass_Driver 29 June 2012 12:05:14AM 0 points [-]

I haven't thought much about it! I look forward to reading your article.

My point above was simply that even if my whole species acted like me, there would still be plenty of room left in the Universe for a diversity of goods. Barring a truly epic FOOM, the things humans do in the near future aren't going to directly starve other civilizations out of a chance to get the things they want. That makes me feel better about going after the things I want.

Comment author: Lukas_Gloor 28 June 2012 02:46:05AM 0 points [-]

I think it's a category error to see ethics as only being about what one likes (even if that involves some work getting rid of obvious contradictions). In such a case, doing ethics would just be descriptive; it would tell us nothing new, and the outcome would be whatever evolution arbitrarily equipped us with. Surely that's not satisfying! If evolution had equipped us with a strong preference to generate paperclips, should our ethicists then be debating how best to fill the universe with paperclips? Rather, we should be trying to come up with better reasons than mere intuitions and preferences arbitrarily shaped by blind evolution.

If there was no suffering and no happiness, I might agree with ethics just being about whatever you like, and I'd add that one might as well change what one likes and do whatever, since nothing then truly mattered. But it's a fact that suffering is intrinsically awful, in the only way something can be, for some first person point of view. Of pain, one can only want one thing: That it stops. I know this about my pain as certainly as I know anything. And just because some other being's pain is at another spatio-temporal location doesn't change that. If I have to find good reasons for the things I want to do in life, there's nothing that makes even remotely as much sense as trying to minimize suffering. Especially if you add that caring about my future suffering might not be more rational than caring about all future suffering, as some views on personal identity imply.

Comment author: TheOtherDave 28 June 2012 02:58:31AM 1 point [-]

(shrug) I agree that suffering is bad.
It doesn't follow that the only thing that matters is reducing suffering.

Comment author: Lukas_Gloor 28 June 2012 03:08:58AM 0 points [-]

But suffering is bad no matter your basic preference architecture. It takes the arbitrariness out of ethics when it's applicable to all that. Suffering is bad (for the first-person point of view experiencing it) in all hypothetical universes. Well, by definition. Culture isn't. Biological complexity isn't. Biodiversity isn't.

Even if it's not all that matters, it's a good place to start. And a good way to see whether something else really matters too is to look at whether you'd be willing to trade a huge amount of suffering for whatever else you consider to matter, all else being equal (as I did in the example about the planet full of artifacts).

Comment author: TheOtherDave 28 June 2012 03:50:15AM 2 points [-]

Yes, basically everyone agrees that suffering is bad, and reducing suffering is valuable. Agreed.

And as you say, for most people there are things that they'd accept an increase in suffering for, which suggests that there are also other valuable things in the world.

The idea of using suffering-reduction as a commensurable common currency for all other values is an intriguing one, though.

Comment author: Mass_Driver 28 June 2012 04:18:46AM 0 points [-]

In such a case, doing ethics would just be descriptive, it would tell us nothing new, and the outcome would be whatever evolution arbitrarily equipped us with

I used to worry about that a lot, and then AndrewCritch explained at minicamp that the statement "I should do X" can mean "I want to want to do X." In other words, I currently prefer to eat industrially raised chicken sometimes. It is a cold hard fact that I will frequently go to a restaurant that primarily serves torture-products, give them some money so that they can torture some more chickens, and then put the dead tortured chicken in my mouth. I wish I didn't prefer to do that. I want to eat Subway footlongs, but I shouldn't eat Subway footlongs. I aspire not to want to eat them in the future.

Also check out the Sequences article "Thou Art Godshatter." Basically, we want any number of things that have only the most tenuous ties to evolutionary drives. Evolution may have equipped me with an interest in breasts, but it surely is indifferent to whether the lace on a girlfriend's bra is dyed aquamarine and woven into a series of cardioids or dyed magenta and woven into a series of sinusoidal spirals -- whereas I have a distinct preference. Eliezer explains it better than I do.

I'm not sure "intrinsically awful" means anything interesting. I mean, if you define suffering as an experience E had by person P such that P finds E awful, then, sure, suffering is intrinsically awful. But if you don't define suffering that way, then there are at least some beings that won't find a given E awful.

Comment author: prase 25 June 2012 06:32:43PM 1 point [-]

Only a slightly relevant question which nevertheless I haven't yet seen addressed: if a utilitarian desires to maximise other people's utilities and the other people are utilitarians themselves, also deriving their utility from the utilities of others (the original utilitarian included), doesn't that make utilitarianism impossible to define? The consensus seems to be that one can't take one's own mental states as an argument of one's own utility function. But utilitarians rarely object to plugging others' mental states into their utility functions, so the danger of circularity isn't avoided. Is there some clever solution to this?

Comment author: mwengler 05 July 2012 03:28:59PM 0 points [-]

I think you are on to something brilliant here. The thing that is new to me in your question is the recursive aspect of utilitarianism. If a theory of morality says the moral thing to do is to maximize utility, then clearly maximizing utility is itself a thing that has utility.

From here, in an engineering sense, you'd have at least two different places you could go. A sort of naive place to go would be to try to have each person maximize total utility independently of what others are doing, noting that other people's utility summed up is much larger than one's own. Then, to a very large extent, your behavior will be driven by maximizing other people's utility. In a naive design involving, say, 100 utilitarians, one would be "over-driving" the system by ~100x, if each utilitarian was separately calculating everybody else's utility and trying to maximize it. In some sense, it would be like a feedback system with way too much gain: 99 people all trying to maximize your utility.

An alternative place to go would be to say utility is a meta-ethical consideration, that an ethical system should have the property that it maximizes total utility. But then from engineering considerations you would expect 1) you would have lots of different rule systems that would come close to maximizing utility and 2) among the simplest and most effective would be to have each agent maximizing its own utility under the constraint of rules which were designed to get rid of anti-synergistic effects and to enhance synergistic effects. So you would expect contract law, anti-fraud law, laws against bad externalities, laws requiring participation in good externalities. But in terms of "feedback," each agent in the system would be actively adjusting to maximize its own utility within the constraints of the rules.

This might be called rule-utilitarianism, but really I think it is a hybrid of rule utilitarianism and justified selfishness (Rand's egoism? Economics "homo economicus" rational utility maximizer?). It is a hybrid because you don't ONLY have rules which maximize utility, and you don't ONLY have maximizing individual utility as the moral rule.

Comment author: novalis 25 June 2012 06:39:37PM 0 points [-]

No, because utilitarianism does not specify a utilitarian's desires; it specifies what they consider moral. There are lots of things we desire to do that aren't moral, and that we choose not to do because they are not moral.

Comment author: prase 25 June 2012 07:49:54PM 3 points [-]

I believe this doesn't answer my question; I will reformulate the problem in order to remove potentially problematic words and make it more specific:

Let the world contain at least two persons, P1 and P2 with utility functions U1 and U2. Both are traditional utilitarians: they value the happiness of others. Assume that U1 is a sum of two terms: H2 + u1(X), where H2 is some measure of happiness of P2 and u1(X) represents P1's utility unrelated to P2's happiness, X being the state of the rest of the world; similarly U2 = H1 + u2(X). (H1 and H2 are monotonic functions of happiness but not necessarily linear - whatever that would even mean - so having U as a linear function of H is still quite general.)

Also, as for most people, the happiness of the model utilitarians is correlated with their utility. Let's again assume that the utilities decompose into sums of independent terms such that H1 = h1(U1) + w1(X), where w1 contains all non-utility sources of happiness and h1(.) is an increasing function; similarly for the second agent.

So we have:

  • U1 = h2(U2) + w2(X) + u1(X)
  • U2 = h1(U1) + w1(X) + u2(X)

Whether this system has a solution (for U1 and U2) depends on the details of h1, h2, u1, u2, w1, w2 and X. But my point is that the system of equations is a direct analogue of the forbidden

  • U = h(U) + u(X)

i.e. when one's utility function takes itself for an argument.
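Whether the "forbidden" self-referential equation has a solution is just a fixed-point question. A quick sketch (my own illustration; the choice of h and the numbers are invented): when h is a contraction, e.g. a scaled tanh, simple iteration finds the unique U satisfying U = h(U) + u(X).

```python
import math

def solve_self_referential(h, u_X, U=0.0, tol=1e-10, max_iter=1000):
    """Fixed-point iteration for U = h(U) + u(X).

    Converges whenever h is a contraction (|h'| < 1 everywhere),
    which is exactly the kind of condition that makes the
    self-referential utility well defined.
    """
    for _ in range(max_iter):
        U_next = h(U) + u_X
        if abs(U_next - U) < tol:
            return U_next
        U = U_next
    raise RuntimeError("no fixed point found within max_iter")

# Example: h(U) = 0.5 * tanh(U) is a contraction; take u(X) = 1.0
U = solve_self_referential(lambda v: 0.5 * math.tanh(v), 1.0)
```

The two-agent system above is the same computation run on a vector (U1, U2); the existence question is the same contraction condition on the pair (h1, h2).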

Comment author: endoself 25 June 2012 08:39:57PM 3 points [-]

Also, as for most people, the happiness of the model utilitarians is correlated with their utility.

This is untrue in general. I would prefer that someone who I am unaware of be happy, but it cannot make me happier since I am unaware of that person. In general, it is important to draw a distinction between the concept of a utility function, which describes decisions being made, and that of a hedonic function, which describes happiness, or, if you are not purely a hedonic utilitarian, whatever functions describe other things that are mentioned in, but not identical to, your utility function.

Comment author: prase 25 June 2012 11:38:24PM 1 point [-]

Yes, I may not know the exact value of my utility, since I don't know the value of every argument it takes. And yes, there are consequently changes in utility which aren't accompanied by corresponding changes in happiness. But no, this doesn't mean that utility and happiness aren't correlated. Your comment would be a valid objection to the relevance of my original question only if happiness and utility were strictly isolated and independent of each other, which, for most people, isn't the case.

Also, this whole issue could be sidestepped if the utility function of the first agent had the utility of the second agent as an argument directly, without the intermediation of happiness. I am not sure, however, whether standard utilitarianism allows caring about other agents' utilities.

Comment author: mwengler 05 July 2012 03:34:19PM 0 points [-]

There may be many people whose utility you are not aware of, but there are also many people whose utility you are aware of, and whose utility you can affect with your actions. I think prase's points are quite interesting even just considering the people within your awareness / sphere of influence.

Comment author: endoself 06 July 2012 02:25:27AM 0 points [-]

I'm not sure exactly why prase disagrees with me - I can think of many mutually exclusive reasons that it would take a while to write out individually - but since two people have now responded I guess I should ask for clarification. Why is the scenario described impossible?

Comment author: novalis 26 June 2012 02:54:22AM -1 points [-]

Here's another way to look at it:

Imagine that everyone starts at time t1 with some level of utility, U[n]. Now, they form beliefs about the sum of everyone else's utility (at time t1). Then they update by adding some function of that summed (or averaged, whatever) utility to their own happiness. Let's assume that function is some variant of the sigmoid function; this is actually probably not too far off from reality. Now we know that the maximum happiness (from the utility of others) that a person can have is one (and the minimum is negative one). And assuming that most people's base level of happiness is somewhat larger than the effect of others' utility, this is going to be a reasonably stable system.

This is a much more reasonable model, since we live in a time-varying world, and our beliefs about that world change over time as we gain more information.
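A minimal simulation of this kind of model (the numbers, the tanh choice, and the averaging are my own assumptions, not the commenter's): each agent's utility is a private base level plus a bounded, sigmoid-like function of the average of the others' utilities. Because the coupling is bounded and nearly flat away from zero, the iteration settles quickly.

```python
import math

def simulate(base, coupling=1.0, steps=100):
    """Each round, agent i's utility = base[i] plus a bounded
    (tanh) function of the average of everyone else's utility."""
    n = len(base)
    u = list(base)
    for _ in range(steps):
        u = [base[i] + coupling * math.tanh(
                 sum(u[j] for j in range(n) if j != i) / (n - 1))
             for i in range(n)]
    return u
```

With `base = [2.0, 3.0, 1.5]` the result settles within one tanh-bound (coupling × 1) of each base level and stops changing - the "reasonably stable system" the model predicts.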

Comment author: prase 26 June 2012 04:39:23PM 1 point [-]

When information propagates fast relative to the rate of change of external conditions, the dynamic model converges to the stable point which would be the solution of the static model - are the models really different in any important aspect?

Instability is indeed eliminated by use of sigmoid functions, but then the utility gained from happiness (of others) is bounded. Bounded utility functions solve many problems, the "repugnant conclusion" of the OP included, but some prominent LWers object to their use, pointing out scope insensitivity. (I have personally no problems with bounded utilities.)

Comment author: novalis 26 June 2012 05:46:51PM -1 points [-]

Utility functions need not be bounded, so long as their contribution to happiness is bounded.

Comment author: Strange7 17 September 2012 10:32:48AM 0 points [-]

Another problem with the repugnant conclusion is economic: it assumes that the cost of creating and maintaining additional barely-worth-living people is negligibly small.

Comment author: Gust 19 July 2012 01:53:32AM 0 points [-]

And the sum itself is a huge problem. There is no natural scale on which to compare utility functions. Divide one utility function by a billion, multiply the other by eπ, and they are still perfectly valid utility functions. In a study group at the FHI, we've been looking at various ways of combining utility functions - equivalently, of doing interpersonal utility comparisons (IUC). Turns out it's very hard, there seems no natural way of doing this, and a lot has also been written about this, concluding little. Unless your theory comes with a particular IUC method, the only way of summing these utilities is to do an essentially arbitrary choice for each individual before summing. Thus standard total utilitarianism is an arbitrary sum of ill defined, non-natural objects.

This interests me. Do you have any literature I should read on this topic?

Comment author: timtyler 07 July 2012 10:09:23AM 0 points [-]

This one left me wondering - is "population ethics" any different from "politics"?

Comment author: Danfly 07 July 2012 11:08:14AM 0 points [-]

Interesting point, but I would say there are areas of politics that don't really come under "ethics". "What is currently the largest political party in the USA?" is a question about politics and demographics, but I wouldn't call it a question of population ethics. I'd say that you could probably fit anything from "population ethics" under the broad umbrella of "politics", though.

Comment author: timtyler 07 July 2012 09:57:41AM 0 points [-]

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness (or preference satisfaction or welfare).

Other members of society typically fail to approve of murder, and would apply sanctions to the utilitarian - probably hindering them in their pursuit of total utility. So, in practice, a human being pursuing total utilitarianism would simply not act in this way.

Comment author: drnickbone 27 June 2012 01:21:02PM *  0 points [-]

Good article! Here are a few related questions:

  1. The problem of comparing different people's utility functions applies to average utilitarianism as well, doesn't it? For instance if your utility function is U and my utility function is V, then the average could be (U + V)/2 : however utility functions can be rescaled by any positive linear transformation, so let's make mine 1000000 x V. Now the average is U/2 + 500000 x V, which seems totally fair doesn't it? Is the right solution here to assume that each person's utility has a "best possible" case, and a "worst possible" case, and to rescale, assigning 1 to each person's best case, and 0 to their worst? That works fine if people have bounded utility, which we apparently do (it's one reason we don't fall for Pascal's muggings).

  2. It's true that no-one optimises utility perfectly, but even animals, plants and bacteria have an identifiable utility function (inclusive fitness), which they optimise pretty well. Why shouldn't people? And, to first approximation, why wouldn't a human's utility function also be inclusive fitness? (We can add other approximations as necessary, e.g. some sort of fitness function for culture or memes.)

  3. Do you think utility functions should be defined over "worlds" or "states"? Decision theory only requires worlds, but consequentialism seems to require states. For instance if each world w consists of a sequence of states <s(t)> indexed by time t, then a consequentialist utility function applied to a whole world would look like U(w) = Sum d(t) x u(s(t)) where d(t) is the discount factor, and u is the utility function applied to states. Deontologists would have a completely different sort of U, but they are not immediately irrational because of that. (Seems they can still be consistent with formal decision theory.)

  4. Looking at your paper on Anthropic Decision Theory, what do you think will happen if we adopt a compromise utility function somewhere between average and total utility, much as you suggest? Is the result more like SIA or SSA? Does it contain some of the strengths of each while avoiding their weaknesses? (It strikes me that the result is more like SSA, since you are avoiding the "large" utilities from total utility dominating the calculation, but I haven't tried to do the math, and wondered if you already had...)

  5. Do you have views on "rule" versus "act" utilitarianism? It seems to me that advanced decision theories like TDT, UDT or ADT are already invoking a form of rule utilitarianism, right? Further that rule utilitarianism is a better "model" for our moral judgements than act utilitarianism.
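The rescaling suggested in point 1 is easy to sketch (hypothetical code of my own): map each person's worst possible case to 0 and best possible case to 1. Any positive linear rescaling of someone's raw utilities then leaves their normalized utilities, and hence any average over people, unchanged.

```python
def normalize(utilities, best, worst):
    """Rescale one person's utilities so that their worst possible
    outcome maps to 0 and their best possible outcome maps to 1."""
    span = best - worst
    return [(u - worst) / span for u in utilities]

# Multiplying someone's utility function by 1000000 no longer
# buys them extra weight in the average:
orig = normalize([0.0, 5.0, 10.0], best=10.0, worst=0.0)
scaled = normalize([0.0, 5e6, 1e7], best=1e7, worst=0.0)
```

Both calls return the same [0.0, 0.5, 1.0], which is exactly why this normalization defuses the "make mine 1000000 x V" move - though it only works if utilities really are bounded.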

Comment author: Pentashagon 26 June 2012 07:31:36PM *  -3 points [-]

In Austrian economics, using the framework of praxeology, the claim is made that preferences (the rough equivalent of utilities) cannot be mapped to cardinal values, but different states of the world are still well ordered by an individual's preferences, such that one world state can be said to be more or less desirable than another. This makes it impossible to numerically compare the preferences of two individuals except through the pricing/exchange mechanism of economics. E.g. would 1 billion happy people exchange their own death for the existence of 1 billion and 1 new happy people? To answer the question, simply ask them what they would do, or observe what they do in that situation, and that will reveal their preferences.

Taking a preference-based approach, consider the set of all individuals and the set of all world-states. Each individual has a well-ordered list of preferences of possible world states, with the only restriction that it be bounded from above by a maximal preference. In every world state the next world state is chosen by all individuals voting for the most preferable next reachable world state. In a majority voting system each individual votes for its maximally-preferred world state. In runoff and approval voting the first, second, third, etc. choices are the highest, next-highest, etc. ranked preferences for world states, respectively. Ethics thus reduces to the problem of fair voting.
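The "fair voting" reduction can be sketched with toy world-states (both the states and the ballots below are invented for illustration). Plurality counts only top choices; a Borda count, as a crude stand-in for the runoff/approval schemes mentioned, lets lower-ranked preferences matter too - but in neither case can one voter's intensity of preference outweigh everyone else's rankings.

```python
from collections import Counter

def plurality(ballots):
    """Each ballot ranks world-states best-first; plurality looks
    only at each voter's top choice."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda(ballots):
    """Borda count: a state ranked r-th (0-indexed) on a ballot of
    n states earns n - 1 - r points."""
    scores = Counter()
    for ballot in ballots:
        n = len(ballot)
        for rank, state in enumerate(ballot):
            scores[state] += n - 1 - rank
    return max(scores, key=scores.get)

# Three voters mildly prefer a world with dust specks over one with
# torture; the single would-be torture victim agrees, but even if they
# didn't, their one ballot couldn't outvote the rest:
ballots = [["specks", "torture"]] * 3 + [["torture", "specks"]]
```

Here both `plurality(ballots)` and `borda(ballots)` pick "specks", matching the claim that under ranked voting the dust-speck world wins however intensely any single voter feels.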

An obvious criticism of Austrian economics is that it simply describes the economy as the result of all individual actions (with which individuals reveal their true preferences, by definition) with no additional predictive power. I think that by contrasting the theoretical results of perfect ethical preference voting with the theoretical results of perfectly calculating a utilitarian theory, there may be some insight. The basic difference is that economics relies on pricing to build an economy but in ethics we can cheat and ask theoretical questions about all possible world states.

Potential or hypothetical individuals would have their own preferences for world-states but, as the article mentions, their preferences may not be compatible with the set of possible next world states that we are voting on. If those hypothetical individuals never have enough votes for possible next world states then they will never have any influence. Individuals currently sleeping, anesthetized, or frozen in liquid nitrogen have hypothetical preferences for future world states that may very well coincide with our preferences for future world states, and therefore they have a greater chance of existing as acting individuals in our future world. Ultimately, in any ethical theory, it is only our estimation of a hypothetical being's preferences that we can consider, so their preferences are subsumed into our own.

3^^^3 people will probably rank their preferences for world states with and without a single dust speck in their eye as nearly indistinguishable, but world states with torture are hopefully quite lower in rank than equivalent world states without torture. The one person whose torture depends on the vote may prefer world states with 3^^^3 dust specks far more than world states with 50 years of torture, but their vote clearly doesn't matter in any conceivable voting system. Nevertheless, so long as the existence of torture is more repugnant than a single dust speck, 3^^^3 people will vote to receive the dust speck instead of allow that individual to be tortured.

Populations of nearly any size will probably not vote to replace themselves with a different population (whether of humans, paperclips, or smily-faces).

There are still problems: Bacteria and parasites may deserve a vote. Weighting may fix that problem. Hated minorities are still at a disadvantage even in the fairest voting systems. On one hand if people are not personally inconvenienced by the actions of a hated minority they will probably prefer worlds where that minority is not tortured over worlds where they are tortured, simply because of their general aversion to torture. On the other hand a large number of voters in democratic countries have not kicked torturers out of political office. This is distressing because far fewer than 3^^^3 people have been affected by, say, Maher Arar. There is apparently a tendency in humans to have a preference for the brutal punishment of an assumed criminal even if there is only a tiny marginal chance of value to themselves. I think this is a failure of rationality and probably not a failure of any particular ethical system.

The primary difference between additive functions of individual utility and preference voting is that the effects that matter most to individuals have the largest influence on their preferences. Unlike with additive utilities, an individual cannot simultaneously assign maximal weight both to keeping a speck out of their own eye and to torturing another individual; one or the other must rank strictly higher. In approval or runoff voting, the preference for torturing another individual will fall behind a series of other, more pleasant preferences unless that individual actually had a major effect on the voter. In effect, everyone is forced to vote for what really matters to them instead of arbitrarily ruining another individual's life for no appreciable benefit. Utilitarianism could conceivably produce exactly the same ranking of values as preference voting (just enumerate all N preferences and assign them values of i/N from least to most preferred), but there is no guarantee that an individual asked to assign utilities would rank world states the same way as one expressing preferences.

It appears that Eliezer went down this road a ways in http://lesswrong.com/lw/rx/is_morality_preference/ and then went off in another direction before enumerating the idea of what would happen if everyone voted based on their preferences and then acted to achieve the winning world state instead of acting to achieve only their own maximally preferred world state.

Comment author: torekp 30 June 2012 01:18:06AM 0 points [-]

I have no interest in defending utilitarianism, but I do have an interest in a total welfare (yes I think such a concept can make sense) of sentient beings. The repugnance of the Repugnant Conclusion, I suggest, is a figment of your lack of imagination. When you imagine a universe with trillions of people whose lives are marginally worth living, you probably imagine people whose lives are a uniform grey, just barely closer to light than darkness. In other words, agonizingly boring lives. But this is unnecessary and prejudicial. Instead, imagine people with ups and downs like ours, but with a closer balance of ups and downs. Imagine rich cultures, intense personal relationships, exciting mathematical discoveries, etc., etc. - but perhaps more repression, more romantic breakups, more dead end derivations.

Perhaps there are values that are nonlocal, in the sense of not belonging to any one person and not being the sum of values belonging each to one person. And the Repugnant world you're imagining may lack those values. But that's a problem with Utilitarianism, not with Totality. In other words I suggest that insofar as moral value depends on how things go for individuals (considering individuals other than those to whom you have special obligations), it depends on the total rather than the average, or the pre-existing persons' welfare.

Why think so? Because having children normally isn't wrong, but having children when you know the child will only suffer horribly for a year and then die, is. Normal parents know, however, that there is a very slight chance of that type of horrible result. What justifies childbearing in the normal case? The obvious answer is the high probability that the child will lead a good life. Therefore, adding more good lives is a good-making feature. This doesn't show that adding more good lives is comparable to making the same number of pre-existing equally good lives twice as good - but I think that is the answer most coherent with the meager truth about personal identity.

Other comments: The "real world analogues" of the killing-and-replacing-people thought experiments turn out to be just more thought experiments. Not that there need be anything wrong with that, but the weirdness of the thought-experimental situation should be considered. If the intent is simply to show that utilitarianism faces a burden of argument in the face of counterintuitive results, it succeeds in that easy goal.

Interpersonal comparisons of utility suffer from the same difficulties, in principle, as intrapersonal comparisons. They're just a lot more intense in the former case. This applies to both preference and hedonics, and also to many more sophisticated evaluative schemes which may include both, plus more.