
Integral vs differential ethics, continued

6 Stuart_Armstrong 03 August 2015 01:25PM

I've talked earlier about integral and differential ethics, in the context of population ethics. The idea is that the argument for the repugnant conclusion (and its associate, the very repugnant conclusion) depends on a series of trillions of steps, each of which is intuitively acceptable (adding happy people, making happiness more equal), but which reach a conclusion that is intuitively bad - namely, that we can improve the world by creating trillions of people in torturous and unremitting agony, as long as we balance it out by creating enough happy people as well.

Differential reasoning accepts each step, and concludes that the repugnant conclusions are actually acceptable, because each step is sound. Integral reasoning accepts that the repugnant conclusion is repugnant, and concludes that some step along the way must therefore be rejected.

Notice that key word, "therefore". Some intermediate step is rejected, not for intrinsic reasons, but purely because of the consequence. There is nothing special about the step that is rejected; it's just a relatively arbitrary barrier to stop the process (compare with the paradox of the heap).

Indeed, things can go awry when people attempt to fix the repugnant conclusion (a conclusion they rejected through integral reasoning) using differential methods. Approaches like the "person-affecting view" have their own absurdities and paradoxes (it's ok to bring a baby into the world even if it will have a miserable life; we don't need to care about future generations if we randomise conceptions; etc.), and I would posit that this is because they are trying to fix global/integral issues using local/differential tools.

The relevance of this? It seems that integral tools might be better suited to dealing with the problem of bad convergence in AI. We could set up plausibly intuitive differential criteria (such as self-consistency), but institute integral criteria that can override these if they go too far. I think there may be some interesting ideas in that area. The cost is that integral ideas are generally seen as less elegant, or harder to justify.

Potential vs already existent people and aggregation

2 Stuart_Armstrong 04 December 2014 01:38PM

EDIT: the purpose of this post is simply to show that there is a difference between certain reasoning for already existing and potential people. I don't argue that aggregation is the only difference, nor (in this post) that total utilitarianism for potential people is wrong. Simply that the case for existing people is stronger than for potential people.

Consider the following choices:

  • You must choose between torturing someone for 50 years, or torturing 3^^^3 people for a millisecond each (yes, it's a more symmetric variant on the dust-specks vs torture problem).
  • You must choose between creating someone who will be tortured for 50 years, or creating 3^^^3 people who will each get tortured for a millisecond each.

Some people might feel that these two choices are the same. There are some key differences between them, however - and not only because the second choice seems more underspecified than the first. The difference is the effect of aggregation - of facing the same choice again and again and again. And again...

There are roughly 1.6 billion seconds in 50 years (hence 1.6 trillion milliseconds in 50 years). Assume a fixed population of 3^^^3 people, and assume that you were going to face the first choice 1.6 trillion times (in each case, the person to be tortured is assigned randomly and independently). Then choosing "50 years" each time results in 1.6 trillion people getting tortured for 50 years (the chance of the same person being chosen to be tortured twice is of the order of 50/3^^^3 - closer to zero than most people can imagine). Choosing "a millisecond" each time results in 3^^^3 people, each getting tortured for (slightly more than) 50 years.

The choice there is clear: pick "50 years". Now, you could argue that your decision should change based on how often you (or people like you) expect to face the same choice, and on the assumption of a fixed population of size 3^^^3, but there is a strong intuitive case to be made that the 50 years of torture is the way to go.
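The aggregation arithmetic can be sanity-checked in a few lines (a sketch only; 3^^^3 obviously can't be represented, so only the per-person totals are computed):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25
repetitions = 50 * SECONDS_PER_YEAR * 1000   # milliseconds in 50 years, ~1.6e12

# Choosing "50 years" every round: one fresh victim per round (collisions
# among 3^^^3 people are vanishingly unlikely), so ~1.6 trillion victims.
victims_50y = repetitions

# Choosing "a millisecond" every round: all 3^^^3 people get 1 ms per round,
# so each person accumulates `repetitions` milliseconds of torture in total:
per_person_years = repetitions / 1000 / SECONDS_PER_YEAR
print(per_person_years)   # 50.0 -- every one of the 3^^^3 people gets 50 years
```

The symmetry is exact by construction: the number of repetitions is defined as the number of milliseconds in 50 years.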

Compare with the second choice now. Choosing "50 years" 1.6 trillion times results in the creation of 1.6 trillion people who get tortured for 50 years. The "a millisecond" choice results in 1.6 trillion times 3^^^3 people being created, each tortured for a millisecond. Conditional on what the rest of these people's lives are like, many people (including me) would feel the "a millisecond" option is much better.

As far as I can tell (please do post suggestions), there is no way of aggregating impacts on potential people you are creating, in the same way that you can aggregate impacts on existing people (of course, you can first create potential people, then add impacts to them - or add impacts that will affect them when they get created - but this isn't the same thing). Thus the two situations seem justifiably different, and there is no strong reason to assign the intuitions of the first case to the second.

Integral versus differential ethics

9 Stuart_Armstrong 01 December 2014 06:04PM

In population ethics...

Most people start out believing that the following are true:

  1. That adding more happy lives is a net positive.
  2. That redistributing happiness more fairly is not a net negative.
  3. That the repugnant conclusion is indeed repugnant.

Some will baulk at the first statement on equality grounds, but most people should accept those three statements as presented. Then they find out about the mere addition paradox.

Someone who accepts the repugnant conclusion could then reason something like this:

Adding happy people and redistributing happiness fairly, if done many, many times, in the way described above, will result in a repugnant conclusion. Each step along the way seems solid, but the conclusion seems wrong. Therefore I will accept the repugnant conclusion, not on its own merits, but because each step is clearly intuitively correct.

Call this the "differential" (or local) way of reasoning about population ethics. As long as each small change seems intuitively an improvement, then the global change must also be one.

Adding happy people and redistributing happiness fairly, if done many, many times, in the way described above, will result in a repugnant conclusion. Each step along the way seems solid, but the conclusion seems wrong. Therefore I will reject (at least) one step, not on its own merits, but because the conclusion is clearly intuitively incorrect.

Call this the "integral" (or global) way of reasoning about population ethics. As long as the overall change seems intuitively a deterioration, then some of the small changes along the way must also be.

 

In general...

Now, I personally tend towards integral rather than differential reasoning on this particular topic. However, I want to make a more general point: philosophy may be over-dedicated to differential reasoning. Mainly because it's easy: you can take things apart, simplify them, abstract details away, and appeal to simple principles - and avoid many potential biases along the way.

But it's also a very destructive tool to use in areas where concepts are unclear and cannot easily be made clear. Take the statement "human life is valuable". This can be taken apart quite easily and critiqued from all directions, its lack of easily described meaning being its weakness. Nevertheless, integral reasoning is almost always applied: something called "human life" is taken to be "valuable", and many caveats and subdefinitions can be added to these terms without changing the fundamental (integral) acceptance of the statement. If we followed the differential approach, we might end up with a definition of "human life" as "energy exchange across a neurone cell membrane", or something equally ridiculous but much more rigorous.

Now, that example is a parody... but only because no-one sensible does that: we know that we'd lose too much value from that kind of definition. We want to build an extensive/integral definition of life, using our analysis to add clarity rather than to simplify to a few core underlying concepts. But in population ethics and many other cases, we do feel free to use differential ethics, replacing vague overarching concepts with clear simplified versions that clearly throw away a lot of the initial concept.

Maybe we do it too much. To pick an example I disagree with (always a good habit), maybe there is such a thing as "society", for instance, not simply the total of individuals and their interactions. You can already use pretty crude consequentialist arguments with "societies" as agents subject to predictable actions and reactions (social science does it all the time), but what if we tried to build a rigorous definition of society as something morally valuable, rather than focusing on individuals?

Anyway, we should be aware when, in arguments, we are keeping the broad goal and making the small steps and definitions conform to it, and when we are focusing on the small steps and definitions and following them wherever they lead.

Population ethics and utility indifference

3 Stuart_Armstrong 24 November 2014 03:18PM

It occurs to me that the various utility indifference approaches might be usable in population ethics.

One challenge for non-total utilitarians is how to deal with new beings. Some theories - average utilitarianism, for instance, or some other systems that use overall population utility - have no problem dealing with this. But many non-total utilitarians would like to see creating new beings as a strictly neutral act.

One way you could do this is by starting with a total utilitarian framework, but subtracting a certain amount of utility every time a new being B is brought into the world. In the spirit of utility indifference, we could subtract exactly the utility we expect B to enjoy during their life.

This means that we should be indifferent as to whether B is brought into the world or not, but, once B is there, we should aim to increase B's utility. There are two problems with this. The first is that, strictly interpreted, we would also be indifferent to creating people with negative utility. This can be addressed by only doing the "utility correction" if B's expected utility is positive, thus preventing us from creating beings only to have them suffer.
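A toy sketch of this correction, assuming a simple scalar bookkeeping of total utility (the function name and numbers are mine, purely illustrative):

```python
def value_after_creation(world_utility, expected_b_utility):
    # Total-utilitarian bookkeeping plus the indifference correction:
    # B's expected utility is added, then subtracted again -- but only
    # when it is positive, so creating happy people is exactly neutral
    # while creating expected sufferers still counts against us.
    total = world_utility + expected_b_utility
    if expected_b_utility > 0:
        total -= expected_b_utility
    return total

print(value_after_creation(100, 30))    # 100: creating a happy B is neutral
print(value_after_creation(100, -30))   # 70: creating a sufferer is bad
```

Once B exists, of course, the correction has already been paid, so raising B's actual utility counts in full - which is exactly the indifference behaviour described above.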

The second problem is more serious. What about all the actions that we could do, ahead of time, in order to harm or benefit the new being? For instance, it would seem perverse to argue that buying a rattle for a child after they are born (or conceived) is an act of positive utility, whereas buying it before they were born (or conceived) would be a neutral act, since the increase in expected utility for the child is cancelled out by the above process. Not only is it perverse, but it isn't timeless, and isn't stable under self-modification.


If interventions changing population size are cheap, they may be the best option independent of your population ethics

6 ericyu3 13 August 2014 03:03AM

In this post I'll explain why you might want to assist altruistic interventions that change the size of the world population regardless of how valuable you think additional lives are. The argument relies on two population-changing interventions that combine to produce the effect of a non-population-changing intervention, but at a lower cost.

Suppose you can donate to the following 3 interventions:

  • "Growth": increase one future person's income from $500/yr to $5,000/yr for $10,000
  • "Plus": cause one more person to be born in a middle-income country (income ~$5,000/yr) for $6,000
  • "Minus": cause one less person to be born in a poor country (income ~$500/yr) for $1,000

Assume that the interventions are independent, and that donating multiples of the cost produces multiples of the effect without diminishing returns.

The cost estimates are completely made up; the point of this post is to explain what happens if the total cost of Plus and Minus is less than the cost of Growth. The cost of Plus is probably the least well-known, since it's the least popular of the 3. Also, in the real world, you would probably want to spread the impact of $10,000 across at least several people instead of increasing one person's income by 10x, but I think the post makes more sense this way. If you know more reasonable estimates for the costs, please post them!

If you donate to Plus and Minus, the total effect is the same as the effect of Growth in many ways - in the future, there is one more person with income $5,000, one less person with income $500, and the size of the world population remains the same. In my last post, I asked about whether consequentialists actually view the two outcomes as equivalent, and people seemed to think yes, so it's reasonable to say that Plus+Minus is just as beneficial as Growth. But Plus+Minus only costs $7,000 while Growth costs $10,000, so regardless of your population ethics, you should prefer donating to Plus+Minus.

But unless your population ethics are "fine-tuned" to make Plus and Minus equally cost-effective, one of them will be clearly better (more cost-effective) than the other. If you think Minus is better than Plus, then Minus is better than Plus+Minus, which is better than Growth, so you should donate exclusively to Minus. The same argument applies if you think Plus is better than Minus. If you donate to only one of Plus and Minus, you will change the size of the world population. So this seems to show that if population-changing interventions are cheap, you should act to change population size regardless of what you think about population ethics. Even if you are very uncertain what the value of a new life is, you can still use your best guess to decide between Plus and Minus as long as you are risk-neutral about how much good you do. 

Numerical example: suppose that Growth yields 100 "points" of benefit, where "point" is an arbitrary unit. Then regardless of population ethics, Plus+Minus yields 100 points as well. How these points are distributed between Plus and Minus depends on your population ethics, however. If you are a total utilitarian, you might say that Minus is worth -20 points and Plus is worth 120 points, and if you're a negative utilitarian, you might say that Minus is worth 150 points and Plus -50 points. If you're an average utilitarian, you might say that Minus is worth 70 and Plus is worth 30. But these all sum up to 100, and they would all choose Plus or Minus over Growth: Plus for the total utilitarian and Minus for the others.
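The made-up costs and point splits above can be combined in a few lines to confirm that each hypothetical ethic prefers Plus or Minus over Growth on a points-per-dollar basis:

```python
costs = {"Growth": 10_000, "Plus": 6_000, "Minus": 1_000}

# Illustrative point values from the example: Growth is fixed at 100
# points, and Plus + Minus always sums to the same 100, however it splits.
points = {
    "total utilitarian":    {"Growth": 100, "Plus": 120, "Minus": -20},
    "negative utilitarian": {"Growth": 100, "Plus": -50, "Minus": 150},
    "average utilitarian":  {"Growth": 100, "Plus": 30,  "Minus": 70},
}

best = {}
for ethic, pts in points.items():
    per_dollar = {name: pts[name] / costs[name] for name in costs}
    best[ethic] = max(per_dollar, key=per_dollar.get)
    print(f"{ethic}: donate to {best[ethic]}")
```

Growth never wins: the total utilitarian picks Plus (0.02 points/$ vs 0.01 for Growth), and the other two pick Minus.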

What might be wrong with this reasoning? I can think of a few things:
  1. Plus+Minus is more costly than Growth in reality (quite likely)
  2. Growth and Plus+Minus are actually not equivalent, since Growth actually helps a particular person (again, see my last post)

I'm really curious about what the costs of economic-growth and population interventions are. I'd guess that population interventions would be competitive with unconditional cash transfer programs like GiveDirectly, but I don't know that much about their effectiveness, and I don't know whether there are economic interventions that are more cost-effective than cash transfers. Here are some population interventions that can be done or funded by individuals:
  • Education about contraception
  • Having children yourself (cost varies from person to person)
  • Paying others to have children
  • Subsidizing contraception
  • Subsidizing surrogacy (there are replaceability issues here, but I couldn't find any estimates of supply/demand elasticity)
  • Being a surrogate yourself (doesn't cost you any money, but can be unpleasant, so the cost varies from person to person)

Have people made estimates of how cost-effective these are? The Plus+Minus vs. Growth hypothetical doesn't work if Growth is actually cheaper, so I want to know if I'm thinking too much about something irrelevant!

 

Population ethics in practice

3 ericyu3 08 August 2014 10:40PM

There are many different ideas about how utilitarians should value the number of future people. Unfortunately, it is difficult to take all of them into account when deciding among public policies, charities, etc. Arguments about principles like total utilitarianism, average utilitarianism, critical-level utilitarianism, etc. often come from a "global" perspective:

  • Does the principle imply that we should have a very large population with a very low quality of life? (Repugnant Conclusion)
  • If average utility is negative, does the principle imply that it's good to add additional people with slightly less negative utility? (Sadistic Conclusion)
  • Is adding additional people valuable when the population is small, but less valuable when it is large? If so, how large does a population have to be to be considered "large"? ("diminishing marginal value" of people)

What these thought experiments have in common is that they aren't very good for making decisions. For instance, simply adding the condition "avoid the Repugnant Conclusion" to a cost-benefit analysis isn't very useful, since it doesn't give any concrete estimate of the value of additional lives. In this post, I'll give a heuristic that lets total, average, and critical-level utilitarianism be analyzed the same way for most decisions. For simplicity, I'll assume that everyone is identical; if people aren't identical, you need to explicitly normalize utility functions before comparing them, but as long as you do that, the heuristic is still valid.

Suppose you have N people with utilities u1, ..., uN, and average utility uavg. Total utilitarianism (TU) would maximize the objective function wTU(N, uavg) = N*uavg. Average utilitarianism (AU) would maximize wAU(N, uavg) = uavg, and critical-level utilitarianism (CLU) would maximize wCLU(N, uavg) = N*(uavg - u0) for some "critical utility" u0. The interpretation is that only lives with utility above u0 are worth living.

It is easy to use CLU in a cost-benefit analysis: creating an additional person with utility u is equally valuable as raising the utility of an existing person from u0 to u. For example, if utility is estimated using income, and $1000/year is the income level corresponding to u0, then creating a person with an income of $2000/year is about as good as doubling the income of someone making $1000/year. TU is the special case of CLU with u0 = 0, but if there is disagreement about what "zero utility" means, you can estimate the corresponding income level to estimate the magnitude of the disagreement - disagreement between $400 and $500/year is a lot less serious than between $400 and $40000/year.
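For instance, taking log-income as the utility proxy (my assumption for illustration, not something the derivation requires), the equivalence in that example is exact:

```python
import math

u = math.log          # assumed utility proxy: log of yearly income
u0 = u(1000)          # critical level set at the $1000/yr income level

# CLU value of creating a new person earning $2000/yr:
create_value = u(2000) - u0
# Value of doubling an existing $1000/yr person's income to $2000/yr:
raise_value = u(2000) - u(1000)
# Under this proxy the two values are identical (both equal log 2)
```

With a different concave proxy the two values would only be approximately equal, but the qualitative point stands: u0's income level anchors the whole comparison.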

In general, AU is not a special case of CLU: CLU's objective function is affected by pure changes in population, while AU's is not (∂wCLU/∂N != 0 unless uavg = u0). However, for small changes in N and uavg, AU is equivalent to CLU with u0 = uavg. So although AU and CLU are very different "globally", they are equivalent "locally" with the right choice of u0.

How small is a small change? Define the relative value of two choices as r = (change in w under Choice 1)/(change in w under Choice 2). If r > 1, Choice 1 is better, and if r < 1, Choice 2 is better. Then the discrepancy between AU and CLU is indicated by rAU / rCLU: if AU favors Choice 1 more than CLU does, this ratio will be larger. As it turns out, rAU / rCLU ≈ 1 - (ΔN / N) to first order in ΔN. If the population is 1% higher under Choice 1 than under Choice 2, the discrepancy is only 1%, and as long as r is not extremely close to 1, AU and CLU will agree on which choice is better.
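A quick numerical check of this local equivalence, with made-up numbers (a population of 1000, with Choice 1 also adding 1% to it):

```python
N, u_avg = 1000.0, 5.0
u0 = u_avg                          # the "locally equivalent" critical level

def w_au(n, u):  return u           # average utilitarianism
def w_clu(n, u): return n * (u - u0)  # critical-level utilitarianism

# Hypothetical small changes under the two choices:
dN1, du1 = 10.0, 0.10               # Choice 1 adds 10 people (1% of N)
dN2, du2 = 0.0, 0.12                # Choice 2 leaves the population alone

r_au = (w_au(N + dN1, u_avg + du1) - w_au(N, u_avg)) / \
       (w_au(N + dN2, u_avg + du2) - w_au(N, u_avg))
r_clu = (w_clu(N + dN1, u_avg + du1) - w_clu(N, u_avg)) / \
        (w_clu(N + dN2, u_avg + du2) - w_clu(N, u_avg))

print(r_au / r_clu)                 # ~0.9901
print(1 - (dN1 - dN2) / N)          # first-order prediction: 0.99
```

The exact ratio here is (N + dN2)/(N + dN1) = 1000/1010, which the first-order formula matches to within 0.01%.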

But 1% of the world population is 70 million people, and virtually no policy will have that large of an effect. So when applying population ethics to real decisions, I think it's best to act as if CLU is true, and frame disagreements as disagreements about the right value of u0, and which income level corresponds to it. That way, it's much easier to see the practical implications of your viewpoint, and people who disagree in principle may find that they agree in practice about what u0 should be, and therefore about how to choose the best policy/charity/cause/etc. The main exception is existential risk prevention, where success will change the population by a very large amount.

PDF with detailed derivations (uses slightly different notation): https://drive.google.com/file/d/0B-zh2f7_qtukMFhNYkR4alRsSFk/edit?usp=sharing

Embracing the "sadistic" conclusion

10 Stuart_Armstrong 13 February 2014 10:30AM

This is not the post I was planning to write. Originally, it was going to be a heroic post where I showed my devotion to philosophical principles by reluctantly but fearlessly biting the bullet on the sadistic conclusion. Except... it turns out to be nothing like that, because the sadistic conclusion is practically void of content and embracing it is trivial.

Sadism versus repugnance

The sadistic conclusion can be found in Gustaf Arrhenius's papers such as "An Impossibility Theorem for Welfarist Axiologies." In it he demonstrated that - modulo a few technical assumptions - any system of population ethics has to embrace either the Repugnant Conclusion, the Anti-Egalitarian Conclusion or the Sadistic conclusion. Astute readers of my blog posts may have noticed I'm not the repugnant conclusion's greatest fan, evah! The anti-egalitarian conclusion claims that you can make things better by keeping total happiness/welfare/preference satisfaction constant but redistributing it in a more unequal way. Few systems of ethics embrace this in theory (though many social systems seem to embrace it in practice).

That leaves the sadistic conclusion. A population ethics that accepts this is one where it is sometimes better to create someone whose life is not worth living (call them a "victim"), rather than a group of people whose lives are worth living. It seems well named - can you not feel the top-hatted villain twirl his moustache as he gleefully creates lives condemned to pain and misery, laughing maniacally as he prevents the intrepid heroes from changing the settings on his incubator machine to "worth living"? How could that sadist be in the right, according to any decent system of ethics?

Remove the connotations, then the argument

But the argument is flawed, for two main reasons: one that strikes at the connotations of "sadistic", the other at the heart of the comparison itself.

The reason the sadistic aspect is a misnomer is that creating a victim is not actually a positive development. Almost all ethical systems would advocate improving the victim's life, if at all possible (or ending it, if appropriate). Indeed, some ethical systems which have the "sadistic conclusion" (such as prioritarianism or egalitarianism) would think it more important to improve the victim's life than some ethical systems that don't have the conclusion (such as total utilitarianism). Only if such help is somehow impossible do you get the conclusion. So it's not a gleeful sadist inflicting pain, but a reluctant acceptance that "if the universe conspires to prevent us from helping this victim, then it still may be worth creating them as the least bad option" (see for instance this comment).

"The least bad option." For the sadistic conclusion is based on a trick, contrasting two bad options and making them seem related (see this comment). Consider for example whether it is good to create a large permanent underclass of people with much more limited and miserable lives than all others - but whose lives are nevertheless just above some complicated line of "worth living". You may or may not agree that this is bad, but many people and many systems of population ethics do feel it's a negative outcome.

Then, given that this underclass is a bad outcome (and given a few assumptions as to how outcomes are ranked), we can find other bad outcomes that are not quite as bad as this one. Such as... a single victim, a tiny bit below the line of "worth living". So the sadistic conclusion is not saying anything about the happiness level of a single created population. It's simply saying that (A) creating underclasses with slightly worthwhile lives can sometimes be bad, while (B) creating a victim can sometimes be less bad. But the victim isn't playing a useful role here: they're just an example of a bad outcome better than (A), only linked to (A) through superficial similarity and rhetoric.

For most systems of population ethics the sadistic conclusion can thus be reduced to "creating underclasses with slightly worthwhile lives can sometimes be bad." But this is the very point that population ethicists are disputing each other about! Wrapping that central point into a misleading "sadistic conclusion" is... well, the term "misleading" gave it away.

Weak repugnant conclusion need not be so repugnant given fixed resources

6 Stuart_Armstrong 17 November 2013 03:44PM

I want to thank Irgy for this idea.

As people generally know, total utilitarianism leads to the repugnant conclusion - the idea that no matter how great a universe X would be, filled with trillions of ultimately happy people having ultimately meaningful lives filled with adventure and joy, there is another universe Y which is better - and that is filled with nothing but dull, boring people whose quasi-empty and repetitive lives are just one tiny iota above being too miserable to endure. But since the second universe is much bigger than the first, it comes out on top. Not only in that if we had Y it would be immoral to move to X (which is perfectly respectable, as doing so might involve killing a lot of people, or at least allowing a lot of people to die). But in that, if we planned for our future world now, we would desperately want to bring Y into existence rather than X - and could incur great costs or take great risks to do so. And if we were in world X, we must at all costs move to Y, making all current people much more miserable as we do so.

The repugnant conclusion is the main reason I reject total utilitarianism (the other one being that total utilitarianism sees no problem with painlessly killing someone by surprise, as long as you also gave birth to someone else of equal happiness). But the repugnant conclusion can emerge from many other population ethics as well. If adding more people of slightly less happiness than the average is always a bonus ("mere addition"), and if equalising happiness is never a penalty, then you get the repugnant conclusion (caveat: there are some subtleties to do with infinite series).

But repugnant conclusions reached in that way may not be so repugnant, in practice. Let S be a system of population ethics that accepts the repugnant conclusion, due to the argument above. S may indeed conclude that the big world Y is better than the super-happy world X. But S need not conclude that Y is the best world we can build, given any fixed and finite amount of resources. Total utilitarianism is indifferent to having a world with half the population and twice the happiness. But S need not be indifferent to that - it may much prefer the twice-happiness world. Instead of the world Y, it may prefer to reallocate resources to achieve the world X', which has the same average happiness as X but is slightly larger.

Of course, since it accepts the repugnant conclusion, there will be a barely-worth-living world Y' which it prefers to X'. But then it might prefer reallocating the resources of Y' to the happy world X'', and so on.

This is not an argument for efficiency of resource allocation: even if it's four times as hard to get people twice as happy, S can still want to do so. You can accept the repugnant conclusion and still want to reallocate any fixed amount of resources towards low population and extreme happiness.

It's always best to have some examples, so here is one: an S whose value is the product of average agent happiness times the logarithm of population size.
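A few lines are enough to see how this S behaves (numbers are illustrative): without resource constraints it accepts the repugnant conclusion, since log of population is unbounded, but with a fixed stock of happiness to spread around it prefers a tiny, very happy population:

```python
import math

def w(avg_happiness, n):
    # the example system S: average happiness times log of population size
    return avg_happiness * math.log(n)

# Unconstrained: a vast, barely-happy world Y beats a blissful
# billion-person world X -- S accepts the repugnant conclusion.
x = w(100.0, 10**9)
y = w(0.01, 10**1_000_000)
assert y > x

# But with a fixed budget of 100 happiness units to distribute
# (avg happiness = R/n), S prefers a very small population:
R = 100.0
scores = {n: w(R / n, n) for n in (2, 3, 10, 100, 1000)}
print(max(scores, key=scores.get))   # 3 -- near e, far from the huge options
```

With a fixed budget, w = R*log(n)/n, which is maximized near n = e; so this S, despite technically accepting the repugnant conclusion, would never actually build the big dull world Y out of any given pile of resources.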