Distinction between "creating/preventing future lives" and "improving future lives that are already expected to exist"?

5 ericyu3 12 August 2014 06:29AM

I'm writing something (mostly for myself right now) about how if you're somewhat of a utilitarian, a very wide range of population ethics principles (total utilitarianism, average utilitarianism, and critical-level utilitarianism with any critical level) will lead to the population size of some countries being strongly non-neutral, in the sense that changing the number of people in those countries is worth a surprisingly large reduction in average income (>2% income reduction for a 1% population increase/decrease).

Part of what I wrote used an assumption that is shared by all the utilitarian population ethics principles I know of: if you prevent the birth of someone with utility X and cause the birth of someone else with utility Y (with Y > X), that's just as good as causing a not-yet-born person to have utility Y instead of X. In fact, population ethics is not needed to make this comparison, since neither outcome changes the population size. But it's not too far-fetched to think that the two situations are different: in the first one, the Y-utility person is a different person from the X-utility person, while in the second one they could be argued to be the same person. Good arguments have been made that the second outcome actually produces a different person too, because very small things, like which egg/sperm you came from, can change your identity (Parfit's Non-Identity Problem). So I think my assumption is reasonable, but I'm concerned that I don't know what the best arguments against it are.

What are the most well-known utilitarian or non-utilitarian consequentialist theories that make a distinction between "different future people" and "the same future person"? Is there a consistent way to make this distinction "fuzzy", so that an event like being conceived by a different sperm is less "identity-changing" than being born on the other side of the world to completely different parents?

Comment author: AlexMennen 09 August 2014 08:01:18AM 2 points [-]

Critical level utilitarianism is isomorphic to total utilitarianism. Utilities are invariant under adding constants but sums of utilities are not, so to use total utilitarianism, you need to pick what level of utility to call 0, which is effectively the same as picking a level of utility to call u0 in critical level utilitarianism.

If you have some canonical way of picking a 0 point for the utility functions which is not the critical level, then it might be more convenient to use CLU so you don't have to change the 0 point, but the difference is purely notational. Your utility=income suggestion doesn't work as such a canonical method in humans because utility isn't proportional to income.
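The isomorphism is easy to check directly; here is a minimal Python sketch (the functions and numbers are illustrative, not from the comment):

```python
def w_TU(utilities):
    # Total utilitarianism: sum of utilities, with 0 as the implicit baseline
    return sum(utilities)

def w_CLU(utilities, u0):
    # Critical-level utilitarianism with critical level u0
    return sum(u - u0 for u in utilities)

# Relabeling which utility level counts as 0 (subtracting u0 from everyone)
# turns CLU into plain TU -- the difference is purely notational:
us = [3.0, 5.0, -1.0]
u0 = 2.0
assert w_CLU(us, u0) == w_TU([u - u0 for u in us])
```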

If r > 1, Choice 1 is better, and if r < 1, Choice 2 is better.

Nitpick: only if the change in w under Choice 2 is positive.

Comment author: ericyu3 09 August 2014 11:45:46PM 0 points [-]

Your utility=income suggestion doesn't work as such a canonical method in humans because utility isn't proportional to income.

I just meant that picking a value of u0 is equivalent to picking a value of income ("y0") such that u(y0)=u0.

Population ethics in practice

3 ericyu3 08 August 2014 10:40PM

There are many different ideas about how utilitarians should value the number of future people. Unfortunately, it is difficult to take all of them into account when deciding among public policies, charities, etc. Arguments about principles like total utilitarianism, average utilitarianism, critical-level utilitarianism, etc. often come from a "global" perspective:

  • Does the principle imply that we should have a very large population with a very low quality of life? (Repugnant Conclusion)
  • If average utility is negative, does the principle imply that it's good to add additional people with slightly less negative utility? (Sadistic Conclusion)
  • Is adding additional people valuable when the population is small, but less valuable when it is large? If so, how large does a population have to be to be considered "large"? ("diminishing marginal value" of people)
What these thought experiments have in common is that they aren't very good for making decisions. For instance, simply adding the condition "avoid the Repugnant Conclusion" to a cost-benefit analysis isn't very useful, since it doesn't give any concrete estimate of the value of additional lives. In this post, I'll give a heuristic that lets total, average, and critical-level utilitarianism be analyzed the same way for most decisions. For simplicity, I'll assume that everyone is identical; if people aren't identical, you need to explicitly normalize utility functions before comparing them, but as long as you do that, the heuristic is still valid.

Suppose you have N people with utilities u1, ..., uN, and average utility uavg. Total utilitarianism (TU) would maximize the objective function wTU(N, uavg) = N*uavg. Average utilitarianism (AU) would maximize wAU(N, uavg) = uavg, and critical-level utilitarianism (CLU) would maximize wCLU(N, uavg) = N*(uavg - u0) for some "critical utility" u0. The interpretation is that only lives with utility above u0 are worth living.
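The three objective functions can be sketched in a few lines (the numbers in the checks are illustrative, and everyone is assumed identical as above):

```python
def w_TU(N, u_avg):
    # Total utilitarianism
    return N * u_avg

def w_AU(N, u_avg):
    # Average utilitarianism
    return u_avg

def w_CLU(N, u_avg, u0):
    # Critical-level utilitarianism: only lives above u0 add value
    return N * (u_avg - u0)

# TU is CLU with the critical level set to zero:
assert w_CLU(100, 5.0, 0.0) == w_TU(100, 5.0)
# Adding people at exactly the critical level leaves w_CLU unchanged:
assert w_CLU(101, 5.0, 5.0) == w_CLU(100, 5.0, 5.0) == 0.0
```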

It is easy to use CLU in a cost-benefit analysis: creating an additional person with utility u is exactly as valuable as raising the utility of an existing person from u0 to u. For example, if utility is estimated using income, and $1000/year is the income level corresponding to u0, then creating a person with an income of $2000/year is about as good as doubling the income of someone making $1000/year. TU is the special case of CLU with u0 = 0, but if there is disagreement about what "zero utility" means, you can estimate the corresponding income level to estimate the magnitude of the disagreement - a disagreement between $400 and $500/year is a lot less serious than one between $400 and $40,000/year.
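With a concrete utility-of-income function - log-income utility normalized so that u0 = 0 at $1000/year, which is my assumption; the post only says utility is estimated from income - the two actions come out exactly equal:

```python
import math

y0 = 1000.0  # income level corresponding to the critical utility u0

def u(income):
    # Assumed log-income utility, normalized so u(y0) = u0 = 0
    return math.log(income / y0)

# CLU value of creating a new person with a $2000/year income:
value_new_life = u(2000.0)
# Value of doubling an existing person's income from $1000 to $2000:
value_raise = u(2000.0) - u(1000.0)
assert abs(value_new_life - value_raise) < 1e-12
```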

In general, AU is not a special case of CLU: CLU's objective function is affected by pure changes in population, while AU's is not (∂wCLU/∂N ≠ 0 unless uavg = u0). However, for small changes in N and uavg, AU is equivalent to CLU with u0 = uavg. So although AU and CLU are very different "globally", they are equivalent "locally" with the right choice of u0.

How small is a small change? Define the relative value of two choices as r = (change in w under Choice 1)/(change in w under Choice 2). If r > 1, Choice 1 is better, and if r < 1, Choice 2 is better. Then the discrepancy between AU and CLU is indicated by rAU / rCLU: if AU favors Choice 1 more than CLU does, this ratio will be larger. As it turns out, rAU / rCLU ≈ 1 - (ΔN / N) to first order in ΔN, where ΔN is the difference in population between the two choices. If the population is 1% higher under Choice 1 than under Choice 2, the discrepancy is only 1%, and as long as r is not extremely close to 1, AU and CLU will agree on which choice is better.
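A quick numerical check of the first-order approximation, with made-up numbers (N, the utility changes, and the 1% population gap are all illustrative):

```python
N, u_avg = 1000.0, 2.0
u0 = u_avg  # tune CLU's critical level so it agrees with AU locally

def w_AU(n, u):
    return u

def w_CLU(n, u):
    return n * (u - u0)

base = (N, u_avg)
choice1 = (N * 1.01, u_avg + 0.05)  # 1% more people, bigger utility gain
choice2 = (N, u_avg + 0.04)         # same population, smaller utility gain

r_AU = (w_AU(*choice1) - w_AU(*base)) / (w_AU(*choice2) - w_AU(*base))
r_CLU = (w_CLU(*choice1) - w_CLU(*base)) / (w_CLU(*choice2) - w_CLU(*base))

dN = choice1[0] - choice2[0]
# The discrepancy between the two principles matches 1 - dN/N to first order:
assert abs(r_AU / r_CLU - (1 - dN / N)) < 1e-3
```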

But 1% of the world population is 70 million people, and virtually no policy will have that large of an effect. So when applying population ethics to real decisions, I think it's best to act as if CLU is true, and frame disagreements as disagreements about the right value of u0, and which income level corresponds to it. That way, it's much easier to see the practical implications of your viewpoint, and people who disagree in principle may find that they agree in practice about what u0 should be, and therefore about how to choose the best policy/charity/cause/etc. The main exception is existential risk prevention, where success will change the population by a very large amount.

PDF with detailed derivations (uses slightly different notation): https://drive.google.com/file/d/0B-zh2f7_qtukMFhNYkR4alRsSFk/edit?usp=sharing

Comment author: Manfred 08 August 2014 05:51:11AM *  2 points [-]

Certainly in an idealized world the reproductive capacity of a tribe of humans is only limited by the number of women. C.f. Randy the guinea pig, father of 400.

But on the other hand, neither modern humans nor ancestral humans lived in that kind of idealized world. In the modern world we have limited monogamy and reduced pressure to have kids. Somewhere around 18% of women in the U.S. don't end up having kids - I'd expect that a woman surviving would lead to more kids, but not actually 2 more, and similarly a missing man wouldn't just be replaced by the nearest available sperm-producer. I dunno how to put a number to it.

In an ancestral environment close to equilibrium (what you imply by saying that each person has 1 kid on average), the situation is even more egalitarian. That equilibrium is maintained by something other than birth rate. If the issue is limited resources, and if an additional person can gather additional resources, then a man and a woman will both be able to increase the long-term number of children by about the same amount. If the population is growing exponentially but is occasionally devastated by war, a man will lead to a larger population if the war is in five years, but a woman will lead to a larger population if the war is in thirty years. If the check is disease or famine, there might be very little dependence on gender.

Comment author: ericyu3 08 August 2014 07:26:07AM 0 points [-]

I'd expect that a woman surviving would lead to more kids, but not actually 2 more, and similarly a missing man wouldn't just be replaced by the nearest available sperm-producer. I dunno how to put a number to it.

One way to start estimating it would be to correlate local sex ratios with local birth rates and try to control for as many things as possible. Unfortunately, this is probably very hard to do...

In an ancestral environment close to equilibrium (what you imply by saying that each person has 1 kid on average), the situation is even more egalitarian.

I'm actually most interested in the answer for modern poor countries, which are neither stable in population nor Malthusian. Basically, I'm wondering how interventions that save lives of one gender (but not the other) today will affect the population size 20 to 30 years in the future. Non-replacement fertility doesn't qualitatively change things: the question just becomes whether a life saved increases the population by more or less than "next generation's size / current generation's size". Replacement fertility is just the special case where the ratio is 1; I used that number in my question only for simplicity.

Economics/demographics question: If a child unexpectedly dies, how much does this shrink the next generation?

1 ericyu3 07 August 2014 06:53PM

The answer seems obvious - the next generation will have one fewer person (in expectation) - but it's not that simple, and it's been bugging me for about a day now.

Suppose you are an average 15-year-old, and your parents are too old to have any more children (they won't have more children to "replace" you). The ~2 children you would have had obviously won't be born. Naïvely that means the next generation will be smaller by 2, but this disagrees with the obvious answer (smaller by 1).

Where this reasoning goes wrong is in assuming that everyone else will still have the same number of children. The sex ratio will shift so that the surviving members of your sex have n more children, and the size of the next generation will decrease by 2 minus n. If n is 1, we get the intuitive answer that there'll be 1 less person.

But there's no reason why n has to be 1 for both sexes! If both a boy and a girl die, the sex ratio is unaffected and the next generation will be 1 smaller, so n has to average to 1, but n may or may not be the same between sexes. Have there been any studies estimating the value of "n" for each sex?
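The accounting constraint can be written out explicitly (the asymmetric per-sex values of n below are hypothetical, just to show that only the average is pinned down):

```python
def next_gen_decrease(n):
    # One child's death removes ~2 expected children, but surviving
    # members of the same sex have n more children in total.
    return 2 - n

# A boy and a girl dying together leave the sex ratio unchanged, so the
# next generation must shrink by exactly 2 (one per death) -- forcing n
# to average to 1 across the sexes, even if the per-sex values differ:
n_male, n_female = 1.4, 0.6  # hypothetical asymmetric values
assert abs(next_gen_decrease(n_male) + next_gen_decrease(n_female) - 2) < 1e-12
```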

(I posted this because it's relevant to population ethics, but I'm not entirely sure whether it belongs here, so I also posted it to Reddit. Should questions like this go in Discussion or in an open thread?)

Comment author: jimmy 21 June 2014 07:37:27PM *  2 points [-]

The example that comes to mind to show how the sex thing isn't a problem is that of a robot car with a goal to drive as many miles as possible. Every day it will burn through all its fuel and fuel up. Right after it fuels up, it will have no desire for further fuel - more fuel simply does not help it go further at this point, and forcing it can be detrimental. Clearly not contradictory.

You could have a similar situation with a couple wanting sex iff they haven't had sex in a day, or wanting an orange if you've just eaten an apple but wanting an apple if you've just eaten an orange.

To strictly show that something violates vNM axioms, you'd have to show that this behavior (in context) can't be fulfilling any preferences better than other options that the agent is aware of - or at least be able to argue that the revealed utility function is contrived and unlikely to hold up in other situations (not what the agent "really wants").

Constantly wanting what one doesn't have can have this defect. If I keep paying you to switch my apple for your orange and back (without actually eating either), then you have a decent case, if you're pretty confident I'm not actually fulfilling my desire to troll you ;)

The "wants a relationship when single" and "wants to be single when not" thing does look like such a violation to me. If you let him flip-flop as often as he desires, he's not going to end up happily endorsing his past actions. If you offered him a pill that would prevent him from flip-flopping, he very well may take it. So there's a contradiction there.

To bring human-specific psychology into it, it's not that his inherent desires are contradictory, but that he wants something like "freedom", which he doesn't know how to get in a relationship, and something like "intimacy", which he doesn't know how to get while single. It's not that he wants intimacy when single and freedom when not; it's that he wants both always, but the unfulfilled need is the salient one.

Picture me standing on your left foot. "Oww! Get off my left foot!". Then I switch to the right "Ahh! Get off my right foot!". If you're not very quick and/or the pain is overwhelming, it might take you a few iterations to realize the situation you're in and to put the pain aside while you think of a way to get me off both feet (intimacy when single/freedom in a relationship). Or if you can't have that, it's another challenge to figure out what you want to do about it.

I wouldn't model you as "just VNM-irrational", even if your external behaviors are ineffective for everything you might want. I'd model you as "not knowing how to be VNM-rational in presence of strong pain(s)", and would expect you to start behaving more effectively when shown how.

(and that is what I find, although showing someone how to be more rational is not trivial and "here's a proof of the inconsistency of your actions now pick a side and stop feeling the desire for the other side" is almost never sufficient. You have to be able to model the specific way that they're stuck and meet them there)

tl;dr: We're not VNM-rational because we don't know how to be, not because it's not something we're trying to do.

Comment author: ericyu3 30 July 2014 12:15:16AM 2 points [-]

How do you distinguish his preferences being irrationally inconsistent (he is worse off from entering and leaving relationships repeatedly) from him truly wanting to be in relationships periodically (like how it's rational to alternate between sleeping and waking rather than always doing one or the other)?

If there's a pill that can make him stop switching (but doesn't change his preferences), one of two things will happen: either he'll never be in a relationship (prevented from entering), or he'll stay in his current relationship forever (prevented from leaving). I wouldn't be surprised if he dislikes both of the outcomes and decides not to take the pill.

The pill could instead change his preferences so that he no longer wants to flip-flop, but this argument seems too general - why not just give him a pill that makes him like everything much more than he does now? If my behavior is irrational, I should be able to make myself better off simply by changing my behavior, without having to modify my preferences.

Comment author: [deleted] 18 June 2014 03:15:24AM 1 point [-]

Can you give me an example of this in reality? The math works, but I notice I am still confused, in that values should not just be a variable in the utility function - they should in fact change the utility function itself.

If they're relegated to a variable, that seems to go against the original stated goal of wanting moral progress, in which case the utility function was constructed wrong in the first place.

Comment author: ericyu3 18 June 2014 06:58:22AM 0 points [-]

Define the "partial utility function" as how utility changes with x holding c constant (i.e. U(x) at a particular value of c). Changes in values change this partial utility function, but they never change the full utility function U(c,x). A real-world example: if you prefer to vote for the candidate that gets the most votes, then your vote will depend strongly on the other voters' values, but this preference can still be represented by a single, unchanging utility function.

I don't understand your second paragraph - why would having values as a variable be bad? It's certainly possible to change the utility function, but AlexMennen's point was that future values could still be taken into account even with a static utility function. If the utility function is constant and also depends on current values, then it needs to take values into account as an argument (i.e. as a variable).

Comment author: [deleted] 18 June 2014 01:59:06AM 2 points [-]

How are you changing the values you optimize for without changing your utility function? This now seems even more handwavey to me.

Comment author: ericyu3 18 June 2014 02:31:09AM *  0 points [-]

Consider a very simple model where the world has just two variables, represented by real numbers: cultural values (c) and the other variable (x). Our utility function is U(c, x)=c*x, which is clearly constant over time. However, our preferred value of x will strongly depend on cultural values: if c is negative, we want to minimize x, while if c is positive, we want to maximize x.

This model is so simple that it behaves quite strangely (e.g. it says you want to pick cultural values that view the current state of the world favorably), but it shows that by adding complexity to your utility function, you can make it depend on many things without actually changing over time.
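The same toy model in code (everything here is illustrative):

```python
def U(c, x):
    # One fixed utility function; the preferred x still depends on
    # the cultural-values variable c.
    return c * x

# Negative cultural values favor minimizing x...
assert U(-1.0, 0.0) > U(-1.0, 5.0)
# ...while positive ones favor maximizing it, with U itself unchanged:
assert U(+1.0, 5.0) > U(+1.0, 0.0)
```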

In response to Links!
Comment author: ericyu3 08 June 2014 08:13:44PM 0 points [-]

There's a new subreddit for GIFs of cute baby elephants: http://www.reddit.com/r/babyelephantgifs

Comment author: gjm 09 April 2014 01:42:30PM 0 points [-]
  1. Oh, I see. You're taking wage to be determined by production, which in turn is determined by population according to the Cobb-Douglas formula, and then asking "what's the optimal population?". Got it.

  2. Yup, better now.

So, anyway, now that I understand your argument better, there's something that looks both important and wrong, but maybe I'm misunderstanding. You're assuming that A -- the constant factor in the Cobb-Douglas formula -- is the same for all countries. But surely it isn't, and surely this accounts for a large amount of the variation in productivity and wealth between countries. It seems like this would lead to big differences in w between countries even if they're all close to optimal population.

Comment author: ericyu3 10 April 2014 08:47:44AM *  0 points [-]

The A factor drops out of the final expression for the optimal wage. If the form of the production function is the same between two countries, their optimal wages will be the same as well. However, their optimal populations will obviously be different. For example, if country 1 has 10 times higher A than country 2, but their values of alpha are the same, then their optimum wages are the same, but country 1's optimum population is higher by a factor of 10^(1/(1-alpha)).

Here, A lumps together productivity and the amount of land a country has (so that a large poor country may have higher A than a small rich one). Obviously, increasing A will increase welfare, but it won't change the optimal wage (if the country is above that level already, increasing A will bring wages further away from the optimum) - the best thing to do (according to this model) is to increase A as much as possible, and also adjust the population level to match the optimal wage.
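A numerical sketch of why A drops out, under assumed functional forms - per-capita income w = A·N^(alpha−1) and log-income CLU welfare with critical income y0. These specific forms are my guesses at the model, not necessarily those of the linked derivation:

```python
import math

def optimum(A, alpha, y0):
    # Maximize W(N) = N * (log(w) - log(y0)) with w = A * N**(alpha - 1).
    # The first-order condition log(w) - log(y0) + (alpha - 1) = 0
    # pins down the optimal wage with no dependence on A:
    w_star = y0 * math.exp(1 - alpha)
    N_star = (A / w_star) ** (1 / (1 - alpha))
    return w_star, N_star

alpha, y0 = 0.7, 1000.0
w1, N1 = optimum(1.0, alpha, y0)
w2, N2 = optimum(10.0, alpha, y0)
# Same optimal wage despite 10x the productivity/land factor A:
assert w1 == w2
# Optimal population scales by 10^(1/(1-alpha)):
assert abs(N2 / N1 / 10 ** (1 / (1 - alpha)) - 1) < 1e-9
```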
