Link: The Openness-Equality Trade-Off in Global Redistribution

2 ericyu3 18 October 2014 02:45AM

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2509305

A very interesting (draft of a) paper that discusses trade-offs between immigrants' civil/political rights and the number of immigrants allowed. Is it better to decrease inequality within a rich country by treating immigrants well, or is it better to let in more immigrants with fewer rights?

If interventions changing population size are cheap, they may be the best option independent of your population ethics

6 ericyu3 13 August 2014 03:03AM

In this post I'll explain why you might want to support altruistic interventions that change the size of the world population regardless of how valuable you think additional lives are. The argument relies on combining two population-changing interventions that together produce the effect of a non-population-changing intervention, but at a lower cost.

Suppose you can donate to the following 3 interventions:

  • "Growth": increase one future person's income from $500/yr to $5,000/yr for $10,000
  • "Plus": cause one more person to be born in a middle-income country (income ~$5,000/yr) for $6,000
  • "Minus": cause one less person to be born in a poor country (income ~$500/yr) for $1,000
Assume that the interventions are independent, and that donating multiples of the cost produces multiples of the effect without diminishing returns.

The cost estimates are completely made up; the point of this post is to explain what happens if the total cost of Plus and Minus is less than the cost of Growth. The cost of Plus is probably the least well-known, since it's the least popular of the 3. Also, in the real world, you would probably want to spread the impact of $10,000 across at least several people instead of increasing one person's income by 10x, but I think the post makes more sense this way. If you know more reasonable estimates for the costs, please post them!

If you donate to Plus and Minus, the total effect is the same as the effect of Growth in many ways - in the future, there is one more person with income $5,000, one less person with income $500, and the size of the world population remains the same. In my last post, I asked whether consequentialists actually view the two outcomes as equivalent, and people seemed to think yes, so it's reasonable to say that Plus+Minus is just as beneficial as Growth. But Plus+Minus only costs $7,000 while Growth costs $10,000, so regardless of your population ethics, you should prefer donating to Plus+Minus.

But unless your population ethics are "fine-tuned" to make Plus and Minus equally cost-effective, one of them will be clearly better (more cost-effective) than the other. If you think Minus is better than Plus, then Minus is better than Plus+Minus, which is better than Growth, so you should donate exclusively to Minus. The same argument applies if you think Plus is better than Minus. If you donate to only one of Plus and Minus, you will change the size of the world population. So this seems to show that if population-changing interventions are cheap, you should act to change population size regardless of what you think about population ethics. Even if you are very uncertain what the value of a new life is, you can still use your best guess to decide between Plus and Minus as long as you are risk-neutral about how much good you do. 

Numerical example: suppose that Growth yields 100 "points" of benefit, where "point" is an arbitrary unit. Then regardless of population ethics, Plus+Minus yields 100 points as well. How these points are distributed between Plus and Minus depends on your population ethics, however. If you are a total utilitarian, you might say that Minus is worth -20 points and Plus is worth 120 points, and if you're a negative utilitarian, you might say that Minus is worth 150 points and Plus -50 points. If you're an average utilitarian, you might say that Minus is worth 70 and Plus is worth 30. But these all sum up to 100, and they would all choose Plus or Minus over Growth: Plus for the total utilitarian and Minus for the others.
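The bookkeeping above can be sketched in a few lines of Python. The costs and "point" values are the post's made-up numbers, and the decision rule is simply points per dollar:

```python
# Toy comparison of the three interventions. Costs and "points" are the
# post's made-up numbers, not real cost-effectiveness estimates.
COSTS = {"Growth": 10_000, "Plus": 6_000, "Minus": 1_000}

POINTS = {
    "total utilitarian":    {"Growth": 100, "Plus": 120, "Minus": -20},
    "negative utilitarian": {"Growth": 100, "Plus": -50, "Minus": 150},
    "average utilitarian":  {"Growth": 100, "Plus": 30,  "Minus": 70},
}

for view, pts in POINTS.items():
    # Plus + Minus always matches Growth's 100 points...
    assert pts["Plus"] + pts["Minus"] == pts["Growth"]
    # ...but the best single intervention is the one with the most
    # points per dollar, and it is never Growth.
    best = max(COSTS, key=lambda name: pts[name] / COSTS[name])
    print(f"{view}: {best}")
```

The total utilitarian ends up donating to Plus and the other two to Minus, matching the conclusion above.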

What might be wrong with this reasoning? I can think of a few things:
  1. Plus+Minus is more costly than Growth in reality (quite likely)
  2. Growth and Plus+Minus are actually not equivalent, since Growth actually helps a particular person (again, see my last post)
I'm really curious about what the costs of economic-growth and population interventions are. I'd guess that population interventions would be competitive with unconditional cash transfer programs like GiveDirectly, but I don't know that much about their effectiveness, and I don't know whether there are economic interventions that are more cost-effective than cash transfers. Here are some population interventions that can be done or funded by individuals:
  • Education about contraception
  • Having children yourself (cost varies from person to person)
  • Paying others to have children
  • Subsidizing contraception
  • Subsidizing surrogacy (there are replaceability issues here, but I couldn't find any estimates of supply/demand elasticity)
  • Being a surrogate yourself (doesn't cost you any money, but can be unpleasant, so the cost varies from person to person)
Have people made estimates of how cost-effective these are? The Plus+Minus vs. Growth hypothetical doesn't work if Growth is actually cheaper, so I want to know if I'm thinking too much about something irrelevant!

 

Distinction between "creating/preventing future lives" and "improving future lives that are already expected to exist"?

5 ericyu3 12 August 2014 06:29AM

I'm writing something (mostly for myself right now) about how if you're somewhat of a utilitarian, a very wide range of population ethics principles (total utilitarianism, average utilitarianism, and critical-level utilitarianism with any critical level) will lead to the population size of some countries being strongly non-neutral, in the sense that changing the number of people in those countries is worth a surprisingly large reduction in average income (>2% income reduction for a 1% population increase/decrease).

Part of what I wrote used an assumption that is shared by all the utilitarian population ethics principles I know of: if you prevent the birth of someone with utility X and cause the birth of someone else with utility Y (with Y > X), that's just as good as causing a not-yet-born person to have utility Y instead of X. In fact, population ethics is not needed to make this comparison, since neither outcome changes the population size. But it's not too far-fetched to think that the two situations are different: in the first one, the Y-utility person is a different person from the X-utility person, while in the second one they could be argued to be the same person. Good arguments have been made that the second outcome actually produces a different person, because very small things, like which egg/sperm you came from, can change your identity (Parfit's Nonidentity Problem). So I think my assumption is reasonable, but I'm concerned that I don't know what the best arguments against it are.

What are the most well-known utilitarian or non-utilitarian consequentialist theories that make a distinction between "different future people" and "the same future person"? Is there a consistent way to make this distinction "fuzzy", so that an event like being conceived by a different sperm is less "identity-changing" than being born on the other side of the world to completely different parents?

Population ethics in practice

3 ericyu3 08 August 2014 10:40PM

There are many different ideas about how utilitarians should value the number of future people. Unfortunately, it is difficult to take all of them into account when deciding among public policies, charities, etc. Arguments about principles like total utilitarianism, average utilitarianism, critical-level utilitarianism, etc. often come from a "global" perspective:

  • Does the principle imply that we should have a very large population with a very low quality of life? (Repugnant Conclusion)
  • If average utility is negative, does the principle imply that it's good to add additional people with slightly less negative utility? (Sadistic Conclusion)
  • Is adding additional people valuable when the population is small, but less valuable when it is large? If so, how large does a population have to be to be considered "large"? ("diminishing marginal value" of people)
What these thought experiments have in common is that they aren't very good for making decisions. For instance, simply adding the condition "avoid the Repugnant Conclusion" to a cost-benefit analysis isn't very useful, since it doesn't give any concrete estimate of the value of additional lives. In this post, I'll give a heuristic that lets total, average, and critical-level utilitarianism be analyzed the same way for most decisions. For simplicity, I'll assume that everyone is identical; if people aren't identical, you need to explicitly normalize utility functions before comparing them, but as long as you do that, the heuristic is still valid.

Suppose you have N people with utilities u1, ..., uN, and average utility uavg. Total utilitarianism (TU) would maximize the objective function wTU(N, uavg) = N*uavg. Average utilitarianism (AU) would maximize wAU(N, uavg) = uavg, and critical-level utilitarianism would maximize wCLU(N, uavg) = N*(uavg − u0) for some "critical utility" u0. The interpretation is that only lives with utility above u0 are worth living.

It is easy to use CLU in a cost-benefit analysis: creating an additional person with utility u is as valuable as raising the utility of an existing person from u0 to u. For example, if utility is estimated using income, and $1000/year is the income level corresponding to u0, then creating a person with an income of $2000/year is about as good as doubling the income of someone making $1000/year. TU is the special case of CLU with u0 = 0, but if there is disagreement about what "zero utility" means, you can estimate the corresponding income level to estimate the magnitude of the disagreement - disagreement between $400 and $500/year is a lot less serious than disagreement between $400 and $40,000/year.
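As a quick sanity check of this equivalence, here is a minimal sketch assuming log utility of income and the $1,000/year critical income from the example:

```python
import math

# CLU value of outcomes, measured in log-income units. The $1,000/yr
# critical income is the post's example figure; log utility of income
# is an assumption.
y_crit = 1_000

def value_of_new_person(income):
    """CLU value of creating a person with the given income."""
    return math.log(income / y_crit)

def value_of_raising(y_from, y_to):
    """Value of raising an existing person's income from y_from to y_to."""
    return math.log(y_to / y_from)

# Creating a $2,000/yr person is as good as doubling a $1,000/yr income:
print(math.isclose(value_of_new_person(2_000),
                   value_of_raising(1_000, 2_000)))  # True
```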

In general, AU is not a special case of CLU: CLU's objective function is affected by pure changes in population, while AU's is not (∂wCLU/∂N != 0 unless uavg = u0). However, for small changes in N and uavg, AU is equivalent to CLU with u0 = uavg. So although AU and CLU are very different "globally", they are equivalent "locally" with the right choice of u0.

How small is a small change? Define the relative value of two choices as r = (change in w under Choice 1)/(change in w under Choice 2). If r > 1, Choice 1 is better, and if r < 1, Choice 2 is better. Then the discrepancy between AU and CLU is indicated by rAU / rCLU: if AU favors Choice 1 more than CLU does, this ratio will be larger. As it turns out, rAU / rCLU ≈ 1 - (ΔN / N) to first order, where ΔN is the population difference between the two choices. If the population is 1% higher under Choice 1 than under Choice 2, the discrepancy is only 1%, and as long as r is not extremely close to 1, AU and CLU will agree on which one is better.
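A quick numerical check of this local equivalence (all numbers illustrative): set CLU's critical level to the current average utility and compare two hypothetical choices whose resulting populations differ by 1%.

```python
# AU vs. CLU on a small decision. Baseline numbers are illustrative.
N, u_avg = 7_000_000_000, 10.0    # baseline population, average utility
u0 = u_avg                        # CLU critical level matched to AU

def dw_AU(dN, du):                # change in the AU objective (u_avg)
    return du

def dw_CLU(dN, du):               # change in N*(u_avg - u0)
    return (N + dN) * (u_avg + du - u0) - N * (u_avg - u0)

choice1 = (0.01 * N, 0.5)         # population 1% higher than choice2
choice2 = (0.0, 0.4)              # (change in N, change in u_avg)

r_AU = dw_AU(*choice1) / dw_AU(*choice2)
r_CLU = dw_CLU(*choice1) / dw_CLU(*choice2)
print(r_AU / r_CLU)               # ~0.99: a 1% discrepancy, as claimed
```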

But 1% of the world population is 70 million people, and virtually no policy will have that large of an effect. So when applying population ethics to real decisions, I think it's best to act as if CLU is true, and frame disagreements as disagreements about the right value of u0, and which income level corresponds to it. That way, it's much easier to see the practical implications of your viewpoint, and people who disagree in principle may find that they agree in practice about what u0 should be, and therefore about how to choose the best policy/charity/cause/etc. The main exception is existential risk prevention, where success will change the population by a very large amount.

PDF with detailed derivations (uses slightly different notation): https://drive.google.com/file/d/0B-zh2f7_qtukMFhNYkR4alRsSFk/edit?usp=sharing

Economics/demographics question: If a child unexpectedly dies, how much does this shrink the next generation?

1 ericyu3 07 August 2014 06:53PM

The answer seems obvious - the next generation will have one fewer person (in expectation) - but it's not that simple, and it's been bugging me for about a day now.

Suppose you are an average 15-year-old, and your parents are too old to have any more children (they won't have more children to "replace" you). The ~2 children you would have had obviously won't be born. Naïvely that means the next generation will be smaller by 2, but this disagrees with the obvious answer (smaller by 1).

Where this reasoning goes wrong is in assuming that everyone else will still have the same number of children. The sex ratio will shift so that the surviving members of your sex have n more children, and the size of the next generation will decrease by 2 minus n. If n is 1, we get the intuitive answer that there'll be 1 less person.

But there's no reason why n has to be 1 for both sexes! If both a boy and a girl die, the sex ratio is unaffected and the next generation will be 1 smaller, so n has to average to 1, but n may or may not be the same between sexes. Have there been any studies estimating the value of "n" for each sex?
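The bookkeeping here is simple enough to write down. The per-sex n values below are hypothetical; the point is only that they must average to 1:

```python
# Change in the next generation when one child dies and the surviving
# members of the same sex have n extra children. The 2.0 is the expected
# children per person; the per-sex n values are hypothetical.
EXPECTED_KIDS = 2.0

def next_gen_change(n):
    return -(EXPECTED_KIDS - n)

# If one boy and one girl die, symmetry pins the *average* n at 1, so
# together the two deaths shrink the next generation by exactly 2 ...
n_boys, n_girls = 1.5, 0.5        # ... even if the per-sex n's differ
total = next_gen_change(n_boys) + next_gen_change(n_girls)
print(total)                       # -2.0, i.e. one fewer person per death
```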

(I posted this because it's relevant to population ethics, but I'm not entirely sure whether it belongs here, so I also posted it to Reddit. Should questions like this go in Discussion or in an open thread?)

A strange implication of critical-level utilitarianism

-1 ericyu3 05 April 2014 07:54AM

Suppose you have a population of N identical people, each with the same income y. Critical-level utilitarianism (CLU) says that you should maximize utility above a certain "critical level" - in this case, if each person's utility function is u(y), we want to maximize N·(u(y) − u(y0)) for some income level y0 below which you don't think life is worth living. (Critical-level utilitarianism doesn't help us pick which value of y0 to use, though.) To choose the optimal level of N, we need to know which (N, y) combinations are feasible. Cobb-Douglas production functions (which are essentially power functions) are frequently used to model how much output can be produced with a given amount of input, so I will use them here. If total income is given by the Cobb-Douglas production function Y = A·N^α, then both the average and the marginal productivity of labor are proportional to N^(α−1), so the average income should scale the same way. If we assume that each person's utility scales as the logarithm of their income, we get that w = N·(ln(y) − ln(y0)) with y = A·N^(α−1). Optimizing, we set dw/dN = 0, so ln(y/y0) = 1 − α and y = e^(1−α)·y0. Usually, α is greater than 0 but less than 1, so the optimal per-capita income ranges from the critical income level to e-fold above it. A pessimistic production function with α = −1 would have an optimal per-capita income that's a factor of e² above the critical level.
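The optimum can be checked numerically. In this sketch (parameter values A, α, and critical income y0 are illustrative), welfare is N times the log of per-capita income above the critical level, per-capita income follows the Cobb-Douglas average product y = A·N^(α−1), and a crude grid search recovers the claimed optimum y = e^(1−α)·y0:

```python
import math

# CLU welfare under a Cobb-Douglas production function. A, alpha, and
# the critical income y0 are illustrative choices, not estimates.
A, alpha, y0 = 1_000.0, 0.5, 100.0

def welfare(N):
    y = A * N ** (alpha - 1)          # per-capita income
    return N * (math.log(y) - math.log(y0))

# crude grid search for the welfare-maximizing population size
grid = [1 + i * 0.001 for i in range(99_000)]
N_star = max(grid, key=welfare)
y_star = A * N_star ** (alpha - 1)

print(y_star / y0)                    # ≈ e^(1 - alpha) ≈ 1.65
```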

This calculation uncovered an interesting (to me, at least) paradox. The current variation in per-capita incomes across countries is far larger than e-fold or even e²-fold, so according to this model, regardless of the critical income level you choose, either a lot of countries are far below the critical level (and the lives of the people in them are not worth living by a large margin), or a lot of countries are far above the optimal income level (and their populations should increase until their per-capita incomes decrease by a large amount). This implies some strange priorities: if the critical level is high, then it'd be good to reduce the population of a low-income country even if doing so greatly decreases its per-capita income, and if the critical level is low, poverty reduction in high-income countries is unimportant, since it'd be much better to increase the birth rate of poor people (even those in high-income countries).

Although this model is very simple, it's fairly robust to changes in the production function or the utility function, since the differences in income between countries are so large. The major assumption I made is that CLU is reasonable. This post applies to total utilitarianism as well, since it is a special case of CLU.

What do you think? Do you agree with CLU or a similar theory? If you do, what are the problems with my model? If you agree with CLU and think this model is basically correct, what does it imply about what an effective altruist's priorities should be?

What can total utilitarians learn from empirical estimates of the value of a statistical life?

1 ericyu3 15 February 2014 09:23AM

This post was inspired by Carl Shulman's blog post from last month—if you have time, read that first, since this is basically a response to it. My goal here is to combine

  1. Empirical studies of how much people are willing to pay to reduce their risk of death,
  2. The "total utilitarian" assumption that potential people are as important as existing people, and that the value of an additional person is independent of the number of preexisting people, and
  3. An additional (quite strong!) assumption that the utility gain from being born and becoming an adult is the same as the utility loss from a premature adult death,
and see whether it's more effective for a total utilitarian to improve the incomes of existing people or to increase/decrease the total number of people.

Suppose everyone has identical preferences, and only two variables affect expected utility: their probability of survival p and their income y. Since von Neumann–Morgenstern utility functions are invariant under positive affine transformations, we can define the utility of being dead as 0 and still have one degree of freedom left (two utility functions are equivalent iff they are related by a positive linear transformation). Fixing a reference (minimum) income level y_min, we can always write the utility function as

U(p, y) = p · (c + f(y/y_min)),

where f is some function defined on [1, ∞) with f(1) = 0. This condition ensures that c is the utility of surviving at the minimum income. For instance, if utility from income is logarithmic, we can let f(x) = ln(x). A logarithm with any other base can be turned into ln by a linear transformation, so the choice of base doesn't matter.

We can infer c from empirical estimates of the value of a statistical life if we have a hypothesis for the form of f—so total utilitarians should pay a lot of attention to these estimates! If you're willing to pay Δy for a small relative increase in your probability of survival, Δp/p (as opposed to an absolute increase Δp), then your value of life is defined as

V = Δy / (Δp/p).

If your utility from income takes the form of U above and you're rational, then it's also true that

V = (∂U/∂ ln p) / (∂U/∂y).

In other words, the value of life is the marginal rate of substitution between income and log survival probability. So

V = y_min · (c + f(y/y_min)) / f′(y/y_min)

and

c = V · f′(y/y_min)/y_min − f(y/y_min).

In the case of f(x) = ln(x), we have

c = V/y − ln(y/y_min).

$6 million is a reasonable estimate (although on the low side) for the value of a statistical life. V is in units of income, so the $6M estimate needs to be translated into an income stream. At an interest rate of 3% over 40 years, this will require payments of ~$257,582 per year. If the $6M estimate was for people making $50,000 a year, then V/y ≈ 5.15. With y_min at $300 per year, this gives us c ≈ 5.15 − ln(50,000/300) ≈ 0.04. It's just a coincidence that c is so close to 0: slightly different parameters will shift c substantially away from that point. I biased all my parameter estimates (except for the interest rate, which I understand very poorly) so that c would have a downward bias, so if my estimates are wrong, c is probably higher.
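The arithmetic can be reproduced directly; writing c for the constant in the utility function, the inputs below are the post's illustrative figures. Note that the standard ordinary-annuity formula gives ~$259,600/yr rather than the post's ~$257,582, so c comes out slightly higher (around 0.08) but still small:

```python
import math

# Back-of-the-envelope estimate of c. All inputs are the post's
# illustrative figures, not authoritative estimates.
VSL = 6_000_000        # value of a statistical life, $ (lump sum)
r, years = 0.03, 40    # interest rate and horizon for annuitization
y = 50_000             # income level of the VSL studies, $/yr
y_min = 300            # reference (minimum) income, $/yr

# convert the lump sum into an equivalent annual income stream
annuity_factor = (1 - (1 + r) ** -years) / r
V = VSL / annuity_factor            # ≈ $259,600/yr

# c = V/y - ln(y/y_min), the f(x) = ln(x) case
c = V / y - math.log(y / y_min)
print(round(c, 2))                  # small and close to 0
```

The sensitivity is easy to see here: nudging the interest rate or the assumed study income by a few percent moves c by an amount comparable to its own size.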

I'm not going to draw any conclusions about what a total utilitarian should do, since there are many problems with this method of estimation:

  • The value-of-statistical-life studies are from high-income countries, so it's questionable to extrapolate to very low incomes.
  • Utility from income probably isn't logarithmic, since people exhibit relative risk aversion.
  • The value of c depends strongly on the interest rate.
  • I assumed that somebody with $300 per year has the same life expectancy as someone with $50K per year. This isn't as big of a problem as it seems. If they live half as long, you can compare two $300/year people versus one $50K/year person and get a similar result.
The estimate of c is extremely sensitive to the inputs, partly because it's calculated as the difference of two much larger numbers (caused by the value-of-life estimates being from a high income level), partly because I don't know exactly what level of income value-of-life is calculated at, and partly because the relationship between annual income and a lump sum of money depends on the interest rate (I don't know what rates were used for the value-of-life estimates).

Any suggestions on how to make a more robust estimate?

(A big thanks to http://lwlatex.appspot.com/ for helping me format the equations!)

How does the value placed on creating new lives versus improving existing ones affect how an effective altruist should act?

1 ericyu3 08 November 2013 02:48AM

I'm not familiar with the terminology used by people who talk about effective giving, so I'll be using a lot of scare quotes and this post will probably be very imprecise.

Say that you are a total utilitarian (meaning that you think doubling the number of lives is as good as doubling the utility of every life). Even if you believe that increasing the number of lives is good, it's not clear what the marginal cost of additional lives is (for you, given the options available to you). If it is high, then whether you are a total or average utilitarian doesn't matter much, except that a total utilitarian wouldn't want to fund family planning, education, etc. So the effectiveness of donations meant to increase "time lived" strongly affects how differently total-utilitarian and "typical" EAs should behave.

For concreteness, suppose you are trying to maximize the total number of QALYs (quality-adjusted life years) lived in the next 100 years by all people who are alive at any point in this time interval - this is your "social welfare function" as an EA. A big limitation of this is that QALYs don't take non-health-related quality-of-life factors into account; I chose them because I wanted the social welfare function to be relatively easy to calculate.

People who donate to charities generally don't have population growth as an explicit goal, so at first it seems like a total utilitarian EA should act very differently. However, a lot of the most effective charities, as judged by GiveWell, are public health initiatives which greatly increase average QALYs. They probably increase total QALYs as well, although the effect of health improvements on fertility needs to be considered. Another issue is that due to the lack of donors, there aren't many (or any) charities that increase total QALYs in a cost-effective way. Public health initiatives that increase total QALYs probably don't do it in an optimal way, since the increase is just a side effect of their main goal of improved health.

How should the donations of a total-QALYs-maximizing EA differ from:

  1. Your donations if you were an EA acting according to your own values,
  2. How the typical person you know would donate if they were an EA,
  3. How an average-utilitarian EA would donate, and
  4. What GiveWell would recommend?

Also, I've read up about "earning to give," and it seems like a reasonable strategy for EAs who have values shared by many charities. Is this also reasonable for a total-QALYs-maximizing EA? JonahSinick thinks that EAs with "unusual values" might benefit more from earning to give, but this seems strange to me, since there's unlikely to be an effective charity working toward goals that few people share.
