Link: The Openness-Equality Trade-Off in Global Redistribution
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2509305
A very interesting (draft of a) paper that discusses trade-offs between immigrants' civil/political rights and the number of immigrants allowed. Is it better to decrease inequality within a rich country by treating immigrants well, or is it better to let in more immigrants with fewer rights?
If interventions changing population size are cheap, they may be the best option independent of your population ethics
In this post I'll explain why you might want to support altruistic interventions that change the size of the world population, regardless of how valuable you think additional lives are. The argument relies on two population-changing interventions that combine to produce the effect of a non-population-changing intervention, but at a lower cost.
Suppose you can donate to the following 3 interventions:
- "Growth": increase one future person's income from $500/yr to $5,000/yr for $10,000
- "Plus": cause one more person to be born in a middle-income country (income ~$5,000/yr) for $6,000
- "Minus": cause one less person to be born in a poor country (income ~$500/yr) for $1,000
Together, Plus and Minus cost $7,000 and produce the same aggregate outcome as Growth does for $10,000: one more person living on ~$5,000/yr, one fewer living on ~$500/yr, and no net change in population size. So the combination is preferable to Growth under any population ethics. Two caveats:
- Plus+Minus is quite likely more costly than Growth in reality
- Growth and Plus+Minus are not exactly equivalent, since Growth helps a particular existing person (again, see my last post)
Some real-world interventions that change population size:
- Education about contraception
- Having children yourself (cost varies from person to person)
- Paying others to have children
- Subsidizing contraception
- Subsidizing surrogacy (there are replaceability issues here, but I couldn't find any estimates of supply/demand elasticity)
- Being a surrogate yourself (doesn't cost you any money, but can be unpleasant, so the cost varies from person to person)
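The cost comparison above can be written out explicitly. This is just the post's own made-up figures, arranged as a sanity check:

```python
# Costs (in $) of the three hypothetical interventions from the post.
COST_GROWTH = 10_000   # raise one future person's income from $500/yr to $5,000/yr
COST_PLUS = 6_000      # cause one extra birth at ~$5,000/yr
COST_MINUS = 1_000     # cause one fewer birth at ~$500/yr

# Plus + Minus yields the same aggregate outcome as Growth
# (one more $5,000/yr life, one fewer $500/yr life, population unchanged)
# but at a lower cost.
combo = COST_PLUS + COST_MINUS
print(combo, combo < COST_GROWTH)   # 7000 True
```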
Distinction between "creating/preventing future lives" and "improving future lives that are already expected to exist"?
I'm writing something (mostly for myself right now) about how if you're somewhat of a utilitarian, a very wide range of population ethics principles (total utilitarianism, average utilitarianism, and critical-level utilitarianism with any critical level) will lead to the population size of some countries being strongly non-neutral, in the sense that changing the number of people in those countries is worth a surprisingly large reduction in average income (>2% income reduction for a 1% population increase/decrease).
Part of what I wrote used an assumption that is shared by all the utilitarian population ethics principles I know of: if you prevent the birth of someone with utility X and cause the birth of someone else with utility Y (with Y > X), that's just as good as causing a not-yet-born person to have utility Y instead of X. In fact, population ethics is not needed to make this comparison, since neither outcome changes the population size. But it's not too far-fetched to think that the two situations are different: in the first one, the Y-utility person is a different person from the X-utility person, while in the second one they could be argued to be the same person. Good arguments have been made that the second outcome actually produces a different person, because very small things, like which egg/sperm you came from, can change your identity (Parfit's Nonidentity Problem). So I think my assumption is reasonable, but I'm concerned that I don't know what the best arguments against it are.
What are the most well-known utilitarian or non-utilitarian consequentialist theories that make a distinction between "different future people" and "the same future person"? Is there a consistent way to make this distinction "fuzzy," so that an event like being conceived by a different sperm is less "identity-changing" than being born on the other side of the world to completely different parents?
Population ethics in practice
There are many different ideas about how utilitarians should value the number of future people. Unfortunately, it is difficult to take all of them into account when deciding among public policies, charities, etc. Arguments about principles like total utilitarianism, average utilitarianism, critical-level utilitarianism, etc. often come from a "global" perspective:
- Does the principle imply that we should have a very large population with a very low quality of life? (Repugnant Conclusion)
- If average utility is negative, does the principle imply that it's good to add additional people with slightly less negative utility? (Sadistic Conclusion)
- Is adding additional people valuable when the population is small, but less valuable when it is large? If so, how large does a population have to be to be considered "large"? ("diminishing marginal value" of people)
But 1% of the world population is 70 million people, and virtually no policy will have that large an effect. So when applying population ethics to real decisions, I think it's best to act as if critical-level utilitarianism (CLU) is true, and to frame disagreements as disagreements about the right value of the critical utility level u0, and about which income level corresponds to it. That way, it's much easier to see the practical implications of your viewpoint, and people who disagree in principle may find that they agree in practice about what u0 should be, and therefore about how to choose the best policy/charity/cause/etc. The main exception is existential risk prevention, where success would change the population by a very large amount.
PDF with detailed derivations (uses slightly different notation): https://drive.google.com/file/d/0B-zh2f7_qtukMFhNYkR4alRsSFk/edit?usp=sharing
Economics/demographics question: If a child unexpectedly dies, how much does this shrink the next generation?
The answer seems obvious - the next generation will have one fewer person (in expectation) - but it's not that simple, and it's been bugging me for about a day now.
Suppose you are an average 15-year-old, and your parents are too old to have any more children (they won't have more children to "replace" you). The ~2 children you would have had obviously won't be born. Naïvely that means the next generation will be smaller by 2, but this disagrees with the obvious answer (smaller by 1).
Where this reasoning goes wrong is in assuming that everyone else will still have the same number of children. The sex ratio will shift so that the surviving members of your sex have n more children, and the size of the next generation will decrease by 2 minus n. If n is 1, we get the intuitive answer that there'll be 1 less person.
But there's no reason why n has to be 1 for both sexes! If both a boy and a girl die, the sex ratio is unaffected and the next generation will be 1 smaller, so n has to average to 1, but n may or may not be the same between sexes. Have there been any studies estimating the value of "n" for each sex?
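To illustrate why "n" is model-dependent rather than automatically 1, here is a toy sketch under two stylized fertility assumptions (these are illustrative assumptions, not empirical claims):

```python
def next_gen(males, females, tfr=2.0, model="female-limited"):
    """Size of the next generation under two toy fertility models."""
    if model == "female-limited":
        # births scale with the number of women only
        return tfr * females
    if model == "couple-limited":
        # strict monogamy: births scale with the number of couples
        return tfr * min(males, females)
    raise ValueError(model)

base = next_gen(1000, 1000)
# Under the female-limited model, losing one boy changes nothing (n = 2 for
# males), while losing one girl removes two births (n = 0 for females); if
# deaths hit both sexes equally, the cases average to the intuitive
# "one fewer person".
print(base - next_gen(999, 1000), base - next_gen(1000, 999))  # 0.0 2.0
```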
(I posted this because it's relevant to population ethics, but I'm not entirely sure whether it belongs here, so I also posted it to Reddit. Should questions like this go in Discussion or in an open thread?)
A strange implication of critical-level utilitarianism
Suppose you have a population of N identical people, each with the same income y. Critical-level utilitarianism (CLU) says that you should maximize utility above a certain "critical level" - in this case, if each person's utility function is u(y), we want to maximize N·(u(y) - u(y0)) for some income level y0 below which you don't think life is worth living. (Critical-level utilitarianism doesn't help us pick which value of y0 to use, though.) To choose the optimal level of y, we need to know which combinations of N and y are feasible. Cobb-Douglas production functions (which are essentially power functions) are frequently used to model how much output can be produced with a given amount of input, so I will use them here. If total income is given by the Cobb-Douglas production function Y = A·N^α, then both the average and the marginal productivity of labor are proportional to N^(α-1), so the average income should scale the same way: y = A·N^(α-1). If we assume that each person's utility scales as the logarithm of their income, we are maximizing N·(ln y - ln y0). Optimizing, we set d/dN [N·ln(y/y0)] = ln(y/y0) + (α - 1) = 0, so ln(y/y0) = 1 - α and y = y0·e^(1-α). Usually, α is greater than 0 but less than 1, so the optimal per-capita income ranges from the critical income level to e-fold above it. A pessimistic production function with α = -1 would have an optimal per-capita income that's a factor of e² above the critical level.
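A quick numerical sanity check of this optimum, assuming log utility, Cobb-Douglas total income Y = A·N^α, and arbitrary illustrative parameter values:

```python
import math

def welfare(N, A=1000.0, alpha=0.5, y0=500.0):
    """CLU welfare N * (ln y - ln y0) when total income is A * N**alpha."""
    y = A * N ** (alpha - 1)              # per-capita income
    return N * (math.log(y) - math.log(y0))

# Grid-search the population size that maximizes welfare.
best_N = max((n / 100 for n in range(1, 100_000)), key=welfare)
y_opt = 1000.0 * best_N ** (0.5 - 1)      # per-capita income at the optimum
print(y_opt, 500.0 * math.exp(1 - 0.5))   # matches y* = y0 * e**(1 - alpha)
```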
This calculation uncovered an interesting (to me, at least) paradox. The current variation in per-capita incomes across countries is far larger than e-fold or even e²-fold, so according to this model, regardless of the critical income level you choose, either a lot of countries are far below the critical level (and the lives of the people in them are not worth living by a large margin), or a lot of countries are far above the optimal income level (and their populations should increase until their per-capita incomes decrease by a large amount). This implies some strange priorities: if the critical level is high, then it'd be good to reduce the population of a low-income country even if doing so decreases its per-capita income greatly, and if the critical level is low, poverty reduction in high-income countries is unimportant, since it'd be much better to increase the birth rate of poor people (in high-income countries).
Although this model is very simple, it's fairly robust to changes in the production function or the utility function, since the differences in income between countries are so large. The major assumption I made is that CLU is reasonable. This post applies to total utilitarianism as well, since it is a special case of CLU.
What do you think? Do you agree with CLU or a similar theory? If you do, what are the problems with my model? If you agree with CLU and think this model is basically correct, what does it imply about what an effective altruist's priorities should be?
What can total utilitarians learn from empirical estimates of the value of a statistical life?
This post was inspired by Carl Shulman's blog post from last month—if you have time, read that first, since this is basically a response to it. My goal here is to combine
- Empirical studies of how much people are willing to pay to reduce their risk of death, and
- The "total utilitarian" assumption that potential people are as important as existing people, and the value of an additional person is independent of the number of preexisting people
- An additional (quite strong!) assumption that the utility gain from being born and becoming an adult is the same as the utility loss from a premature adult death
Suppose everyone has identical preferences, and only two variables affect expected utility: their probability of survival p and their income y. Since von Neumann–Morgenstern utility functions are invariant under affine transformations, we can define the utility of being dead as 0 and still have one degree of freedom left (two utility functions are equivalent iff they are related by a positive linear transformation). Fixing a reference (minimum) income level y_min, we can always write the utility function as

U(p, y) = p·(u(y) + a),

where u is some function defined on y ≥ y_min with u(y_min) = 0. This condition ensures that a is the utility at the minimum income. For instance, if utility from income is logarithmic, we can let u(y) = ln(y/y_min). A logarithm with any other base can be turned into ln by a linear transformation, so the choice of base doesn't matter.

We can infer a from empirical estimates of the value of a statistical life if we have a hypothesis for the form of u - so total utilitarians should pay a lot of attention to these estimates! If you're willing to pay dy for a small relative increase in your probability of survival, dp/p (as opposed to an absolute increase dp), then your value of life is defined as

V = dy/(dp/p).

If your utility takes the form above and you're rational, then staying on an indifference curve requires

0 = dU = -p·u'(y)·dy + (u(y) + a)·dp,

since the payment dy reduces income. In other words, the value of life is the marginal rate of substitution between income and log survival probability. So

V = (u(y) + a)/u'(y)

and

a = V·u'(y) - u(y).

In the case of u(y) = ln(y/y_min), we have

a = V/y - ln(y/y_min).

$6 million is a reasonable estimate (although on the low side) for the value of a statistical life. V is in units of income, so the $6M estimate needs to be translated into an income stream. At an interest rate of 3% over 40 years, this works out to payments of ~$257,582 per year. If the $6M estimate was for people making $50,000 a year, then V/y ≈ 5.15. With y_min at $300 per year, this gives us a ≈ 5.15 - ln(50,000/300) ≈ 0.04. It's just a coincidence that a is so close to 0: slightly different parameters will shift a substantially away from that point. I biased all my parameter estimates (except for the interest rate, which I understand very poorly) so that a would have a downward bias, so if my estimates are wrong, a is probably higher.
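The arithmetic above can be reproduced with a short script. This is a sketch: the annuity convention is an assumption (a standard ordinary annuity gives roughly $260K/yr rather than the $257,582 quoted, so the result shifts slightly, but it stays close to 0):

```python
import math

def annuity_payment(pv, rate, years):
    """Annual payment of an ordinary annuity with present value pv."""
    return pv * rate / (1 - (1 + rate) ** -years)

# Hypothetical parameters from the post.
V = annuity_payment(6e6, 0.03, 40)   # value of life as an income stream, $/yr
y, y_min = 50_000.0, 300.0           # study-population income, reference income

# With u(y) = ln(y / y_min):  a = V * u'(y) - u(y) = V / y - ln(y / y_min)
a = V / y - math.log(y / y_min)
print(V, a)   # a comes out slightly positive, close to 0
```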
I'm not going to draw any conclusions about what a total utilitarian should do, since there are many problems with this method of estimation:
- The value-of-statistical-life studies are from high-income countries, so it's questionable to extrapolate to very low incomes.
- Utility from income probably isn't logarithmic, since people exhibit relative risk aversion.
- The value of a depends strongly on the interest rate.
- I assumed that somebody with $300 per year has the same life expectancy as someone with $50K per year. This isn't as big of a problem as it seems. If they live half as long, you can compare two $300/year people versus one $50K/year person and get a similar result.
How does the value placed on creating new lives versus improving existing ones affect how an effective altruist should act?
I'm not familiar with the terminology used by people who talk about effective giving, so I'll be using a lot of scare quotes and this post will probably be very imprecise.
Say that you are a total utilitarian (meaning that you think doubling the number of lives is as good as doubling the utility of every life). Even if you believe that increasing the number of lives is good, it's not clear what the marginal cost of additional lives is (for you, given the options available to you). If it is high, then whether you are a total or average utilitarian doesn't matter much, except that a total utilitarian wouldn't want to fund family planning, education, etc. So the effectiveness of donations meant to increase "time lived" strongly affects how differently total-utilitarian and "typical" EAs should behave.
For concreteness, suppose you are trying to maximize the total number of QALYs (quality-adjusted life years) lived in the next 100 years by all people who are alive at any point in this time interval - this is your "social welfare function" as an EA. A big limitation of this is that QALYs don't take non-health-related quality-of-life factors into account; I chose them because I wanted the social welfare function to be relatively easy to calculate.
People who donate to charities generally don't have population growth as an explicit goal, so at first it seems like a total utilitarian EA should act very differently. However, a lot of the most effective charities, as judged by GiveWell, are public health initiatives which greatly increase average QALYs. They probably increase total QALYs as well, although the effect of health improvements on fertility needs to be considered. Another issue is that due to the lack of donors, there aren't many (or any) charities that increase total QALYs in a cost-effective way. Public health initiatives that increase total QALYs probably don't do it in an optimal way, since the increase is just a side effect of their main goal of improved health.
How should the donations of a total-QALYs-maximizing EA differ from:
- Your donations if you were an EA acting according to your own values,
- How the typical person you know would donate if they were an EA,
- How an average-utilitarian EA would donate, and
- What GiveWell would recommend?
Also, I've read up about "earning to give," and it seems like a reasonable strategy for EAs who have values shared by many charities. Is this also reasonable for a total-QALYs-maximizing EA? JonahSinick thinks that EAs with "unusual values" might benefit more from earning to give, but this seems strange to me, since there's unlikely to be an effective charity working toward goals that few people share.