Comment author: gjm 05 April 2014 10:25:07PM 3 points [-]

There are several things here I fail to understand.

  1. Why d/dN? If you're looking for optimal income per capita, you need d/dw=0 not d/dN=0.

  2. The result you've allegedly reached is that w = w0 exp(alpha-1) where alpha<1, which means w<w0, which means you're not actually in the regime where net utility equals N[U(w)-U(w0)], so you've been doing calculus on the wrong formulae.

  3. Clearly utility is not only a function of income. (Even considering only money, you need to consider assets as well as income.) Of course considering only income is a handy simplification that may turn something impossibly complicated into something susceptible to analysis, but I think you should be explicit about making that simplification because the importance of things other than money is actually a pretty big deal.

  4. This all seems like a more complicated but still minor variation on simple and familiar observations like these: (a) simple versions of utilitarianism say well-off people should give almost all they have to poorer people; (b) simple versions of average utilitarianism say we should kill all the least happy people; (c) simple versions of total utilitarianism say we should prefer an enormous population of people with just-better-than-nothing lives to a normal-sized population of very happy people. I would expect solutions to (or bullet-biting on) these problems to deal with the more complicated but similarly counterintuitive conclusions presented here (assuming for the sake of argument that either my objections above are wrong or else the conclusions remain when the errors are repaired).

Comment author: ericyu3 07 April 2014 04:04:13PM *  0 points [-]
  1. I was unclear there - I'm finding the optimal wage at the optimal population level, not the maximum possible wage.
  2. Whoops, I meant 1-alpha. Fixed.
  3. Non-income factors are important, but I didn't consider them here because they're less obviously related to the population level.
  4. I was trying to say that even taking resource constraints into account, the critical income and the optimal income don't differ by that much compared to how much countries currently differ in income. Critical-level utilitarianism is supposed to be a "compromise" between total and average utilitarianism, but it would still yield strange conclusions in today's world.

A strange implication of critical-level utilitarianism

-1 ericyu3 05 April 2014 07:54AM

Suppose you have a population of N identical people, each with the same income w. Critical-level utilitarianism (CLU) says that you should maximize utility above a certain "critical level" - in this case, if each person's utility function is U(w), we want to maximize N[U(w) - U(w0)] for some income level w0 below which you don't think life is worth living. (Critical-level utilitarianism doesn't help us pick which value of w0 to use, though.) To choose the optimal level of N, we need to know which combinations of N and w are feasible. Cobb-Douglas production functions (which are essentially power functions) are frequently used to model how much output can be produced with a given amount of input, so I will use them here. If total income is given by the Cobb-Douglas production function Y = A N^alpha, then both the average and the marginal productivity of labor are proportional to N^(alpha - 1), so the average income should scale the same way. If we assume that each person's utility scales as the logarithm of their income, we get that N[U(w) - U(w0)] = N ln(w/w0). Optimizing, we set d(N ln(w/w0))/dN = 0, so ln(w/w0) = 1 - alpha and w = w0 e^(1 - alpha). Usually, alpha is greater than 0 but less than 1, so the optimal per-capita income ranges from the critical income level to e-fold above it. A pessimistic production function with alpha close to 0 would have an optimal per-capita income that's a factor of nearly e above the critical level.
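As a sanity check, the optimum w = w0 e^(1 - alpha) can be verified numerically. This is a minimal sketch; the productivity scale A = 100 and the other parameter values are my illustrative assumptions, not part of the argument:

```python
import numpy as np

# Numerical check of the CLU optimum w = w0 * e^(1 - alpha).
# A (productivity scale), w0, and alpha are illustrative assumptions.
A, w0, alpha = 100.0, 1.0, 0.5

N = np.linspace(1.0, 20_000.0, 2_000_000)  # candidate population sizes
w = A * N ** (alpha - 1.0)                  # per-capita income from Y = A * N^alpha
welfare = N * np.log(w / w0)                # CLU objective: N * [U(w) - U(w0)]

w_star = w[np.argmax(welfare)]
print(w_star, w0 * np.exp(1.0 - alpha))     # both approximately e^0.5 = 1.65
```

The grid search and the closed-form answer agree, independent of the scale A.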

This calculation uncovered an interesting (to me, at least) paradox. The current variation in per-capita incomes across countries is far larger than e-fold, the largest possible gap between the optimal and critical incomes in this model, so according to this model, regardless of the critical income level you choose, either a lot of countries are far below the critical level (and the lives of the people in them are not worth living by a large margin), or a lot of countries are far above the optimal income level (and their populations should increase until their per-capita incomes decrease by a large amount). This implies some strange priorities: if the critical level is high, then it'd be good to reduce the population of a low-income country even if it decreases their per-capita income greatly, and if the critical level is low, poverty reduction in high-income countries is unimportant, since it'd be much better to increase the birth rate of poor people in those countries.

Although this model is very simple, it's fairly robust to changes in the production function or the utility function, since the differences in income between countries are so large. The major assumption I made is that CLU is reasonable. This post applies to total utilitarianism as well, since it is a special case of CLU.

What do you think? Do you agree with CLU or a similar theory? If you do, what are the problems with my model? If you agree with CLU and think this model is basically correct, what does it imply about what an effective altruist's priorities should be?

Comment author: gjm 16 February 2014 01:48:08AM 0 points [-]

The two links you give to discussions of "the statistical value of a life" are discussing very different things. Thing One: An extrapolation (from infinitesimal changes to the change from p=0 to p=1) of the dollar-value to a given individual of their survival. Thing Two: An estimate of the dollar-value placed by society on a person's survival.

Thing One (which is what your VL is measuring) is inevitably going to be very sensitive to the person's wealth. Thing Two needn't be, and in fact isn't (most modern societies are willing to go to about as much trouble to save a poor person's life as a rich person's). I think the $6M figure you cite is more a Thing Two than a Thing One.

If we take your calculations at face value, here is what they tell us. We start with a broadly-plausible estimate that in some sense a life is worth about $6M. We suppose that a "typical" life corresponds to an income of about $50k/year. We do some calculations. And we arrive at the conclusion that the life of a very poor person -- someone whose income is your y0 of $300/year -- is worth something on the order of $250. (!!!)

First reaction: This is a reductio ad absurdum: something must be desperately wrong here. Second reaction: Well, maybe not so much; this is not really about assigning different values to rich and poor people's lives, but about how they, in their very different financial situations, convert between utility and money. Third reaction: No, wait, this really is about assigning different values to these people's lives; in particular there is an income level (not very far from y0, in this particular model) at which the utility reaches zero, and no talk of conversion factors will change that.

So I think you either need to bite the bullet and say that very poor people's lives aren't worth saving, or reconsider some assumptions. (Fiddling with the details of the utility function, etc., as in your closing comments, might move the value assigned to a life-at-income-y0 from, say, $250 to, say, $5k, which -- taken as an indication of how desperately important money is to someone so poor, rather than of the absolute value of their life -- is at least semi-reasonable. But it won't do anything to change the fact that someone sufficiently poor will get zero or negative utility.)

The assumption I would suggest revisiting is the one that says, roughly, that death is like merely not-having-lived in terms of utility.

It seems to me entirely possible, and in fact probably right, that (1) quite a lot of people's lives are bad enough that if we were choosing, godlike, between two possible worlds that differ simply in the addition or subtraction of some of those lives, we could reasonably prefer there to be fewer rather than more of them, but also that (2) once one of those lives is there, ending it is a very bad thing. A life just barely bad enough that the person living it considers death an improvement is probably quite a lot worse than a life just barely bad enough that adding another to the world is neutral.

(Of course quality of life isn't the same thing as income, but that's just a matter of the toy model being used here.)

So this would leave us with the following state of affairs: The life of a rather miserably-off person (for which very low income is a kinda-passable proxy) is bad enough that having more such lives in the world doesn't, as such, improve the world. (So they would have U=0 or even U<0.) But, once that life is there, taking it away or failing to save it is still a very bad thing (because of that person's preferences, and the impact on other people). That seems fair enough. But at this point it's worth noting that those value-of-life estimates are all concerned with the value of saving the life, rather than that of having it exist in the first place. Which probably means that there's still something wrong with the calculation.

It's nearly 2am local time so I'll leave my thoughts in that rather half-baked form.

Comment author: ericyu3 16 February 2014 03:19:18AM 0 points [-]

Thanks for posting such a detailed response!

It didn’t occur to me to distinguish between Thing One and Thing Two, and you’re right that they’re qualitatively quite different, but it shouldn’t make too much of a difference quantitatively. This is because the Thing Two number is basically derived from Thing One estimates, except that everyone is assumed to have the same value-of-life as a “representative” person. Thing One studies do produce values in the range of $6M.

someone sufficiently poor will get zero or negative utility

In reality, very poor people do try to stay alive, so any model that assigns them negative utility is incorrect - it's a good sanity check to verify that this isn't the case. The model I gave in the post suffers from this problem. However, a model where utility becomes negative at sufficiently low incomes is not necessarily incorrect! Since there's a minimum income required for survival (actually a minimum consumption level, since other people can give you free stuff, but I'll ignore the distinction since this is a toy model), very few of the observed poor people will have income smaller than that, since they would quickly die. As long as the zero-utility income level is well below this survival threshold, the model is consistent with the fact that very poor people don't want to die.

Comment author: gjm 15 February 2014 03:26:26PM *  2 points [-]

At least one of us is very confused about pretty much everything here.

Since von Neumann–Morgenstern utility functions are invariant under affine transformations, we can [...]

Not if you're serious about total utilitarianism, which needs to be able to add up utilities and therefore looks quite different (at least when the number of lives can vary) as the constant term varies.

[EDITED to add: The issues below are because the mathematical typesetting got messed up in a way that made + signs disappear; they are not real mistakes and the error has now been fixed in the original post.]

This condition ensures that s is the utility at the minimum income.

I must be misunderstanding. You've written U = psf(y) where the condition in question is f(y0)=0. But this implies U=p.s.0=0 when y=y0. So s is not the utility at minimum income. In fact it looks to me as if s is simply a scaling factor applied to utilities, and as such is perfectly arbitrary.

Then, a little later, you go from VL = s f(y)/f'(y) to s = VL.f'(y) - f(y) but that's completely wrong; it should be s = VL.f'(y)/f(y), which in the log case says s = VL / y log(y/y0); in the case y=y0 this just says that VL=0 whatever s may be, which is not surprising since you deliberately chose to rescale your utilities to make it so ("we can define the utility of being dead as 0").

Comment author: ericyu3 15 February 2014 05:04:29PM 1 point [-]

Not if you're serious about total utilitarianism, which needs to be able to add up utilities and therefore looks quite different (at least when the number of lives can vary) as the constant term varies.

Sorry, I was unclear. I meant that the constant term cannot be determined from empirical studies alone, since it doesn't affect decision-making. Estimates of the "value of life" compare the utility change from a small change in income to the utility change from a small change in survival probability, and the point of my post was to extrapolate these to large changes (creating a new person at a very low income level).

The conclusions are unchanged when the utility of death is nonzero, as long as you only look at "changes" in total utility (and not total utility itself, which will be infinite). For example, if the utility of death is fixed at 1 and your utility is fixed at 2, then creating a copy of you would make total utility "2+2+[lots of others]+1+1+1+..." instead of "2+1+[lots of others]+1+1+1+..." and total utility would increase from infinity to infinity+1. Obviously this is ill-defined mathematically (which is why I set death to 0), but you can see that it still makes sense to talk about utility changes.

[math mistakes]

When Vladimir_Nesov changed the images, the plus signs weren't URL-encoded, so they all disappeared. It's supposed to be U = p*(s + f(y)) and VL = (s + f(y))/f'(y).

What can total utilitarians learn from empirical estimates of the value of a statistical life?

1 ericyu3 15 February 2014 09:23AM

This post was inspired by Carl Shulman's blog post from last month—if you have time, read that first, since this is basically a response to it. My goal here is to combine

  1. Empirical studies of how much people are willing to pay to reduce their risk of death, and
  2. The "total utilitarian" assumption that potential people are as important as existing people, and the value of an additional person is independent of the number of preexisting people
  3. An additional (quite strong!) assumption that the utility gain from being born and becoming an adult is the same as the utility loss from a premature adult death
and see whether it's more effective for a total utilitarian to improve the incomes of existing people or to increase/decrease the total number of people.

Suppose everyone has identical preferences, and only two variables affect expected utility: their probability of survival p and their income y. Since von Neumann–Morgenstern utility functions are invariant under affine transformations, we can define the utility of being dead as 0 and still have one degree of freedom left (two utility functions are equivalent iff they are related by a positive linear transformation). Fixing a reference (minimum) income level y0, we can always write the utility function as

U = p(s + f(y)),

where f is some function defined on [y0, ∞) with f(y0) = 0. This condition ensures that s is the utility at the minimum income. For instance, if utility from income is logarithmic, we can let f(y) = ln(y/y0). A logarithm with any other base can be turned into ln by a linear transformation, so the choice of base doesn't matter.

We can infer s from empirical estimates of the value of a statistical life if we have a hypothesis for the form of f—so total utilitarians should pay a lot of attention to these estimates! If you're willing to pay dy for a small relative increase in your probability of survival, dp/p (as opposed to an absolute increase dp), then your value of life is defined as

VL = dy/(dp/p) = p dy/dp.

If your utility from income takes the same form as s + f(y) and you're rational, then it's also true that

VL = (s + f(y))/f'(y).

In other words, the value of life is the marginal rate of substitution between income and log survival probability. So

s + f(y) = VL f'(y)

and

s = VL f'(y) - f(y).

In the case of f(y) = ln(y/y0), we have

s = VL/y - ln(y/y0).
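As a quick sanity check on the relation VL = (s + f(y))/f'(y), here's a finite-difference computation for the logarithmic case f(y) = ln(y/y0). The parameter values (s, p, y, y0) are arbitrary assumptions chosen only for the check:

```python
import math

# Finite-difference check of VL = (s + f(y))/f'(y) for f(y) = ln(y/y0).
# s, p, y, y0 are arbitrary values chosen for illustration.
s, y0 = 0.5, 300.0
p, y = 0.9, 50_000.0

def U(p, y):
    return p * (s + math.log(y / y0))   # U = p*(s + f(y))

dp = 1e-7
# income y2 that leaves utility unchanged when survival rises to p + dp
y2 = y0 * math.exp(U(p, y) / (p + dp) - s)
VL_numeric = (y - y2) / (dp / p)         # willingness to pay per unit of dp/p
VL_formula = (s + math.log(y / y0)) * y  # (s + f(y))/f'(y), since f'(y) = 1/y
print(VL_numeric, VL_formula)            # agree closely
```

The numerically computed marginal rate of substitution matches the closed-form expression.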

$6 million is a reasonable estimate (although on the low side) for the value of a statistical life. VL is in units of income, so the $6M estimate needs to be translated into an income stream. At an interest rate of 3% over 40 years, this will require payments of ~$257,582 per year. If the $6M estimate was for people making $50,000 a year, then VL/y = 257,582/50,000 ≈ 5.15. With y0 at $300 per year, this gives us s = VL/y - ln(y/y0) ≈ 5.15 - 5.12 ≈ 0.04. It's just a coincidence that s is so close to 0: slightly different parameters will shift s substantially away from that point. I biased all my parameter estimates (except for the interest rate, which I understand very poorly) so that s would have a downward bias, so if my estimates are wrong s is probably higher.
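The arithmetic can be reproduced with a short script. I use a standard ordinary-annuity formula to convert the lump sum into an income stream, so the annual payment comes out slightly different from the ~$257,582 quoted above depending on the payment convention; s still comes out close to 0 either way:

```python
import math

# Convert the $6M lump-sum value of life into a 40-year income stream at 3%
# (ordinary-annuity formula; the exact payment depends on the convention),
# then compute s = VL/y - ln(y/y0).
lump, r, years = 6_000_000.0, 0.03, 40
annuity_factor = (1 - (1 + r) ** -years) / r   # present value of $1/year
VL = lump / annuity_factor                     # roughly $260k per year

y, y0 = 50_000.0, 300.0
s = VL / y - math.log(y / y0)
print(round(s, 3))                             # small and positive

# Implied lump-sum value of a life at the minimum income y0, where f(y0) = 0,
# so the annual value is s * y0: a few hundred dollars.
vl_at_y0 = s * y0 * annuity_factor
print(round(vl_at_y0))
```

The implied value of a life at income y0 is a few hundred dollars, which is the figure gjm's comment flags as "on the order of $250".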

I'm not going to draw any conclusions about what a total utilitarian should do, since there are many problems with this method of estimation:

  • The value-of-statistical-life studies are from high-income countries, so it's questionable to extrapolate to very low incomes.
  • Utility from income probably isn't logarithmic, since people exhibit relative risk aversion.
  • The value of s depends strongly on the interest rate.
  • I assumed that somebody with $300 per year has the same life expectancy as someone with $50K per year. This isn't as big of a problem as it seems. If they live half as long, you can compare two $300/year people versus one $50K/year person and get a similar result.
  • The estimate of s is extremely sensitive to the inputs, partly because it's calculated as the difference of two much larger numbers (caused by the value-of-life estimates being from a high income level), partly because I don't know exactly what level of income value-of-life is calculated at, and partly because the relationship between annual income and a lump sum of money depends on the interest rate (I don't know what rates were used for the value-of-life estimates).
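To make the interest-rate sensitivity concrete, here's a small sweep over rates, holding the other inputs ($6M value of life, 40 years, y = $50k, y0 = $300) fixed. The rates chosen are my own illustrative assumptions:

```python
import math

# How the estimate of s moves with the assumed interest rate.
def s_at(r, lump=6_000_000.0, years=40, y=50_000.0, y0=300.0):
    payment = lump * r / (1 - (1 + r) ** -years)  # lump sum -> annual income
    return payment / y - math.log(y / y0)         # s = VL/y - ln(y/y0)

for r in (0.02, 0.03, 0.05):
    print(f"r = {r:.0%}: s = {s_at(r):+.2f}")
```

With these inputs s even flips sign between 2% and 5%, which illustrates why the estimate is so fragile.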

Any suggestions on how to make a more robust estimate?

(A big thanks to http://lwlatex.appspot.com/ for helping me format the equations!)
Comment author: Gunnar_Zarncke 08 November 2013 04:26:57PM 0 points [-]

I'm not entirely sure whether you

  • propose QALYs as a means to optimize a total utilitarian's goals or
  • discuss what you as a total utilitarian could do best to optimize this goal or
  • something else related to both

I assume the former because you wrote

I chose them because I wanted the social welfare function to be relatively easy to calculate.

In general, if you choose any oversimplified scheme to optimize for, you will not get what you want. What gets measured gets optimized.

The following quotes are from Wikipedia: http://en.wikipedia.org/wiki/Quality-adjusted_life_year

QALYs are empirically known to be oversimplified - more a theoretical economist's tool to derive general optimization potential than a precise tool.

The four theoretical assumptions underlying QALYs are invalid (quality of life should be measured in consistent intervals; life years and QOL should be independent; people should be neutral about risk; and willingness to sacrifice life years should be constant over time).

They are neither recommended for individual health care decisions where they

place[s] disproportionate importance on physical pain or disability over mental health. The effects of a patient's health on the quality of life of others [..] do not figure into these calculations.

nor for the population as an aggregate, where

the weight assigned to a particular condition can vary greatly, depending on the population being surveyed.

Also if you want to use it as a tool to personally rate some means you should consider that

those who do not suffer from the affliction in question will, on average, overestimate the detrimental effect on quality of life, compared to those who are afflicted.

So I propose that you choose a more elaborate tool set if you want to optimize a complex goal. Otherwise you fall into the same trap you want to avoid with UFAI: over-optimizing oversimple goals.

Comment author: ericyu3 08 November 2013 11:30:28PM *  1 point [-]

I wanted a concrete discussion about how a total utilitarian (TU) should act, not one about what exactly their utility function should be. I think total QALYs are at least a better approximation of a TU social welfare function than other simple social welfare functions (life expectancy, GDP per capita, education, reported happiness, etc.), since those are all average measures. For all of these except happiness, you can construct a "total" version:

  • Life expectancy becomes total years lived,
  • GDP per capita becomes total GDP, and
  • Average education level becomes total years of education.

If you don't like how ambiguous QALYs are, you can use total years lived (QALYs without the quality adjustment) or total GDP as social welfare functions (although total GDP seems suspect because a TU might prefer two people living on, say, $10000 a year to one person living on $50000). The total number of adult years lived would also be a reasonable metric.

Basically, since the implied social welfare functions of most donors and charities seem very far from any reasonable TU social welfare function, even fairly oversimplified metrics can be much better than the status quo from a TU's perspective. In general, an effective altruist with unusual values has to worry less about oversimplifying, since even a crude social welfare function can be (from their perspective) much better than what people currently do.

How does the value placed on creating new lives versus improving existing ones affect how an effective altruist should act?

1 ericyu3 08 November 2013 02:48AM

I'm not familiar with the terminology used by people who talk about effective giving, so I'll be using a lot of scare quotes and this post will probably be very imprecise.

Say that you are a total utilitarian (meaning that you think doubling the number of lives is as good as doubling the utility of every life). Even if you believe that increasing the number of lives is good, it's not clear what the marginal cost of additional lives is (for you, given the options available to you). If it is high, then whether you are a total or average utilitarian doesn't matter much, except that a total utilitarian wouldn't want to fund family planning, education, etc. So the effectiveness of donations meant to increase "time lived" strongly affects how differently total-utilitarian and "typical" EAs should behave.

For concreteness, suppose you are trying to maximize the total number of QALYs (quality-adjusted life years) lived in the next 100 years by all people who are alive at any point in this time interval - this is your "social welfare function" as an EA. A big limitation of this is that QALYs don't take non-health-related quality-of-life factors into account; I chose them because I wanted the social welfare function to be relatively easy to calculate.

People who donate to charities generally don't have population growth as an explicit goal, so at first it seems like a total utilitarian EA should act very differently. However, a lot of the most effective charities, as judged by GiveWell, are public health initiatives which greatly increase average QALYs. They probably increase total QALYs as well, although the effect of health improvements on fertility needs to be considered. Another issue is that due to the lack of donors, there aren't many (or any) charities that increase total QALYs in a cost-effective way. Public health initiatives that increase total QALYs probably don't do it in an optimal way, since the increase is just a side effect of their main goal of improved health.

How should the donations of a total-QALYs-maximizing EA differ from:

  1. Your donations if you were an EA acting according to your own values,
  2. How the typical person you know would donate if they were an EA,
  3. How an average-utilitarian EA would donate, and
  4. What GiveWell would recommend?

Also, I've read up on "earning to give," and it seems like a reasonable strategy for EAs who have values shared by many charities. Is this also reasonable for a total-QALYs-maximizing EA? JonahSinick thinks that EAs with "unusual values" might benefit more from earning to give, but this seems strange to me, since there's unlikely to be an effective charity working toward goals that few people share.

Comment author: ericyu3 30 December 2011 10:08:59PM 2 points [-]

I just wrote up my understanding of Solomonoff induction. Unfortunately it's a PDF, since I wanted to try out GNU TeXmacs. This might work for the "Binary Sequence Prediction" section:

http://www.ocf.berkeley.edu/~ehy/solomonoff_lightsaber.pdf

It's my first time doing this kind of thing, so please tell me how I can improve! Also, I tried to be careful, but I'm only a freshman, so I may have seriously misunderstood some things...

Comment author: ericyu3 01 January 2012 05:36:15AM 1 point [-]

Updated.

Comment author: ericyu3 31 December 2011 02:59:04AM *  4 points [-]

Hi! I'm Eric, a freshman at UC Berkeley. I've been lurking on Overcoming Bias/Less Wrong for a long time.

I had been reading OB before LW existed; I don't even remember when I started reading OB (maybe even before high school!). It's too long ago for me to remember clearly, but I think I found OB while I was reading about transhumanism, which I was very interested in. I still agree with the ideas of transhumanism, and I guess I would still identify myself as a transhumanist, but I don't actively read about it much anymore. I read LW less than I used to, but I'm starting to read it more now; LW and OB are basically the only transhumanist-related blogs that I still read.

I guess I like this site because I like things that are interesting and make me think; there are a lot of good and interesting ideas floating around here, and the quality of the posts and comments is excellent. I don't think I've encountered a single other site with such good comments. I like to say that I'm too curious and have too many interests; I spend a lot of time reading about things that interest me and I don't know what I want to do.

Why am I introducing myself and no longer lurking after all these years? I used to be really bad at expressing myself in writing: I wrote slowly and badly, and reading my old blog comments makes me cringe. I was good at reading, and at producing grammatically correct sentences, but I was terrible at actually using written language to get my ideas across. For example, just two years ago (in 11th grade), I scored 80/80 on the Writing Multiple Choice section of the SAT, but only 7/12 on the essay! Now, though, I find that I can write just fine (and I have no idea why this suddenly happened). So I'm finally introducing myself because I can finally write decent posts and comments :)

A quick question for more experienced LW commenters: I posted a comment (http://lesswrong.com/lw/8nr/intuitive_explanation_of_solomonoff_induction/5k5b) on an old, non-promoted post, and it didn't show up in the recent comments section. As a result, no one seems to have even seen it, and I don't know whether my addition was useful or not. How can I make these kinds of contributions visible in the future?
