Comment author: Jiro 07 August 2014 08:24:45PM -3 points [-]

If you have one less child, the next generation would use one person's worth less of resources, and would therefore be able to support its population in marginally more comfort. Everyone else will then have marginally more children, because their children can be supported in marginally more comfort, which will wipe out most of the change from having one less child yourself.

Comment author: ericyu3 07 August 2014 11:22:30PM 0 points [-]

To clarify, I'm asking about "sex-ratio effects" (which are always important) and not "resource effects" (which only matter when reproduction is resource-limited).

Comment author: jimmy 21 June 2014 07:37:27PM *  2 points [-]

The example that comes to mind to show how the sex thing isn't a problem is that of a robot car with the goal of driving as many miles as possible. Every day it burns through all its fuel and fuels up again. Right after it fuels up, it has no desire for further fuel - more fuel simply does not help it go further at that point, and forcing more in can be detrimental. Clearly not contradictory.

You could have a similar situation with a couple wanting sex iff they haven't had sex in a day, or with someone wanting an orange if they've just eaten an apple but an apple if they've just eaten an orange.
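Here's a minimal sketch of the robot-car version (the constants and names are mine, purely illustrative): utility is total miles driven, and the "desire" for fuel is just the marginal miles another liter would buy given the current tank level. The induced wants change with state; the utility function doesn't.

```python
# Illustrative sketch: a fixed utility (total miles driven) whose
# induced "desire for fuel" depends on the current state of the tank.
TANK_CAPACITY = 50.0    # liters (arbitrary)
MILES_PER_LITER = 10.0  # arbitrary

def miles_from(fuel: float) -> float:
    """Miles obtainable from the fuel currently in the tank."""
    return min(fuel, TANK_CAPACITY) * MILES_PER_LITER

def marginal_value_of_fuel(fuel: float, extra: float = 1.0) -> float:
    """Extra miles gained from one more liter, given the current level."""
    return miles_from(fuel + extra) - miles_from(fuel)

print(marginal_value_of_fuel(10.0))  # 10.0 miles -> the car wants fuel
print(marginal_value_of_fuel(50.0))  # 0.0 miles  -> tank full, no desire
```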

To strictly show that something violates vNM axioms, you'd have to show that this behavior (in context) can't be fulfilling any preferences better than other options that the agent is aware of - or at least be able to argue that the revealed utility function is contrived and unlikely to hold up in other situations (not what the agent "really wants").

Constantly wanting what one doesn't have can have this defect. If I keep paying you to switch my apple for your orange and back (without actually eating either), then you have a decent case, if you're pretty confident I'm not actually fulfilling my desire to troll you ;)
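A toy money-pump illustration of that case (my own construction): an agent that will pay a small fee to swap apple and orange in both directions ends up strictly poorer while holding exactly what it started with, which is the usual sign that no fixed utility function over outcomes rationalizes the behavior.

```python
# Toy money pump: the agent pays a fee to swap in *both* directions.
fee = 0.01
money, holding = 10.0, "apple"
for _ in range(100):  # 100 paid swaps, ending back where we started
    holding = "orange" if holding == "apple" else "apple"
    money -= fee

print(holding, round(money, 2))  # apple 9.0 -- same fruit, $1.00 poorer
```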

The "want's a relationship when single" and "wants to be single when not" thing does look like such a violation to me. If you let him flip flop as often as he desires, he's not going to end up happily endorsing his past actions. If you offered him a pill that would prevent him from flip flopping, he very well may take it. So there's a contradiction there.

To bring human-specific psychology into it, it's not that his inherent desires are contradictory, but that he wants something like "freedom", which he doesn't know how to get in a relationship, and something like "intimacy", which he doesn't know how to get while single. It's not that he wants intimacy when single and freedom when not; it's that he wants both always, but the unfulfilled need is the salient one.

Picture me standing on your left foot. "Oww! Get off my left foot!" Then I switch to the right: "Ahh! Get off my right foot!" If you're not very quick and/or the pain is overwhelming, it might take you a few iterations to realize the situation you're in and to put the pain aside while you think of a way to get me off both feet (intimacy when single/freedom in a relationship). Or if you can't have that, it's another challenge to figure out what you want to do about it.

I wouldn't model you as "just VNM-irrational", even if your external behaviors are ineffective for everything you might want. I'd model you as "not knowing how to be VNM-rational in the presence of strong pain(s)", and would expect you to start behaving more effectively when shown how.

(And that is what I find, although showing someone how to be more rational is not trivial, and "here's a proof of the inconsistency of your actions, now pick a side and stop feeling the desire for the other side" is almost never sufficient. You have to be able to model the specific way that they're stuck and meet them there.)

tl;dr: We're not VNM-rational because we don't know how to be, not because it's not something we're trying to do.

Comment author: ericyu3 30 July 2014 12:15:16AM 2 points [-]

How do you distinguish his preferences being irrationally inconsistent (he is worse off from entering and leaving relationships repeatedly) from him truly wanting to be in relationships periodically (like how it's rational to alternate between sleeping and waking rather than always doing one or the other)?

If there's a pill that can make him stop switching (but doesn't change his preferences), one of two things will happen: either he'll never be in a relationship (prevented from entering), or he'll stay in his current relationship forever (prevented from leaving). I wouldn't be surprised if he dislikes both of the outcomes and decides not to take the pill.

The pill could instead change his preferences so that he no longer wants to flip-flop, but this argument seems too general - why not just give him a pill that makes him like everything much more than he does now? If my behavior is irrational, I should be able to make myself better off simply by changing my behavior, without having to modify my preferences.

Comment author: [deleted] 18 June 2014 03:15:24AM 1 point [-]

Can you give me an example of this in reality? The math works, but I notice I am still confused, in that values should not just be a variable in the utility function... they should in fact change the utility function itself.

If they're relegated to a variable, that seems to go against the original stated goal of wanting moral progress, in which case the utility function was originally constructed wrong.

Comment author: ericyu3 18 June 2014 06:58:22AM 0 points [-]

Define the "partial utility function" as how utility changes with x holding c constant (i.e. U(x) at a particular value of c). Changes in values change this partial utility function, but they never change the full utility function U(c,x). A real-world example: if you prefer to vote for the candidate that gets the most votes, then your vote will depend strongly on the other voters' values, but this preference can still be represented by a single, unchanging utility function.

I don't understand your second paragraph - why would having values as a variable be bad? It's certainly possible to change the utility function, but AlexMennen's point was that future values could still be taken into account even with a static utility function. If the utility function is constant and also depends on current values, then it needs to take those values into account as an argument (i.e. a variable).

Comment author: [deleted] 18 June 2014 01:59:06AM 2 points [-]

How are you changing the values you optimize for without changing your utility function? This now seems even more handwavey to me.

Comment author: ericyu3 18 June 2014 02:31:09AM *  0 points [-]

Consider a very simple model where the world has just two variables, represented by real numbers: cultural values (c) and the other variable (x). Our utility function is U(c, x)=c*x, which is clearly constant over time. However, our preferred value of x will strongly depend on cultural values: if c is negative, we want to minimize x, while if c is positive, we want to maximize x.

This model is so simple that it behaves quite strangely (e.g. it says you want to pick cultural values that view the current state of the world favorably), but it shows that by adding complexity to your utility function, you can make it depend on many things without actually changing over time.
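A direct transcription of the toy model above (the range [-1, 1] for x is my own illustrative addition): U(c, x) = c*x is fixed for all time, yet the preferred x flips sign with the cultural values c.

```python
# Static utility function U(c, x) = c * x; never changes over time.
def utility(c: float, x: float) -> float:
    return c * x

def preferred_x(c: float, x_min: float = -1.0, x_max: float = 1.0) -> float:
    """Choose x in [x_min, x_max] to maximize U given current values c."""
    return x_max if c > 0 else x_min

for c in (-2.0, 0.5):
    x = preferred_x(c)
    print(f"c = {c:+}: choose x = {x:+}, U = {utility(c, x):+}")
```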

In response to Links!
Comment author: ericyu3 08 June 2014 08:13:44PM 0 points [-]

There's a new subreddit for GIFs of cute baby elephants: http://www.reddit.com/r/babyelephantgifs

Comment author: gjm 09 April 2014 01:42:30PM 0 points [-]
  1. Oh, I see. You're taking wage to be determined by production, which in turn is determined by population according to the Cobb-Douglas formula, and then asking "what's the optimal population?". Got it.

  2. Yup, better now.

So, anyway, now that I understand your argument better, there's something that looks both important and wrong, but maybe I'm misunderstanding. You're assuming that A -- the constant factor in the Cobb-Douglas formula -- is the same for all countries. But surely it isn't, and surely this accounts for a large amount of the variation in productivity and wealth between countries. It seems like this would lead to big differences in w between countries even if they're all close to optimal population.

Comment author: ericyu3 10 April 2014 08:47:44AM *  0 points [-]

The A factor drops out of the final expression for the optimal wage. If the form of the production function is the same in two countries, their optimal wages will be the same as well, though their optimal populations will obviously differ. For example, if country 1's A is 10 times higher than country 2's but their values of alpha are the same, then their optimal wages are equal, but country 1's optimal population is higher by a factor of 10^(1/(1-alpha)).

Here, A lumps together productivity and the amount of land a country has (so a large poor country may have a higher A than a small rich one). Obviously, increasing A will increase welfare, but it won't change the optimal wage (if the country is already above that level, increasing A will move wages further from the optimum). The best thing to do, according to this model, is to increase A as much as possible and also adjust the population to match the optimal wage.
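A quick numeric check of the 10^(1/(1-alpha)) claim; the original post's exact setup isn't shown in this thread, so I assume the standard Cobb-Douglas form Y = A * N^alpha with the wage equal to the marginal product, and all parameter values below are arbitrary.

```python
# Assumed: w = dY/dN = alpha * A * N**(alpha - 1); solve for N at fixed w.
def population_at_wage(A: float, alpha: float, w: float) -> float:
    """Population at which the marginal-product wage equals w."""
    return (alpha * A / w) ** (1.0 / (1.0 - alpha))

alpha, w = 0.7, 25.0  # arbitrary test values
n1 = population_at_wage(A=10.0, alpha=alpha, w=w)
n2 = population_at_wage(A=1.0, alpha=alpha, w=w)
print(n1 / n2)                        # ratio of the two optimal populations
print(10.0 ** (1.0 / (1.0 - alpha)))  # 10^(1/(1-alpha)) -- they agree
```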

Comment author: gjm 05 April 2014 10:25:07PM 3 points [-]

There are several things here I fail to understand.

  1. Why d/dN? If you're looking for optimal income per capita, you need d/dw=0 not d/dN=0.

  2. The result you've allegedly reached is that w = w0 exp(alpha-1) where alpha<1, which means w<w0, which means you're not actually in the regime where net utility equals N[U(w)-U(w0)], so you've been doing calculus on the wrong formulae.

  3. Clearly utility is not only a function of income. (Even considering only money, you need to consider assets as well as income.) Of course considering only income is a handy simplification that may turn something impossibly complicated into something susceptible to analysis, but I think you should be explicit about making that simplification because the importance of things other than money is actually a pretty big deal.

  4. This all seems like a more complicated but still minor variation on simple and familiar observations like these: (a) simple versions of utilitarianism say well-off people should give almost all they have to poorer people; (b) simple versions of average utilitarianism say we should kill all the least happy people; (c) simple versions of total utilitarianism say we should prefer an enormous population of people with just-better-than-nothing lives to a normal-sized population of very happy people. I would expect solutions to (or bullet-biting on) these problems to deal with the more complicated but similarly counterintuitive conclusions presented here (assuming for the sake of argument that either my objections above are wrong or else the conclusions remain when the errors are repaired).

Comment author: ericyu3 07 April 2014 04:04:13PM *  0 points [-]
  1. I was unclear there - I'm finding the optimal wage at the optimal population level, not the maximum possible wage.
  2. Whoops, I meant 1-alpha. Fixed.
  3. Non-income factors are important, but I didn't consider them here because they're less obviously related to the population level.
  4. I was trying to say that even taking resource constraints into account, the critical income and the optimal income don't differ by that much compared to how much countries currently differ in income. Critical-level utilitarianism is supposed to be a "compromise" between total and average utilitarianism, but it would still yield strange conclusions in today's world.
Comment author: gjm 16 February 2014 01:48:08AM 0 points [-]

The two links you give to discussions of "the statistical value of a life" are discussing very different things. Thing One: An extrapolation (from infinitesimal changes to the change from p=0 to p=1) of the dollar-value to a given individual of their survival. Thing Two: An estimate of the dollar-value placed by society on a person's survival.

Thing One (which is what your VL is measuring) is inevitably going to be very sensitive to the person's wealth. Thing Two needn't be, and in fact isn't (most modern societies are willing to go to about as much trouble to save a poor person's life as a rich person's). I think the $6M figure you cite is more a Thing Two than a Thing One.

If we take your calculations at face value, here is what they tell us. We start with a broadly-plausible estimate that in some sense a life is worth about $6M. We suppose that a "typical" life corresponds to an income of about $50k/year. We do some calculations. And we arrive at the conclusion that the life of a very poor person -- someone whose income is your y0 of $300/year -- is worth something on the order of $250. (!!!)

First reaction: This is a reductio ad absurdum: something must be desperately wrong here. Second reaction: Well, maybe not so much; this is not really about assigning different values to rich and poor people's lives, but about how they, in their very different financial situations, convert between utility and money. Third reaction: No, wait, this really is about assigning different values to these people's lives; in particular there is an income level (not very far from y0, in this particular model) at which the utility reaches zero, and no talk of conversion factors will change that.

So I think you either need to bite the bullet and say that very poor people's lives aren't worth saving, or reconsider some assumptions. (Fiddling with the details of the utility function, etc., as in your closing comments, might move the value assigned to a life-at-income-y0 from, say, $250 to, say, $5k, which -- taken as an indication of how desperately important money is to someone so poor, rather than of the absolute value of their life -- is at least semi-reasonable. But it won't do anything to change the fact that someone sufficiently poor will get zero or negative utility.)

The assumption I would suggest revisiting is the one that says, roughly, that death is like merely not-having-lived in terms of utility.

It seems to me entirely possible, and in fact probably right, that (1) quite a lot of people's lives are bad enough that if we were choosing, godlike, between two possible worlds that differ simply in the addition or subtraction of some of those lives, we could reasonably prefer there to be fewer rather than more of them, but also that (2) once one of those lives is there, ending it is a very bad thing. A life just barely bad enough that the person living it considers death an improvement is probably quite a lot worse than a life just barely bad enough that adding another to the world is neutral.

(Of course quality of life isn't the same thing as income, but that's just a matter of the toy model being used here.)

So this would leave us with the following state of affairs: The life of a rather miserably-off person (for which very low income is a kinda-passable proxy) is bad enough that having more such lives in the world doesn't, as such, improve the world. (So they would have U=0 or even U<0.) But, once that life is there, taking it away or failing to save it is still a very bad thing (because of that person's preferences, and the impact on other people). That seems fair enough. But at this point it's worth noting that those value-of-life estimates are all concerned with the value of saving the life, rather than that of having it exist in the first place. Which probably means that there's still something wrong with the calculation.

It's nearly 2am local time so I'll leave my thoughts in that rather half-baked form.

Comment author: ericyu3 16 February 2014 03:19:18AM 0 points [-]

Thanks for posting such a detailed response!

It didn’t occur to me to distinguish between Thing One and Thing Two, and you’re right that they’re qualitatively quite different, but it shouldn’t make too much of a difference quantitatively. This is because the Thing Two number is basically derived from Thing One estimates, except that everyone is assumed to have the same value-of-life as a “representative” person. Thing One studies do produce values in the range of $6M.

someone sufficiently poor will get zero or negative utility

In reality, very poor people do try to stay alive, so any model that assigns them negative utility is incorrect - it's a good sanity check to verify that this isn't the case, and the model I gave in the post suffers from this problem. However, a model where utility becomes negative at sufficiently low incomes is not necessarily incorrect! Since there's a minimum income required for survival (actually a minimum consumption level, since other people can give you free stuff, but I'll ignore the distinction since this is a toy model), very few of the observed poor people will have income smaller than that, since they would quickly die. As long as the zero-utility income level is well below this survival threshold, the model is consistent with the fact that very poor people don't want to die.
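A tiny consistency check of that argument (the survival threshold of $500 is my hypothetical number; y0 = $300 is from the toy model): if the zero-utility income sits below the survival threshold, every income we can actually observe yields positive utility.

```python
import math

y0 = 300.0        # zero-utility income (from the toy model)
survival = 500.0  # hypothetical minimum income needed to stay alive

for y in (500.0, 1000.0, 50000.0):  # only incomes >= survival are observed
    assert y >= survival
    print(y, math.log(y / y0))      # f(y) = log(y/y0) > 0 for all of these
```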

Comment author: gjm 15 February 2014 03:26:26PM *  2 points [-]

At least one of us is very confused about pretty much everything here.

Since von Neumann–Morgenstern utility functions are invariant under affine transformations, we can [...]

Not if you're serious about total utilitarianism, which needs to be able to add up utilities and therefore looks quite different (at least when the number of lives can vary) as the constant term varies.

[EDITED to add: The issues below are because the mathematical typesetting got messed up in a way that made + signs disappear; they are not real mistakes and the error has now been fixed in the original post.]

This condition ensures that s is the utility at the minimum income.

I must be misunderstanding. You've written U = p·s·f(y) where the condition in question is f(y0) = 0. But this implies U = p·s·0 = 0 when y = y0. So s is not the utility at minimum income. In fact it looks to me as if s is simply a scaling factor applied to utilities, and as such is perfectly arbitrary.

Then, a little later, you go from VL = s·f(y)/f'(y) to s = VL·f'(y) - f(y), but that's completely wrong; it should be s = VL·f'(y)/f(y), which in the log case says s = VL / (y·log(y/y0)); in the case y = y0 this just says that VL = 0 whatever s may be, which is not surprising since you deliberately chose to rescale your utilities to make it so ("we can define the utility of being dead as 0").

Comment author: ericyu3 15 February 2014 05:04:29PM 1 point [-]

Not if you're serious about total utilitarianism, which needs to be able to add up utilities and therefore looks quite different (at least when the number of lives can vary) as the constant term varies.

Sorry, I was unclear. I meant that the constant term cannot be determined from empirical studies alone, since it doesn't affect decision-making. Estimates of the "value of life" compare the utility change from a small change in income to the utility change from a small change in survival probability, and the point of my post was to extrapolate these to large changes (creating a new person at a very low income level).

The conclusions are unchanged when the utility of death is nonzero, as long as you only look at "changes" in total utility (and not total utility itself, which will be infinite). For example, if the utility of death is fixed at 1 and your utility is fixed at 2, then creating a copy of you would make total utility "2+2+[lots of others]+1+1+1+..." instead of "2+1+[lots of others]+1+1+1+..." and total utility would increase from infinity to infinity+1. Obviously this is ill-defined mathematically (which is why I set death to 0), but you can see that it still makes sense to talk about utility changes.

[math mistakes]

When Vladimir_Nesov changed the images, the plus signs weren't URL-encoded, so they all disappeared. It's supposed to be U = p*(s + f(y)) and VL = (s + f(y))/f'(y).
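For reference, spelling out the corrected formulas (my reconstruction, using the log form f(y) = log(y/y0) discussed above):

```latex
U = p\,\bigl(s + f(y)\bigr), \qquad
V_L = \frac{s + f(y)}{f'(y)}
\quad\Longrightarrow\quad
V_L = y\,\bigl(s + \ln(y/y_0)\bigr) \text{ when } f(y) = \ln(y/y_0).
```

In particular, at y = y0 this gives VL = s·y0 rather than 0, so the objection about rescaling above was indeed an artifact of the missing plus signs.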

Comment author: Gunnar_Zarncke 08 November 2013 04:26:57PM 0 points [-]

I'm not entirely sure whether you

  • propose QALYs as a means to optimize a total utilitarian's goals, or
  • discuss what you as a total utilitarian could best do to optimize this goal, or
  • something else related to both.

I assume the former because you wrote

I chose them because I wanted the social welfare function to be relatively easy to calculate.

In general, if you choose an oversimplified scheme to optimize for, you will not get what you want. What gets measured gets optimized.

The following quotes are from Wikipedia: http://en.wikipedia.org/wiki/Quality-adjusted_life_year

QALYs are empirically known to be oversimplified - more a theoretical economist's tool for identifying general optimization potential than a precise instrument.

The four theoretical assumptions underlying QALYs are invalid (quality of life should be measured in consistent intervals; life years and QOL should be independent; people should be neutral about risk; and willingness to sacrifice life years should be constant over time).

They are neither recommended for individual health care decisions, where they

place[s] disproportionate importance on physical pain or disability over mental health. The effects of a patient's health on the quality of life of others [..] do not figure into these calculations.

nor for the population as an aggregate, where

the weight assigned to a particular condition can vary greatly, depending on the population being surveyed.

Also, if you want to use it as a tool to personally rate some course of action, you should consider that

those who do not suffer from the affliction in question will, on average, overestimate the detrimental effect on quality of life, compared to those who are afflicted.

So I propose that you choose a more elaborate tool set if you want to optimize a complex goal. Otherwise you fall into the same trap you want to avoid in UFAI: over-optimizing overly simple goals.

Comment author: ericyu3 08 November 2013 11:30:28PM *  1 point [-]

I wanted a concrete discussion about how a total utilitarian (TU) should act, not one about what exactly their utility function should be. I think total QALYs are at least a better approximation of a TU social welfare function than other simple social welfare functions (life expectancy, GDP per capita, education, reported happiness, etc.), since those are all average measures. For all of these except happiness, you can construct a "total" version:

  • Life expectancy becomes total years lived,
  • GDP per capita becomes total GDP, and
  • Average education level becomes total years of education.

If you don't like how ambiguous QALYs are, you can use total years lived (QALYs without the quality adjustment) or total GDP as social welfare functions (although total GDP seems suspect because a TU might prefer two people living on, say, $10000 a year to one person living on $50000). The total number of adult years lived would also be a reasonable metric.

Basically, since the implied social welfare functions of most donors and charities seem very far from any reasonable TU social welfare function, even fairly oversimplified metrics can be much better than the status quo from a TU's perspective. In general, an effective altruist with unusual values has to worry less about oversimplifying, since even a crude social welfare function can be (from their perspective) much better than what people currently do.
