> Under these assumptions we can observe that the metric by which our decision is measured changes in relation to the decision we make!
I can't quite tell, but it seems like you might be heading towards a "population-relative betterness" approach of the sort put forward by Dasgupta (1994). He ends up advocating a two-stage decision procedure. In stage one, you divide the options into sets that have the same population (which ensures that they are comparable, or "in the same metric" in your terms), and identify the best option in each set. In stage two, you decide between the remaining options according to what is best for (or in the metric of) the current decision-making population.
Although it is not without problems, I am sympathetic to this approach. Broome seems to be too, but ends up arguing that it doesn't quite work. Parts of the argument can be found in his 1996 article "The Welfare Economics of Population", but he expands on these ideas (and presents an alternative view) in more detail in Weighing Lives.
I can't possibly do Broome's argument justice (not least because I may be remembering it incorrectly); but part of the argument is that there is in fact a more "universal" metric that allows us to compare the value of non-existence to existence-with-a-given-level-of-utility (thus denying your statement above). Very roughly, Broome argues that rather than it not being possible to compare existence to non-existence, such comparisons are vague.
What emerges from this is something like a critical-level utilitarianism, where people should be brought into existence if they will have utility above a certain level. (As has already been alluded to, total utilitarianism and average utilitarianism are special cases of this where the critical level is set to zero or the current average respectively. But as this makes clear, these reflect only a tiny part of the possible space of approaches.)
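Roughly, in code (a minimal sketch; the function name and welfare numbers are mine, not from the literature):

```python
# Critical-level utilitarianism: each life counts as its welfare minus
# a critical level c. Welfare values here are invented for illustration.

def critical_level_value(welfares, c):
    return sum(w - c for w in welfares)

population = [5, 7, 3]

# c = 0 recovers total utilitarianism:
assert critical_level_value(population, 0) == sum(population)

# Setting c to the current average makes adding a below-average life
# (welfare 4 against an average of 5) count as a net loss, echoing the
# average view's verdict on that marginal addition:
avg = sum(population) / len(population)
gain = critical_level_value(population + [4], avg) - critical_level_value(population, avg)
print(gain)  # -1.0
```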
Refs:
- Dasgupta, P. (1994). "Savings and Fertility: Ethical Issues." Philosophy & Public Affairs 23(2): 99–127.
- Broome, J. (1996). "The Welfare Economics of Population." Economics and Philosophy 12(2): 177–190.
- Broome, J. (2004). Weighing Lives. Oxford: Oxford University Press.
ETA: Posted this before I saw your actual proposal. It's now clear this wasn't quite where you were headed. I'd still be interested to see what you think of it though.
This is exactly the sort of thing I'm interested to find, thanks very much for pointing it out! I'll pick up a copy of Weighing Lives next week.
I posted my idea for a normative rule, and indeed it is similar, though it seems to work in reverse. I'm also seeing issues with what I imagine Dasgupta's idea to be, issues my strategy doesn't have, but I can't say more till I get a chance to read the arguments and counterarguments.
Let me know if you find any major intuitions that you think it runs afoul of. If I remember correctly Broome's take was essentially that it gives the right results, but that the rationale underlying it is suspect.
I was eventually seduced into reading the whole thing after some skimming, but it was a close thing for a while. I think this piece could be shorter (move material to footnotes if you can't cut anything), even though the writing is solid already.
I like your approach.
Vulnerable points:
1) What it means to "average" or "sum" the utility functions of individuals needs definition. The way in which individual utilities are made comparable isn't obvious to me. I feel like we either have to declare some popular components of utility as the basis for normalization (define some common ground), or provide a framework where individuals can consciously elect any utility function (we believe what they say, because we have no perfect lie-detector) while providing a combination-of-utilities that can't be gamed by lying (this is probably impossible).
2) "average utilitarianism is perhaps interpretable as using the metric in which the new people are not realized" - as you know, it's only like that when the new people will have the same happiness on average. But it's certainly more like what you say, when compared to total, than not.
3) Reasoning about recently killed people based on your instructions does seem to require care or at least hand-waving :)
4) I think you're saying that if we currently expect to have a certain demographic of extant humans N years in the future, then we should weigh what we expect their utility to be in our decisions now (with some discount, considering them equally with living people). I guess you'd say that this should change my decisions (or at least my vote in our joint utility-maximization) even if I don't expect to personally reproduce. But if we decide to embark on a course that will change that demographic (e.g. measures that decrease birth rate as a side effect), then we no longer need to consider any utility for the now-not-expected-to-exist population. This actually makes sense to me, in a "you break it, you buy it" sort of way.
4a) Assuming I understand you right on 4, I feel (with no underlying formal justification) that if e.g. the Amish decide to reproduce such that we expect them to be half the population in 100 years, then the expected personal utility of that half of the population should be weighed at less than half of the 100 years from now population (i.e. less valuable per capita). This may just be my selfish genes (or anti-Amish bias!) speaking.
5) How is what you advocate not just average utilitarianism?
Thanks for the helpful feedback!
1) Yes, how we measure utility is always an issue. Most papers I've read don't address it, working off the arguably fair assumption that there is such a thing as greater and lesser utility, and that anything in real life is just an approximation, but you can still shoot for the former. Ideally we would just ask trustworthy people how happy or unhappy they are, or something similar. In practice, and for prescribing behavior, I think we use the popular-components approach, assuming most people like food and hate being tortured.
2) I'm slightly confused by this. Are you talking about bringing a large group of people into existence, with varying utilities? For simplicity I was discussing ideal theoretical cases, such as one child, or yes a new population all of roughly the same utility.
4) Yes that's essentially my point, though I haven't (I think) yet suggested how realization of these "decision-changed metrics" alters our decisions about potential people. But perhaps you meant simply that the wellbeing of someone who will exist should affect what we do.
4a) I would say that we should treat all people's utility equally once they come into being, which I think agrees with what you said. The last line about anti-Amish bias seems to run counter to that idea however.
5) Before a normative rule came to me, I was going to end this post with "lacking any prescriptive power however, we might default to total or average utilitarianism". Regardless, I've tried to keep this post merely descriptive. Though the rule I came up with is similar to average utilitarianism in ways, average utilitarianism has consequences as well that I'm not happy with. For example, if there were 20 people with extremely high utility and 1000 with utility half that but still very good lives, then as long as those 20 people didn't mind the slaughter of the thousand, the average approach seems to advocate killing the 1000 to bring up the average.
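To make the example concrete (the numbers here are invented):

```python
# 20 people at utility 100, 1000 people at utility 50 (good lives,
# half the elite's level).
elite = [100] * 20
masses = [50] * 1000

before = sum(elite + masses) / (len(elite) + len(masses))
after = sum(elite) / len(elite)
print(before, after)  # ~50.98 vs 100.0: the slaughter nearly doubles the average
```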
1) I wish we could do better.
2) I'm just agreeing that the "perhaps interpretable" is "definitely not the same as, except under certain assumptions", which you were well aware of.
4a) I had one too many negatives (bad edit). I was indeed making an anti-Amish suggestion. That is, to the extent that some group of people are committed to a massive future population, those that are personally intending to bring about a lower population level shouldn't necessarily be constrained in their decision making in favor of the profligate reproducers' spawn.
5) Please do continue with another post, then.
It seems odd to me to value the utility of the new Amish masses less than others', as no one is allowed to choose why they were brought into existence, or if. If we maintain a belief in an essential equality of moral worth between people, I think we would be constrained by the reproducers' offspring. Of course, I may not like that, but that's an issue to be brought up with the current Amish-spawners.
That's a reasonable suggestion. I certainly haven't complained about the teeming Amish masses before, so if I really care, I ought to first try to exert some influence now.
> What it means to "average" or "sum" the utility functions of individuals needs definition.
meh. That's gone over well enough in the literature.
> I don't think that if e.g. the Amish decide to reproduce such that we expect them to be half the population in 100 years, then the expected personal utility of that half of the population should be weighed at less than half of the 100 years from now population (i.e. less valuable per capita). This may just be my selfish genes (or anti-Amish bias!) speaking.
This didn't make sense to me. Did you forget that you were making a negative statement?
> That's gone over well enough in the literature.
It's hard to have a sensible conversation about it without the definition, though.
> This didn't make sense to me. Did you forget that you were making a negative statement?
Yep, Franken-edit. I've removed the extra negative for posterity.
Why are there only two types of utilitarianism mentioned? How about average-and-standard-deviation utilitarianism? Or even more mathematically sophisticated formulas (which I don't know how to make) that would classify whether people bunch or spread over the utility histogram and whether they bunch in one or multiple peaks?
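One crude instance of such a formula (my guess at what this might look like, not a proposal from the literature):

```python
from statistics import mean, stdev

# Penalize spread in the utility histogram; lam is an arbitrary
# inequality-aversion weight.
def mean_minus_spread(utilities, lam=0.5):
    return mean(utilities) - lam * stdev(utilities)

print(mean_minus_spread([5, 5, 5, 5]))    # 5.0: tight bunching, no penalty
print(mean_minus_spread([0, 10, 0, 10]))  # ~2.11: same mean, spread penalized
```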
No mention of anti-natalism? One of my favorite bloggers has had a series of essays on the topic. I forget if it ever finished, but you can find the first four from here. David Benatar has a book on it from a utilitarian perspective, which is discussed among others at the Hoover Hog. There are other blogs completely dedicated to anti-natalism.
Interesting, I hadn't heard of that. It does seem to require, however, that you think you yourself would have been better off not being born, or at least that most people would have been. I'm personally extremely happy that I was born, and I think in a hypothetical future utopia, most people will be as well.
Total utilitarianism feels completely ridiculous to me.
For example, the existence of slaves (assuming they're not abused too much, just a garden-variety life of forced labour) must have non-negative utility to them - as indicated by their not committing mass suicide - and definitely positive utility to their owners - so total utility goes up with every slave, as opposed to them not existing.
So a total utilitarian should definitely support the creation of a new slave underclass - let's say by changing abortion laws to make abortion illegal, but adding an option to sell into slavery the children that would otherwise have been aborted. Isn't slavery much better than not existing due to getting aborted? Or we could even pay women to get pregnant and make more children to populate the new underclass - these people wouldn't even reach the embryo stage otherwise (the argument is not really related to abortion issues, we could have human cloning facilities etc., but this way is less scifi and more historically precedented).
Total utilitarianism is full of ridiculous consequences like that.
Average utilitarianism isn't - in modern civilization people benefit from the existence of other people, so unexisting the unhappy ones would bring down the utility of the happy ones. On the other hand, in Malthusian-ish environments it makes perfect sense to unexist people, as the utility externalities of a new person coming into existence are significantly negative. Both cases agree with intuitions we might have.
> Average utilitarianism isn't - in modern civilization people benefit from the existence of other people, so unexisting the unhappy ones would bring down the utility of the happy ones.
But there are slaves now. Given your objection to total utilitarianism, shouldn't you then advocate killing them all, as an average utilitarian? Would this really decrease the utility of most people, if most people never hear about it?
Killing and unexisting are different. If promoting birth control would somehow magically ensure the people with worst lives wouldn't be born, then average utilitarianism says we should be doing it.
Or, as a simple proxy: promote birth control in the poorest countries, which cannot cope with the number of people they have now, and promote responsibly larger families (not "as many kids as possible") in countries that can take more people and make them productive members of modern civilization.
Near the end of the post, FrankAdamek notes the difference between killing someone and never bringing them into existence.
Killing current slaves is unacceptable to an average utilitarian. It would be acceptable to somehow ensure that no one is born into slavery in the future.
> Killing current slaves is unacceptable to an average utilitarian.
Why?
If what makes an action right is overall average utility, and killing people makes that go up, then how can it be unacceptable, all else being equal?
I don't think anyone holds a concept of average-utility maximizing that would allow you to simply kill everyone below average. Indeed, such a maxim would be self-defeating: if society killed everyone below average happiness every week, almost everyone would die after a year, and, knowing this, everyone would be much more miserable at the start of the year than they would be otherwise.
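A toy run of that death spiral (population size and happiness distribution are arbitrary):

```python
import random

# Each week, kill everyone whose happiness is below the current average.
population = [random.gauss(0, 1) for _ in range(1_000_000)]

weeks = 0
while len(population) > 1:
    avg = sum(population) / len(population)
    survivors = [h for h in population if h >= avg]
    if len(survivors) == len(population):  # everyone equal; nothing to cull
        break
    population = survivors
    weeks += 1

print(weeks, len(population))  # collapses to a single survivor within a few dozen weeks
```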
The average vs. total question is principally relevant to non-existent entities, i.e. those who have not been born. Existing persons already have utility and preferences, so they can't be brushed aside the way non-existent persons can, since the non-existent don't even not-care. For killing unhappy people to be justified, killing the living would basically have to generate no disutility; i.e. once they're dead, they're irrelevant, which would mean there's nothing wrong with murder beyond how it affects the survivors. I do not think this is a common view.
> For killing unhappy people to be justified, killing the living would basically have to generate no disutility
No, it would only have to generate less disutility than the victims were unhappy to start with. If everyone were an average utilitarian, and was overjoyed that we were raising the mean, this type of killing might even have positive externalities.
I think this suggests that total utilitarianism is a better system: the Repugnant Conclusion is a far-off danger, whereas if we adopted average utilitarianism, we would be in immediate danger of massacres.
Of course, an alternative ethical system may be better still.
In general, with all the discussion here of average vs. total utilitarianism: in my perception both are well-meaning, generally great solutions, but both have their oddities, which come from applying the same mathematical measurement to both situations. Most of this discussion seems to be people arguing over which oddity they prefer to accept, how you can discount those oddities, etc. But in both cases it requires something more than the fundamental rule, saying "Yes, let's consider the average, except when that means killing." That works for a given person, but it seems more like patching up theories that don't quite fit than using a theory that doesn't require a patch at all.
> The error here is that Broome passes over to another metric without seeming to notice. From the situation where she lives and enjoys life, it would be worse for her to have never lived. That is, now that she can consider anything, she can consider a world in which she does not exist as less preferable. In the situation in which she never lived and can consider nothing, she cannot consider it worse that she never lived.
FWIW, I think this misinterprets Broome's argument.
As I understand it, the argument is not that she cannot prefer anything if she does not exist. The argument is that if she does not exist, her well-being is undefined - and that it is consequently impossible to compare this undefined well-being to whatever well-being she may have if she does exist. The latter point does not depend on whether we view things from the perspective where she lives and enjoys life or not.
I agree, since even the woman herself, once she exists, cannot say that it would have been worse for her never to have existed, as her wellbeing in that case would be undefined. These comments are very useful for helping me refine my language. What I mean to say is that once she exists, she can then be happy or grateful that she exists, but she could have no opinion had she never existed.
BTW, everything past "The error here is that Broome passes over to another metric without seeming to notice" is a statement of the view I'm arguing for, not a paraphrasing of Broome's argument. That's not nearly as clear as it could be however.
> I suggest that we apply the metric of whichever situation comes to be.
I may be misunderstanding you, but the whole reason this mini-branch of ethical thought exists is for the purpose of evaluating future worlds in which the situation could go either way. That makes this solution less than satisfactory, to say the least.
I would say, more specifically, that the whole reason this mini-branch of ethics exists is for the purpose of determining what we ought to do, and that evaluating the future worlds is just an (intuitive) approach to accomplishing that (and one I think has flaws).
One of the main arguments of my post is that it is impossible to have a single coherent evaluation of some situations, which is why I haven't proposed one. If you can find an error in my argument that there are two potentially conflicting evaluations, please discuss that specifically.
> One of the main arguments of my post is that it is impossible to have a single coherent evaluation
The existence of two conflicting evaluations !=> the nonexistence of a single coherent evaluation.
Is your argument actually intended to prove the impossibility of a single coherent evaluation (in which case I think I've missed it) or merely an argument that e.g. Broome's approach does not constitute one?
Hmm... I have not set out to do something so rigorous as proving that no single coherent evaluation exists. My argument is just that I have yet to see one.
> The existence of two conflicting evaluations !=> the nonexistence of a single coherent evaluation.
Could you say more on this point? I don't see how it holds, unless we have a rational way of discounting at least one conflicting evaluation, or otherwise prioritizing another.
> My argument is just that I have yet to see one.
OK, cool. Just checking. ;)
As to the second issue, I think you've probably already conceded this. My point is really just that there's no reason the "proper" evaluative standard has to have anything to do with either of the conflicting ones. It could be something else entirely. For example, I think my comment here explains how one might parse the Broome passage in a way that allows a single coherent evaluation, despite the existence of the two conflicting perspectives you set out.
More generally, say we have a choice between two distributions of stuff: x = (2,8) and y = (8,3), where the first number is what I get, and the second number is what you get. From your perspective, x is best, while from my perspective y is best, so we have two conflicting evaluations. Nonetheless, most people would accept as coherent an evaluative perspective that prefers the option with the largest sum (which in this case favours y).
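In code form (a trivial sketch of the same point):

```python
x = (2, 8)  # (my share, your share)
y = (8, 3)

# Your evaluation and mine conflict, yet the sum-based evaluation
# is still a single coherent one:
print(max([x, y], key=sum))  # (8, 3), since 8 + 3 > 2 + 8
```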
What point are you trying to make?
I think some of our confusion over these issues stems from our reluctance to admit how many of us have negative utility. If you suppose that average utility is zero, the practical problems with both total utilitarianism and average utilitarianism go away: It isn't ethical to replace the current population with a larger group of less-happy people, because that would decrease total utility; and it is unethical to create more, less-happy people, but that's not a problem, because they would have negative utility and everybody will agree it's bad to create negative utility.
It is reasonable to suppose that average utility is zero, because "utility" has to do with our happiness, and humans evolved so that happiness is a function of the rate at which our degree of satisfaction of our goals is changing, rather than a function of how many of our goals are satisfied. This means that, unless things are getting better or worse for you on average over your entire life, your average happiness, and hence utility, will be near zero.
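To spell out the math behind this (my formalization, not the commenter's): if happiness is the rate of change of satisfaction, h(t) = s'(t), then lifetime average happiness is

```latex
\frac{1}{T}\int_0^T h(t)\,dt = \frac{s(T) - s(0)}{T}
```

which is near zero unless total satisfaction trends up or down over the whole life.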
That's true to an extent, but I think there are workarounds, certain applications of Buddhism being one example.
A different way to phrase what I meant is: would those new people be grateful they were created, would they wish they hadn't been born, or would they feel neutral about the matter?
We can't even remotely predict how our actions will affect people in the far future, so these types of situations don't seem like much of a problem, other than for people playing intellectual games.
The distinctions between total and average utilitarianism disappear when you realistically ask the question "What do I have the most reason to do or want?" and note that you probably won't be successful in weighing anything other than the immediate outcome of your actions.
There might be rare exceptions for those making a big decision like whether to launch nuclear missiles.
Yes in general this is a fairly esoteric question. I had a very specific reason for considering it however, which I'll share with you.
What percentage of the 6.7 billion people on earth would it be moral to kill, say in a demonstration of the possibility of existential risk, in order to someday realize the eventual existence of 10^23 lives in the Virgo Supercluster?
If we consider the creation of new people with positive utility a moral imperative, it would seem that killing any number of today's people, even over 6 billion, would be justified to even marginally increase the chances of creating a trillion year galactic civilization. This doesn't make sense to me, which is why I was looking into the issue.
> If we consider the creation of new people with positive utility a moral imperative, it would seem that killing any number of today's people, even over 6 billion, would be justified to even marginally increase the chances of creating a trillion year galactic civilization. This doesn't make sense to me, which is why I was looking into the issue.
If you want to retain total utilitarianism but don't want this result, you can always do what economists do and apply discounting. The justification being that people seem to discount future utility somewhat relative to present utility, and that not discounting leads to perverse results. If you use a discount rate of say 2%* per year, then the utility of 10^23 people in 2500 years is equal to the utility of around 32 people today (10^23/1.02^2500 ≈ 31.59). Of course, if you think that the trillion year galactic civilization is just around the corner, or that the people then will have much higher utility than current people do, that changes things somewhat.
*I picked that rate because I think it is about what was used in the Stern Review
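Reproducing that arithmetic (rate and horizon taken from the comment above):

```python
# Present-day equivalent of 10^23 lives 2500 years out, discounted
# at 2% per year.
future_lives = 10**23
rate, years = 0.02, 2500

print(future_lives / (1 + rate) ** years)  # ~31.6, i.e. about 32 people today
```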
Ah. These problems go away if you accept that humanity is stuck on Earth and doomed. Or if you aren't a utilitarian.
In connection to existential risk and the utility of bringing future people into being as compared with the utility of protecting those currently alive, I’ve been looking into the issues and paradoxes present in the ethics of potential persons. This has led to an observation that I can find no record of anyone else making, which may help explain why those issues and paradoxes arise. For some time all I had was the observation, but a few days ago an actual prescriptive rule came together. This got long however so for the sake of readers I’ll make a post about the normative rule later.
A dichotomy in utilitarianism exists between total utilitarianism and average utilitarianism, one suggesting that the greatest good comes from the highest total sum of utility, the other that it comes from the highest utility per capita. These can come to a head when discussing potential persons, as the total view holds we are obligated to bring new people into existence if they will have worthwhile lives and won't detract from others' wellbeing, while the average view suggests that it is perfectly acceptable not to.
Both the total and average utilitarian views have surprising implications. Default total utilitarianism leads to what Derek Parfit and others call "The Repugnant Conclusion": For any population in which people enjoy very high welfare there is an outcome in which [a much larger group of] people enjoy very low welfare which is preferable, all other things being equal. On the other hand, average utilitarianism suggests that in a population of individuals possessed of very high utility it would be unethical to bring another person into being if they enjoyed positive but less-than-average utility. There are attempts to resolve these oddities, which I will not explain here; from my reading, few professional philosophers or ethicists are fully satisfied with [any such attempt](http://plato.stanford.edu/entries/repugnant-conclusion/#EigWayDeaRepCon) (without rejecting one of the views of utilitarianism).
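For concreteness, here are the two maximands with invented welfare numbers showing where they diverge (a minimal sketch, not part of the philosophical argument):

```python
def total(welfares):
    return sum(welfares)

def average(welfares):
    return sum(welfares) / len(welfares)

small_happy = [100] * 1_000       # small population, very high welfare
vast_meagre = [1] * 1_000_000     # vast population, barely-positive welfare

print(total(small_happy) < total(vast_meagre))      # True: total view prefers the vast world
print(average(small_happy) > average(vast_meagre))  # True: average view prefers the small one
```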
To explain my observation I will make the assumptions that an ethical decision should be measured with reference to the people or beings it affects, and that actions do not affect nonexistent entities (assumptions which seem relatively widespread and which I hope are considered reasonable). Assuming a negligible discount rate, if a decision affects our neighbors now or our descendants a thousand years hence, we should include its effect upon them when deciding whether to take that action. It is when we consider actions that bring people into existence that the difficulty presents itself. If we choose to bring into existence a population possessed of positive welfare, we should consider our effect upon that then-existing population (a positive experience). If we choose not to bring into existence that population, we should judge this action only with regard to how it affects the people existing in that world, which does not include the unrealized people (assuming that we can even refer to an unrealized person). Under these assumptions we can observe that the metric by which our decision is measured changes in relation to the decision we make!
By analogy assume you are considering organizing a local swim meet in which you also plan to compete, and at which there will be a panel of judges to score diving. Will you receive a higher score from the panel of judges if you call together the swim meet than if you do not? (To work as an analogy this requires that one considers “the panel” to only exist when serving as the panel, and not being merely the group of judges.)
Without making this observation, that the decision changes the metric by which the decision is measured, one will try to apply a single metric to both outcomes and find oneself mired in surprising implications and confusing statements. In his paper "The Person Affecting Restriction, Comparativism, and the Moral Status of Potential People" (http://people.su.se/~guarr/), Gustaf Arrhenius quotes John Broome as saying:
“…[I]t cannot ever be true that it is better for a person that she lives than that she should never have lived at all. If it were better for a person that she lives than that she should never have lived at all, then if she had never lived at all, that would have been worse for her than if she had lived. But if she had never lived at all, there would have been no her for it to be worse for, so it could not have been worse for her.” (My apologies for not yet having time to read Broome’s work itself; I spend all my time attempting to prevent existential disaster, and other activities seemed more pressing. Not reading Broome’s work may well be a fault I should correct, but it wasn’t sacrificed in order to watch another episode of Weeds.)
The error here is that Broome passes over to another metric without seeming to notice. From the situation where she lives and enjoys life, it would be worse for her to have never lived. That is, now that she can consider anything, she can consider a world in which she does not exist as less preferable. In the situation in which she never lived and can consider nothing, she cannot consider it worse that she never lived. When we change from considering one situation to the other, our metric changes along with the situation.
Likewise Arrhenius fails to make this observation, and approaches the situation with the strategy of comparing uniquely realizable people (who would be brought into existence by our actions) and non-uniquely realizable people. In two different populations with subpopulations that only exist in one population or the other, he correctly points out the difficulty of comparing the wellbeing of those subpopulations between the two situations. However, he then goes on to say that we cannot make any comparison of their wellbeing between the situations. A subtle point, but the difficulty lies not in there being no comparison of their wellbeing, but in there being too many: two conflicting comparisons, depending on whether they do or do not come to exist.
As long as the populations are a fixed, unchangeable size and our metric constant, both the total utilitarian view and the average utilitarian view are in agreement: maximizing the average and maximizing the total become one and the same. In this situation we may not even find reason to distinguish the two views. However in regards to the difficulty of potential persons and changing metrics, both views strive to apply a constant metric to both situations; total utilitarianism uses the metric of the situation in which new people are realized, and average utilitarianism is perhaps interpretable as using the metric in which the new people are not realized.
The seeming popularity of the total utilitarian view in regards to potential persons might be due to the fact that an application of that view increases utility by its own metric (happy realized people are happy they were realized), while an application of the metric of the situation in which people are unrealized creates no change in utility (unrealized people are neither happy nor unhappy [nor even neutral!] about not being realized). This gives the appearance of supporting total utilitarianism, since in a comparison between increased utility and effectively unchanged utility the increase seems preferable, but I am not convinced such a meta-comparison actually avoids applying one metric to both situations. Again, if we bring people of positive welfare into the world it is a preferable thing to have done so, but if we do not bring them into the world it causes no harm whatsoever not to have done so. My personal beliefs do not support the idea of unrealized people being unhappy about being unrealized, though we might note in the unrealized-people situation a decreased utility experienced by total utilitarians unhappy with the outcome.
I suggest that we apply the metric of whichever situation comes to be. One oddity of this is the seeming implication that once you’ve killed someone they no longer exist or care, and thus your action is not unethical. If we take a preference utilitarian view and also assume that you are alive at the time you are murdered, we can resolve this by pointing out that the act of murder frustrates your preferences and can be considered unethical, and that it is impossible to kill someone when they are already dead and have no preferences. In contrast if we choose to not realize a potential person, at no point did they develop preferences that we frustrated.
Regardless, merely valuing the situation from the metric of the situation that comes to be tells us nothing about which situation we ought to bring about. As I mentioned previously I now have an idea for a potential rule, but that will follow in a separate post.
(A second, distinct argument for the difficulty or impossibility of making a fully sensible prescription in the case of future persons is presented in Narveson, J. "Utilitarianism and New Generations." Mind 76 (1967): 62–72, if you can manage to track it down. I had to get it from my campus library.)
(ETA: I've now posted my suggestion for a normative rule.)