In connection with existential risk and the question of how the utility of bringing future people into being compares with the utility of protecting those currently alive, I’ve been looking into the issues and paradoxes in the ethics of potential persons. This has led to an observation I can find no record of anyone else making, which may help explain why those issues and paradoxes arise. For some time all I had was the observation, but a few days ago an actual prescriptive rule came together. This post got long, however, so for the sake of readers I’ll present the normative rule in a later post.
Utilitarianism contains a dichotomy between total utilitarianism and average utilitarianism: one holds that the greatest good comes from the highest total sum of utility, the other that it comes from the highest utility per capita. The two come to a head when discussing potential persons, as the total view holds that we are obligated to bring new people into existence if they will have worthwhile lives and won’t detract from others’ wellbeing, while the average view suggests that it is perfectly acceptable not to.
Both the total and average utilitarian views have surprising implications. Default total utilitarianism leads to what Derek Parfit and others call “The Repugnant Conclusion”: for any population in which people enjoy very high welfare, there is a preferable outcome, all other things being equal, in which a much larger group of people enjoy very low (though still positive) welfare. Average utilitarianism, on the other hand, implies that in a population of individuals possessed of very high utility it would be unethical to bring another person into being if that person would enjoy positive but below-average utility. There have been attempts to resolve these oddities, which I won’t explain here; from my reading, few professional philosophers or ethicists are fully satisfied with [any such attempt](http://plato.stanford.edu/entries/repugnant-conclusion/#EigWayDeaRepCon) (short of rejecting one of the two views of utilitarianism).
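The divergence between the two views, and the arithmetic behind the Repugnant Conclusion, can be shown with a minimal sketch. The population sizes and welfare numbers here are purely illustrative assumptions, not anything from Parfit:

```python
# Illustrative populations: each list entry is one person's welfare level.
population_a = [100] * 10    # 10 people at very high welfare
population_b = [1] * 2000    # 2000 people with lives barely worth living

def total(p):
    """Total utilitarian score: sum of all welfare."""
    return sum(p)

def average(p):
    """Average utilitarian score: welfare per capita."""
    return sum(p) / len(p)

# The total view ranks B above A (2000 > 1000): the Repugnant Conclusion.
print(total(population_a), total(population_b))      # 1000 2000

# The average view ranks A above B (100.0 > 1.0).
print(average(population_a), average(population_b))  # 100.0 1.0
```

With a fixed high-welfare population, one can always construct a large enough low-welfare population that the total view prefers, which is exactly the structure of Parfit's result.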
To explain my observation I will assume that an ethical decision should be measured with reference to the people or beings it affects, and that actions do not affect nonexistent entities (assumptions which seem relatively widespread and which I hope are considered reasonable). Assuming a negligible discount rate, if a decision affects our neighbors now or our descendants a thousand years hence, we should include its effect upon them when deciding whether to take that action. It is when we consider actions that bring people into existence that the difficulty presents itself. If we choose to bring into existence a population possessed of positive welfare, we should consider our effect upon that then-existing population (a positive effect). If we choose not to bring that population into existence, we should judge this action only by how it affects the people existing in that world, which does not include the unrealized people (assuming that we can even refer to an unrealized person). Under these assumptions we can observe that the metric by which our decision is measured changes depending on the decision we make!
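The observation can be made concrete in a small sketch: the set of people whose welfare counts is itself a function of the decision taken, so the two outcomes are scored by different metrics. The names and welfare numbers below are hypothetical:

```python
# The evaluation metric (whose welfare counts) depends on the decision.
existing = {"alice": 5, "bob": 5}   # welfare of currently existing people
potential = {"carol": 3}            # would exist only if we choose "create"

def affected(decision):
    """Return the people affected by the decision, in the world
    where that decision is taken."""
    world = dict(existing)
    if decision == "create":
        world.update(potential)     # Carol exists only in this world
    return world

# Each outcome is judged by a different set of affected people:
print(sorted(affected("create")))   # ['alice', 'bob', 'carol']
print(sorted(affected("refrain")))  # ['alice', 'bob']
```

There is no single population against which both options can be scored; whichever decision we take supplies its own yardstick.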
By analogy, assume you are considering organizing a local swim meet in which you also plan to compete, and at which there will be a panel of judges to score diving. Will you receive a higher score from the panel of judges if you call together the swim meet than if you do not? (For the analogy to work, one must consider “the panel” to exist only while it is serving as the panel, rather than being merely the group of judges.)
Without making the observation that the decision changes the metric by which the decision is measured, one will try to apply a single metric to both outcomes and run into surprising implications and confusing statements. In his paper “The Person Affecting Restriction, Comparativism, and the Moral Status of Potential People” (http://people.su.se/~guarr/), Gustaf Arrhenius quotes John Broome as saying:
“…[I]t cannot ever be true that it is better for a person that she lives than that she should never have lived at all. If it were better for a person that she lives than that she should never have lived at all, then if she had never lived at all, that would have been worse for her than if she had lived. But if she had never lived at all, there would have been no her for it to be worse for, so it could not have been worse for her.” (My apologies for not yet having had time to read Broome’s work itself; I spend all my time attempting to prevent existential disaster, and other activities seemed more pressing. Not reading Broome’s work may well be a fault I should correct, but it wasn’t sacrificed in order to watch another episode of Weeds.)
The error here is that Broome passes over to another metric without seeming to notice. From the situation where she lives and enjoys life, it would be worse for her to have never lived. That is, now that she can consider anything, she can consider a world in which she does not exist as less preferable. In the situation in which she never lived and can consider nothing, she cannot consider it worse that she never lived. When we change from considering one situation to the other, our metric changes along with the situation.
Likewise, Arrhenius fails to make this observation, and approaches the situation with the strategy of comparing uniquely realizable people (who would be brought into existence by our actions) and non-uniquely realizable people. For two different populations with subpopulations that exist in only one population or the other, he correctly points out the difficulty of comparing the wellbeing of those subpopulations across the two situations. However, he then goes on to say that we cannot make any comparison of their wellbeing between the situations. It is a subtle point, but the difficulty lies not in there being no comparison of their wellbeing, but in there being too many comparisons of their wellbeing: two conflicting comparisons, depending on whether they do or do not come to exist.
As long as the population is of a fixed, unchangeable size and our metric constant, the total utilitarian view and the average utilitarian view are in agreement: maximizing the average and maximizing the total become one and the same. In that situation we may not even find reason to distinguish the two views. With regard to the difficulty of potential persons and changing metrics, however, both views strive to apply a constant metric to both situations: total utilitarianism uses the metric of the situation in which the new people are realized, and average utilitarianism is perhaps interpretable as using the metric of the situation in which they are not.
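The fixed-population equivalence is easy to verify: with population size held constant at n, total utility is just n times average utility, so the two views rank outcomes identically. The outcomes and welfare numbers below are illustrative assumptions:

```python
# With a fixed population of size n, total = n * average, so ranking
# outcomes by total and by average utility gives the same answer.
n = 4
outcomes = {
    "policy_x": [3, 4, 5, 6],   # total 18, average 4.5
    "policy_y": [2, 2, 9, 1],   # total 14, average 3.5
}

by_total = max(outcomes, key=lambda k: sum(outcomes[k]))
by_average = max(outcomes, key=lambda k: sum(outcomes[k]) / n)

print(by_total, by_average)     # policy_x policy_x
```

The views only come apart once the decision itself changes n, which is exactly the potential-persons case.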
The seeming popularity of the total utilitarian view with regard to potential persons might be due to the fact that applying that view increases utility by its own metric (happy realized people are happy they were realized), while applying the metric of the situation in which people are unrealized yields no change in utility (unrealized people are neither happy nor unhappy [nor even neutral!] about not being realized). This gives the appearance of favoring total utilitarianism, since in a comparison between increased utility and effectively unchanged utility the increase seems preferable, but I am not convinced such a meta-comparison actually avoids applying one metric to both situations. Again, if we bring people of positive welfare into the world it is a preferable thing to have done so, but if we do not bring them into the world, not having done so causes no harm whatsoever. My personal beliefs do not support the idea of unrealized people being unhappy about being unrealized, though we might note, in the unrealized-people situation, a decreased utility experienced by total utilitarians unhappy with the outcome.
I suggest that we apply the metric of whichever situation comes to be. One oddity of this is the seeming implication that once you’ve killed someone they no longer exist or care, and thus your action is not unethical. If we take a preference utilitarian view and also assume that you are alive at the time you are murdered, we can resolve this by pointing out that the act of murder frustrates your preferences and can be considered unethical, and that it is impossible to kill someone when they are already dead and have no preferences. In contrast if we choose to not realize a potential person, at no point did they develop preferences that we frustrated.
Regardless, merely valuing the situation from the metric of the situation that comes to be tells us nothing about which situation we ought to bring about. As I mentioned previously I now have an idea for a potential rule, but that will follow in a separate post.
(A second, distinct argument for the difficulty or impossibility of making a fully sensible prescription in the case of future persons is present in Narveson, J. “Utilitarianism and New Generations.” Mind 76 (1967): 62-72, if you can manage to track it down. I had to get it from my campus library.)
(ETA: I've now posted my suggestion for a normative rule.)
I can't quite tell, but it seems like you might be heading towards a "population-relative betterness" approach of the sort put forward by Dasgupta (1994). He ends up advocating a two-stage decision procedure. In stage one, you divide the options into sets that have the same population (which ensures that they are comparable, or "in the same metric" in your terms), and identify the best option in each set. In the second, you decide between the remaining options according to what is best for (or in the metric of) the current decision-making population.
Although it is not without problems, I am sympathetic to this approach. Broome seems to be too, but ends up arguing that it doesn't quite work. Parts of the argument can be found in his 1996 article "The Welfare Economics of Population", but he expands on these ideas (and presents an alternative view) in more detail in Weighing Lives.
I can't possibly do Broome's argument justice (not least because I may be remembering it incorrectly); but part of the argument is that there is in fact a more "universal" metric that allows us to compare the value of non-existence to existence-with-a-given-level-of-utility (thus denying your statement above). Very roughly, Broome argues that rather than it not being possible to compare existence to non-existence, such comparisons are vague.
What emerges from this is something like a critical-level utilitarianism, where people should be brought into existence if they will have utility above a certain level. (As has already been alluded to, total utilitarianism and average utilitarianism are special cases of this where the critical level is set to zero or the current average respectively. But as this makes clear, these reflect only a tiny part of the possible space of approaches.)
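The critical-level family described above can be sketched in a few lines: adding a person at welfare w contributes (w - c) to the value of the outcome, for some critical level c. The welfare numbers below are illustrative, not from Broome:

```python
# Critical-level view: the value of adding one person at welfare w,
# given critical level c. c = 0 recovers the total view's verdict;
# setting c to the current average approximates the average view's
# verdict on adding a single person.

def value_of_adding(w, c):
    """Contribution of a new person at welfare w under critical level c."""
    return w - c

current_population = [8, 10, 12]    # illustrative welfare levels
avg = sum(current_population) / len(current_population)   # 10.0

print(value_of_adding(6, 0))      # total view: +6, favors adding
print(value_of_adding(6, avg))    # average-anchored view: -4.0, opposes
```

Any c strictly between those two extremes gives an intermediate verdict, which is the sense in which total and average utilitarianism occupy only a small corner of the space of critical-level approaches.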
ETA: Posted this before I saw your actual proposal. It's now clear this wasn't quite where you were headed. I'd still be interested to see what you think of it though.
This is exactly the sort of thing I'm interested to find, thanks very much for pointing it out! I'll pick up a copy of Weighing Lives next week.
I posted my idea for a normative rule, and indeed it is similar, though it seems to work in reverse. I'm also seeing issues with what I imagine Dasgupta's idea to be that my strategy doesn't have, but I can't say more till I get a chance to read the arguments and counterarguments.