A couple more notes before I finally get started:
- The genre here is philosophy, and a common type of argument is the thought experiment: "If you had to choose between A and B, what would you choose?" (For example: "is it better to prevent one untimely death, or to allow 10 people to live who would otherwise never have been born?")
- It's common to react to questions like this with comments like "I don't really think that kind of choice comes up in real life; actually you can usually get both A and B if you do things right" or "actually A isn't possible; the underlying assumptions about how the world really works are off here." My general advice when considering philosophy is to avoid reactions like this and think about what you would do if you really had to make the choice that is being pointed at, even if you think the author's underlying assumptions about why the choice exists are wrong. Similarly, if you find one part of an argument unconvincing, I suggest pretending you accept it for the rest of the piece anyway, to see whether the rest of the arguments would be compelling under that assumption.
- I often give an example of how one could face a choice between A and B in real life, to make it easier to imagine - but it's not feasible to give this example in enough detail and with enough defense to make it seem realistic to all readers, without a big distraction from the topic at hand.
- Philosophy requires some amount of suspending disbelief, because the goal is to ask questions about (for example) what you value, while isolating them from questions about what you believe. (For more on how it can be useful to separate values and beliefs, see Bayesian Mindset.)
I agree with these notes, and liked this post, but I think there's an extra caution that needs to be mentioned about these notes.
I think that to do this kind of deliberation right, what one needs to do is something like:
I have a feeling that a lot of EA/rationalist thinking on this specific question does steps 1-3, but then doesn't really do the last step or leaves it implicit. This makes sense, since it's much easier to do the first three steps. For this question about population ethics, you can bring in relatively formal frameworks that let you do things like prove impossibility theorems, but then there's no similar formal tool that you could apply to answer the question of "so does this actually matter".
But I'm concerned that the end result is that people see a bunch of this rigorous reasoning and then conclude something like "the formal framework implies that extra lives are as good as saving lives, so EAs should prioritize creating lives just as much as saving lives", rather than concluding "the formal framework implies that extra lives are as good as saving lives, but we don't know how much weight to give this reasoning over other considerations, so we still mostly remain confused". (Your post does say this at the end, but I'm worried that a lot of people will still draw the former conclusion.)
As an intentionally silly analogy, suppose that someone asked me "is it better for there to be green objects rather than red objects in the universe". And suppose that I agreed to consider this question in the abstract, philosophy-style. Further suppose that there was psychological research saying that green tends to make people slightly calmer and relaxed, whereas red was slightly distressing. And this research was really, really convincing and rigorous, so that I could actually prove without a doubt that if you have to pick between green and red objects, it's better for people's well-being for there to be green objects.
Now it might be a valid argument that, other things being equal and if you have to choose between them, green objects are better. But even if that were true, it would obviously be silly to take this conclusion to imply anything about EA policy. EAs shouldn't make it a cause area to paint everything green rather than red. First, the difference in wellbeing just isn't big enough to care about, and second, this would conflict with a number of other considerations, such as it being more aesthetic (and thus more conducive to well-being) if you can use a variety of colors.
In the case of green vs. red, this seems relatively obvious, since we can apply something like a quantitative framework to the broader question of "should green over red therefore be EA policy". We can say that yes, green is maybe slightly better, but the net impact is very small, and also it would have these other effects that would produce an overall reduction in wellbeing. Even if we can't actually literally do the math, the overall magnitude of the effects seems obvious enough.
Whereas in some EA circles, there's a tendency to do the kind of analysis that your post does on the "creating vs. saving lives" question, and then conclude that this should be a major consideration for EA policy. (To be clear, I recognize that your post doesn't do that, and is suitably cautious about how much weight to put on these thoughts.) I think this might be committing a similar kind of error as analyzing green vs. red and then drawing substantial policy implications from it. The problem is just much harder to see, since we don't have a broader quantitative framework where we could formally ask questions like "what is the overall impact of these particular thought experiments on our general ethical considerations" in the same way as we can ask "what is the overall impact of favoring green over red on well-being in general". And since we don't have anything resembling such a framework, it's easy to not even notice that it's missing.
I didn't see the discussion I expected about the question in the title, so I shall provide it:
whether “extra lives lived” are as good as “deaths prevented”
Beyond counting lives, there are certain benefits to a world in which a higher proportion of untimely deaths are prevented. Fewer people will be hit with unexpected grief; parents can be more confident that their children will survive them; friends will lose fewer friends before old age; individuals can expect (more confidently) to live into old age and plan accordingly. I suspect there are knock-on effects of the form "fewer people get messed up by grief/orphanhood/etc., reducing the pain that messed-up people cause to others".
On the other hand, one could say there are benefits to having more lives lived even if they're plagued by more untimely deaths. More youngsters bringing in ideas; more rapid turnover of lifelong dictators; also consider Planck's "science progresses one funeral at a time". (Though, actually, the body of the post talks about having more lives lived via preventing existential risk, which strikes me as very different from "having a few more lives on the margin by e.g. persuading more people to become parents". For one thing, as outlined in the post, estimates of how many lives might be saved have error bars spanning many orders of magnitude; it's not really possible to do sane quantitative reasoning except maybe about the conservative lower bounds.) One can also argue that reducing child mortality so low has caused many parents to become hypersensitive to the remaining dangers and to over-shelter their kids.
On the other other hand, there are benefits to longer lives and longer careers, depending on what stage of life the "untimely deaths" you're targeting occur in: many brilliant creators died in their 30s or even younger, and some others were seriously crippled by grief for a loved one. (And, of course, it's possible that preventing some of these deaths would have accelerated the progress of technology, which might factor into preventing more deaths, or into enabling more births, or both.)
One can make an argument for either, but I think the "not-just-life-count" benefits generally look like they make "saving existing lives" a better idea than "enabling more future lives". The question might then become "How much should one be preferred over the other? At what ratio?"
I see the notes to assume that the abstract choice is as stated, to avoid "actually, in real life" concerns, etc. I'm not sure if this is supposed to apply to most or all of the above considerations. If it is, then the question seems to me like "Is it better to save your children's lives or enable future births? Ignore the grief, disruption, failed hopes, etc. that would make you prefer to save your children's lives"—it's assuming away what may be the whole point. Which is a problem if you intend to then apply the conclusions to real-world decisions like where to donate.
[...] I think the "not-just-life-count" benefits generally look like they make "saving existing lives" a better idea than "enabling more future lives". The question might then become "How much should one be preferred over the other? At what ratio?"
[...] then the question seems to me like "Is it better to save your children's lives or enable future births? Ignore the grief, disruption, failed hopes, etc. that would make you prefer to save your children's lives"—it's assuming away what may be the whole point.
(Agreed!) I find it very counterintuitive how the standard framework of population ethics recommends that we ignore all the instrumental (or extrinsic / relational / non-independent) value of various lives and experiences.
After all, I would argue that our practical intuition is mostly tracking the positive roles of those things, which may in part explain our intuitive disagreement with thought experiments that attempt to draw sharp boundaries around the supposedly fundamental bits.
(I also explored this in the context of population ethics here. Those essays are framed in suffering-focused and minimalist terms respectively, but the main points seem applicable to all impartial consequentialist views, so perhaps people would find them useful more broadly.)
That you know of. But there may be some way of disentangling our confusions about this topic that leaves the anti-repugnant-conclusion intuition intact, and leaves mine intact too. I’m not really feeling the need to accept one wrong-seeming view just to avoid another one.
I like this reply by Non-Utilitarian Holden!
Self-identifying utilitarians seem to have prematurely restricted the option space in population ethics. The (moral realist) utilitarian approach to population ethics goes something like this. "Something must be intrinsically valuable (or at least disvaluable). That value dictates everyone’s choices if they're rational. If people want to do what's moral, it would be a grave mistake not to have strong and specific preferences about allocating every resource in our future lightcone."
It seems like there are other ways to conceptualize population ethics. We have to sit back and ask what we're even doing, what specific question population ethics is trying to answer.
Here are two alternative frameworks for population ethics, both of which I consider helpful:
(Those two frameworks can complement one another.)
The first framework is very similar to what common sense says about the ethics of having children. For instance, parents are free to have children or not have them, but they have specific duties toward them when they do have them.
The second framework (the "Unused Garden Analogy") gives rise to further distinctions:
Some kind of utilitarian perspective can play a role in the Unused Garden Analogy. For instance, someone may think that the "most altruistic setup" for how to use the garden has to be "utilitarianism, in spirit." Note that it couldn't be a moral realist version of utilitarianism because that would invalidate the entire analogy; it would declare war against all other perspectives. It would say that anyone who doesn't pick exactly the right answer is failing to do what's moral. If moral realism is right and a specific flavor of utilitarianism is the single correct moral theory, then how could anyone seriously propose using the garden for anything else but The One Correct Way To Use It? No, instead, people could have a garden preference that's like subjectivist utilitarianism. They cast a vote on how to use the garden, and for garden-specific purposes, that vote is a flavor of utilitarianism. However, by voting, they aren't expressing that everyone else's votes should be ignored.
Good exploration. I think more thought (and bullet-biting) is needed, both in recognizing that intuitions about population ethics are well known to be self-contradictory AND in noticing how heavily the debate between NUH and UH leans on fairly naive intuitions.
Personally, I'm nowhere near utilitarian - I don't believe in a consistent non-subjective valuation of other people's lives. I DO value other people's lives, but I recognize that this is my valuation, not any objective aggregate, and it's certainly nowhere near linear and not directly proportional to their self-evaluation.
My declining utility for larger quantities of similar lives STILL leads me to prefer a longer timeline for humans (or human-empathetic intelligent beings), in order to get more variety. I strongly prefer species-longevity over individual non-me longevity, and variety of type and context over sheer quantity.
Which directly leads to preferring species survival over current quantity or quality. The future is big enough that we'll come back from almost any setback short of true extinction. What ACTIONS I take that have any impact whatsoever on that are a completely different calculation, though.
I think “the very repugnant conclusion is actually fine” does pretty well against its alternatives. It’s totally possible that our intuitive aversion to it comes from just not being able to wrap our brains around some aspect of (a) how huge the numbers of “barely worth living” lives would have to be in order to make the very repugnant conclusion work, or (b) something that is just confusing about the idea of “making it possible for additional people to exist.”
While this doesn't sound crazy to me, I'm skeptical that my anti-VRC intuitions can be explained by these factors. I think you can get something "very repugnant" on scales that our minds can comprehend (and not involving lives that are "barely worth living" by classical utilitarian standards). Suppose you can populate* some twin-Earth planet with either a) 10 people with lives equivalent to the happiest person on real Earth, or b) one person with a life equivalent to the most miserable person on real Earth plus 8 billion people with lives equivalent to the average resident of a modern industrialized nation.
I'd be surprised if a classical utilitarian thought the total happiness minus suffering in (b) was less than in (a). Heck, 8 billion might be pretty generous. But I would definitely choose (a).
To me the very-repugnance just gets much worse the more you scale things up. I also find that basically every suffering-focused EA I know is not scope-neglectful about the badness of suffering (at least, when it's sufficiently intense), nor in any area other than population ethics. So it would be pretty strange if we just happened to be falling prey to that error in thought experiments where there's another explanation—i.e., we consider suffering especially important—which is consistent with our intuitions about cases that don't involve large numbers.
* As usual, ignore the flow-through effects on other lives.
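The commenter's twin-Earth comparison can be made concrete with illustrative numbers. This is a minimal sketch: the welfare scores below are my own hypothetical assumptions (a -10 to +10 scale where 0 is a life not worth living), not figures from the comment, but any scores in this spirit make the same point, that classical-utilitarian totals favor (b) while the commenter chooses (a).

```python
# Illustrative totals for the twin-Earth choice, under classical utilitarianism.
# All three welfare scores are assumed for illustration; the commenter's point
# is precisely that this arithmetic can conflict with the intuitive choice of (a).

happiest = 10.0          # assumed score of Earth's happiest person
most_miserable = -10.0   # assumed score of Earth's most miserable person
average_modern = 5.0     # assumed score of an average resident of a rich nation

option_a = 10 * happiest                                     # ten extremely happy lives
option_b = most_miserable + 8_000_000_000 * average_modern   # one miserable + 8B decent lives

assert option_b > option_a  # the total view favors (b) by an enormous margin
```

As the comment says, "8 billion might be pretty generous": even a tiny fraction of that population would tip the totals toward (b).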
Consider three possible worlds:
- World A: 5 billion future people have good lives. Let’s say their lives are an 8/10 on some relevant scale (reducing the quality of a life to a number is a simplification; see footnote for a bit more on this5).
- World B: 5 billion future people have slightly better than good lives, let’s say 8.1/10. And there are an additional 5 billion people who have not-as-good-but-still-pretty-good lives, let’s say 7/10.
- World C: 10 billion future people have good lives, 8/10.
Claim: World B > World A and World C > World B. Therefore, World C > World A.
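Under the total view that UH later defends, the quoted claim reduces to simple arithmetic. A minimal sketch (the summing rule here is the total view's assumption; as the commenter below notes, it is not obvious that "the goodness of a life" supports this kind of aggregation):

```python
# A rough total-utilitarian scoring of Worlds A, B, and C from the post.
# The 0-10 "quality" numbers are the post's simplification; summing
# (population * quality) over groups is the total view's assumption.

BILLION = 10**9

def total_welfare(groups):
    """Sum of (population * quality-of-life score) over each group of people."""
    return sum(pop * quality for pop, quality in groups)

world_a = total_welfare([(5 * BILLION, 8.0)])                      # 4.0e10
world_b = total_welfare([(5 * BILLION, 8.1), (5 * BILLION, 7.0)])  # 7.55e10
world_c = total_welfare([(10 * BILLION, 8.0)])                     # 8.0e10

assert world_a < world_b < world_c  # B > A and C > B, so C > A by transitivity
```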
I suspect that there's some confusion lurking in the fact that "the goodness of a life" isn't well-defined. When I look at these three worlds, I think I probably prefer C over B, but I'm unsure, and I don't have any clear position about A versus B. Part of the difficulty seems to be that in order to evaluate them, I would like to do something like visualizing the daily lives of people in world A and world B. But since I don't know what a "7/10 life", "8/10 life" and "8.1/10 life" mean in concrete terms, I have no idea of what I should visualize.
Hi Holden, thanks for writing this!
I think the currently most promising way to formally capture your person-affecting but not antinatalist intuitions in a utilitarian view would be something like Teruji Thomas, 2019, The Asymmetry, Uncertainty, and the Long Term from GPI, and I would strongly recommend looking into it. To summarize:
I've also thought of something similar here, but far less developed. I think such views have been underexplored. I suspect one reason is that these approaches tend to be more mathematically complex to develop in full (the independence of irrelevant alternatives is a strong simplifying assumption), which limits who can contribute and requires more work from each contributor to make progress. Teruji Thomas has both a PhD in mathematics and a PhD in philosophy.
If you're still sympathetic to the view that making more happy people is inherently good overall, not just able to offset losses, that might be best captured through moral uncertainty rather than fitting it all in one view. You could assign some credence to a symmetric or weakly asymmetric utilitarian view (possibly with moral uncertainty over how weakly asymmetric it should be), and some credence to something like the view by Teruji Thomas above.
If we could do something to lower the probability of the human race going extinct,1 that would be really good. But how good? Is preventing extinction more like “saving 8 billion lives” (the number of people alive today), or “saving 80 billion lives” (the number who will be alive over the next 10 generations) … or "saving 625 quadrillion lives" (an Our World in Data estimate of the number of people who could ever be born) ... or “saving a comically huge number of lives" (Nick Bostrom argues for well over 10^46 as the total number of people, including digital people, who could ever exist)?
More specifically, is “a person getting to live a good life, when they otherwise would have never existed” the kind of thing we should value? Is it as good as “a premature death prevented?”
Among effective altruists, it’s common to answer: “Yes, it is; preventing extinction is somewhere around as good as saving [some crazy number] of lives; so if there’s any way to reduce the odds of extinction by even a tiny amount, that’s where we should focus all the attention and resources we can.”
I feel conflicted about this.
Reflecting these mixed feelings, I'm going to examine the philosophical case for caring about “extra lives lived” (putting aside the first bullet point above), via a dialogue between two versions of myself: Utilitarian Holden (UH) and Non-Utilitarian Holden (NUH).2
This represents actual dialogues I’ve had with myself (so neither side is a pure straw person), although this particular dialogue serves primarily to illustrate UH's views and how they are defended against initial and/or basic objections from NUH. In future dialogues, NUH will raise more sophisticated objections.
This is part of a set of dialogues on future-proof ethics: trying to make ethical decisions that we can remain proud of in the future, after a great deal of (societal and/or personal) moral progress. (Previous dialogue here, though this one stands on its own.)
A couple more notes before I finally get started:
Dialogue on “extra lives lived”
To keep it clear who's talking when, I'm using -UH- for "Utilitarian Holden" and -NUH- for "non-Utilitarian Holden." (In the audio version of this piece, my wife voices NUH.)
-UH-
Let’s start here:
In that situation, I think everyone would be saying: “How is this a question? Even if your impact on extinction risk is small, even if it’s uncertain and fuzzy, there are just SO MANY MORE people affected by that. If you choose to focus on today’s world, you’re essentially saying that you think today’s people count more than 10,000 times as much as future people.
“Now granted, most of the people alive in your time DO act that way - they ignore the future. But someday, if society becomes morally wiser, that will look unacceptable; similarly, if you become morally wiser, you'll regret it. It's basically deciding that 99.999%+ of one’s fellow humans aren’t worth worrying about, just because they don’t exist yet.
“Do the forward-looking thing, the future-proof thing. Focus on helping the massive number of people who don’t exist yet.”
-NUH-
I feel like you are skipping a very big step here. We’re talking about what potential people who don’t exist yet would say about giving them a chance to exist? Does that even make sense?
That is: it sounds like you’re counting every “potential person” as someone whose wishes we should be respecting, including their wish to exist instead of not exist. So among other things, that means a larger population is better?
-UH-
Yes.
-NUH-
I mean, that’s super weird, right? Like is it ethically obligatory to have as many children as you can?
-UH-
It’s not, for a bunch of reasons.
The biggest one for now is that we’re focused on thin utilitarianism - how to make choices about actions like donating and career choice, not how to make choices about everything. For questions like how many children to have, I think there’s much more scope for a multidimensional morality that isn’t all about respecting the interests of others.
I also generally think we’re liable to get confused if we’re talking about reproductive decisions, since reproductive autonomy is such an important value and one that has historically been undermined at times in ugly ways. My views here aren’t about reproductive decisions, they’re about avoiding existential catastrophes. Longtermists (people who focus on the long-run future, as I’m advocating here) tend to focus on things that could affect the ultimate, long-run population of the world, and it’s really unclear how having children or not affects that (because the main factors behind the ultimate, long-run population of the world have more to do with things like the odds of extinction and of explosive civilization-wide changes, and it's unclear how having children affects those).
So let’s instead stay focused on the question I asked. That is: if you prevent an existential catastrophe, so that there’s a large flourishing future population, does each of those future people count as a “beneficiary” of what you did, such that their benefits aggregate up to a very large number?
-NUH-
OK. I say no, such “potential future people” do not count. And I’m not moved by your story about how this may one day look cruel or inconsiderate. It’s not that I think some types of people are less valuable than others, it’s that I don’t think increasing the odds that someone ever exists at all is benefiting them.
-UH-
Let’s briefly walk through a few challenges to your position. You can learn more about these challenges from the academic population ethics literature; I recommend Hilary Greaves’s short piece on this.
Challenge 1: Future people and the “mere addition paradox”
-UH-
So you say you don’t see “potential future people” as “beneficiaries” whose interests count. But let’s say that the worst effects of climate change won’t be felt for another 80 years or so, in which case the vast majority of people affected will be people who aren’t alive today. Do you discount those folks and their interests?
-NUH-
No, but that’s different. Climate change isn’t about whether they get to exist or not, it’s about whether their lives go better or worse.
-UH-
Well, it’s about both. The world in which we contain/prevent/mitigate climate change contains completely different people in the future from the world in which we don’t. Any difference between two worlds will ripple chaotically and affect things like which sperm fertilize which eggs, which will completely change the future people that exist.
So you really can’t point to some fixed set of people that is “affected” by climate change. Your desire to mitigate climate change is really about causing there to be better off people in the future, instead of completely different worse off people. It’s pretty hard to maintain this position while also saying that you only care about “actual” rather than “potential” people, or “present” rather than “future” ones.
-NUH-
I can still take the position that:
-UH-
That’s going to be a tough position to maintain.
Consider three possible worlds:
My guess is that you think World B seems clearly better than World A - there are 5 billion “better-off instead of worse-off” future people, and the added 5 billion people seem neutral (not good, not bad).
But I’d also guess you think World C seems clearly better than World B. The change is a small worsening in quality of life for the better-off half of the population, and a large improvement for the worse-off half.
But if World C is better than World B and World B is better than World A, doesn’t that mean World C is better than World A? And World C is the same as World A, just with a bigger population.
-NUH-
I admit my intuitions are as you say. I prefer B when comparing it to A, and C when comparing it to B. However, when I look at C vs. A, I’m not sure what to think. Maybe there is a mistake somewhere - for example, maybe I should think that it’s bad for additional people to come to exist.
-UH-
That would imply that the human race going extinct would be great, no? Extinction would prevent massive numbers of people from ever existing.
-NUH-
That is definitely not where I am.
OK, you’ve successfully got me puzzled about what’s going on in my brain. Before I try to process it, how about you confuse me more?
Challenge 2: Asymmetry
-UH-
Sure thing. Let’s talk about another problem with the attempt to be “neutral” on whether there are more or fewer people in the future.
Say that you can take some action to prevent a horrible dystopia from arising in a distant corner of the galaxy. In this dystopia, the vast majority of people will wish they didn’t exist, but they won’t have that choice. You have the opportunity to ensure that, instead of this dystopia, there will simply be nothing there. Does that opportunity seem valuable?
-NUH-
It does, enormously valuable.
-UH-
OK. The broader intuition here is that preventing lives that are worse than nonexistence has high ethical value - does that seem right?
-NUH-
Yes.
-UH-
Now you’re in a state where you think preventing bad lives is good, but preventing good lives is neutral.
But the thing is, every time a life comes into existence, there’s some risk it will be really bad (such that the person living it wishes they didn’t exist). So if you count the bad as bad and the good as neutral, you should think that each future life is purely a bad thing - some chance it’s bad, some chance it’s neutral. So you should want to minimize future lives.
Or at the civilization level: say that if humanity continues existing, there’s a 99% chance we will have an enormous (at least 10^18 people) flourishing civilization, and a 1% chance we’ll end up in an equally enormous, horrible dystopia. And even the flourishing civilization will have some people in it who wish they didn’t exist. Confronting this possibility, you should hope that humanity doesn’t continue existing, since then there won’t be any of these “people who wish they didn’t exist.” You should, again, think that extinction is a great ethical good.
-NUH-
Yikes. Like I’ve said, I don’t think that.
-UH-
In that case, I think the most natural way out of this is to conclude that a huge flourishing civilization would be good enough to compensate - at least partly - for the risk of a huge dystopia.
That is: if you’re fine with a 99% chance of a flourishing civilization and a 1% chance of a dystopia, this implies that a flourishing civilization is at least 1% as good as a dystopia is bad.
And that implies that “10^18 flourishing lives” are at least 1% as good as “10^18 horribly suffering lives” are bad. 1% of 10^18 is a lot, as we’ve discussed!
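UH's step here can be written out as a one-line expected-value inequality. A minimal sketch (normalizing the dystopia's badness to one unit is my own assumption, made only to show where the "at least 1%" figure comes from):

```python
# UH's argument: if you are "fine with" a 99% chance of a flourishing
# civilization (value F) and a 1% chance of a dystopia (value -D), versus a
# guaranteed nothing (value 0), then:
#   0.99 * F + 0.01 * (-D) >= 0   =>   F >= (0.01 / 0.99) * D, i.e. ~1% of D

D = 1.0            # normalize the dystopia's badness to 1 unit (assumed scale)
p_dystopia = 0.01

min_F = (p_dystopia / (1 - p_dystopia)) * D
assert min_F > 0.01  # flourishing must be worth slightly more than 1% of D
```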
-NUH-
Well, you’ve definitely made me feel confused about what I think about this topic. But that isn’t the same as convincing me that it’s good for there to be more persons. I see how trying to be neutral about population size leads to weird implications. But so does your position.
For example, if you think that adding more lives has ethical value, you end up with what's called the repugnant conclusion. Actually, let’s skip that and talk about the very repugnant conclusion. I’ll give my own set of hypothetical worlds:
There has to be some “larger number N” such that you prefer World E to World D. That’s a pretty wacky seeming position too!
Theory X
-UH-
That’s true. There’s no way of handling questions like these (aka population ethics) that feels totally satisfactory for every imaginable case.
-NUH-
That you know of. But there may be some way of disentangling our confusions about this topic that leaves the anti-repugnant-conclusion intuition intact, and leaves mine intact too. I’m not really feeling the need to accept one wrong-seeming view just to avoid another one.
-UH-
“Some way of disentangling our confusions” is what Derek Parfit called theory X. Population ethicists have looked for it for a while. They’ve not only not found it, they’ve produced impossibility theorems heavily implying that it does not exist.
That is, the various intuitions we want to hold onto (such as “the very repugnant conclusion is false” and “extinction would not be good” and various others) collectively contradict each other.
So it looks like we probably have to pick something weird to believe about this whole “Is it good for there to be more people?” question. And if we have to pick something, I’m going to go ahead and pick what’s called the total view: the view that we should maximize the sum total of the well-being of all persons. You could think of this as if our “potential beneficiaries” include all persons who ever could exist, and getting to exist6 is a benefit that is capable of overriding significant harms. (There is more complexity to the total view than this, but it's not the focus of this piece.)
I think there are a number of good reasons to pick this general approach:
-NUH-
Maybe part of what’s confusing here is something like:
-UH-
That approach would contradict some of the key principles of “other-centered ethics” discussed previously.
I previously argued that once you think something counts as a benefit, with some amount of value, a high enough amount of that thing can swamp all other ethical considerations. In the example we used previously, enough of “helping someone have a nice day at the beach” can outweigh “helping someone avoid a tragic death.”
-NUH-
Hmm.
If this were a philosophy seminar, I would think you were making a perfectly good case here.
But the feeling I have at this juncture is not so much “Ah yes, I see how all of those potential lives are a great ethical good!” as “I feel like I’ve been tricked/talked into seeing no alternative.”
I don’t need to “pick a theory.” I can zoom back out to the big picture and say “Doing things that will make it possible for more future people to exist is not what I signed up for when I set out to donate money to make the world a better place. It’s not the case that addressing today’s injustices and inequities can be outweighed by that goal.”
I don’t need a perfectly consistent approach to population ethics, I don’t need to follow “rules” when giving away money. I can do things that are uncontroversially valuable, such as preventing premature deaths and improving education; I can use math to maximize the amount of those things that I do. I don’t need a master framework that lands in this strange place.8
-UH-
Again, I think a lot of the detailed back-and-forth has obscured the fact that there are simple principles at play here:
I might end up getting it wrong and doing zero good. So might you. I am taking my best shot at avoiding the moral prejudices of my day and focusing my giving on helping others, defined fairly and expansively.
For further reading on population ethics, see:
Closing thoughts
I feel a lot of sympathy for the closing positions of both UH and NUH.
I think something like UH’s views do, in fact, give me the best shot available at an ethics that is highly “other-centered” and “future-proof.” But as I’ve pondered these arguments, I’ve simultaneously become more compelled by some of UH’s unusual views, and less convinced that it’s so important to pursue an “other-centered” or “future-proof” ethics. At some point in the future, I’ll argue that these ideals are probably unattainable anyway, which weakens my commitment to them.
Ultimately, if we put this in a frame of "deciding how to spend $1 billion," the arguments in this and previous pieces would move me to spend a chunk of it on targeting existential risk reduction - but probably not the majority (if they were the only arguments for targeting existential risk reduction, which I don't think they are). I find UH compelling, but not wholly convincing.
However, there is a different line of reasoning for focusing on causes like AI risk reduction, which doesn’t require unusual views about population ethics. That’s the case I’ve presented in the Most Important Century series, and I find it more compelling.
Footnotes
I’m sticking with “extinction” in this piece rather than discussing the subtly different idea of “existential catastrophe.” The things I say about extinction mostly apply to existential catastrophe, but I think that idea adds needless confusion for this particular purpose. ↩
In this case, "Utilitarian Holden" will be arguing for a particular version of utilitarianism, not for utilitarianism generally. But it's the same character from a previous dialogue. ↩
It's about 1.5x as much as the Our World in Data estimate, and not everyone would call that "close" in every context, but I think for the purposes of this piece, the two numbers have all the same implications, and "one quintillion" is simpler and easier to talk about than 625 quadrillion. ↩
"1% of 1% of 1% of 1%" is 1%*1%*1%*1%. 1%*1%*1%*1%*10^16 (the hypothesized number of future lives, including only a 1% chance that humanity avoids extinction for long enough) = 10^8, or 100 million. ↩
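The footnote's arithmetic can be checked directly:

```python
# Checking the footnote: "1% of 1% of 1% of 1%" of 10**16 hypothesized lives.
chance = 0.01 ** 4            # 1% * 1% * 1% * 1% = 10**-8
lives = chance * 10**16       # = 10**8, i.e. 100 million
assert abs(lives - 1e8) < 1   # matches the footnote's 100 million
```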
For this example, what the numbers are trying to communicate is that all things considered, some of these lives would rank more highly than others if people were choosing what conditions they would want to live their life under. They're supposed to implicitly incorporate reactions to things like inequality - so for example, World B, which has more inequality than Worlds A and C, might have to have better conditions to "compensate" such that the 5 billion people with "8.1/10" lives still prefer their conditions in World B to what they would be in World A. ↩
(Assuming that existence is preferred to non-existence given the conditions of existence) ↩
For one example (that neither UH nor NUH finds particularly compelling, but others might), see this comment. ↩
This general position does have some defenders in philosophy - see https://plato.stanford.edu/entries/moral-particularism/ ↩