A lot of people speak in terms of "existential risk from artificial intelligence" or "existential risk from nuclear war." While this is fine to a first approximation, I rarely see it pointed out that this is not how risk works. Existential risk refers to the probability of a set of outcomes, and those outcomes are not defined in terms of their cause.

To illustrate why this is a problem, observe that there are numerous ways for two or more things-we-call-existential-risks to jointly contribute to the same bad outcome. Imagine nuclear weapons leading to a partial collapse of civilization, which in turn lets an extremist group end the world with an engineered virus. Do we attribute this to existential risk from nuclear weapons or from bioterrorism? That question is neither well-defined nor important. All that matters is how much each factor contributes to [existential risk of any form].

Thus, ask not "is climate change an existential risk," but "does climate change contribute to existential risk?" Everything we care about is contained in the second question.
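To make the second question slightly more precise (the notation here is mine, not standard): let $D$ denote the event of an existential catastrophe, defined without reference to its cause. The contribution of a factor $C$ is then, roughly,

$$\Delta(C) \;=\; P(D) \;-\; P(D \mid C \text{ removed}),$$

which counts every pathway by which $C$ raises the probability of $D$, whether or not $C$ would afterwards be recognized as "the" cause.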

Comments:

Assuming the goal is to prevent existential risk, how is this view beneficial? Aren't the conditions for nuclear war different enough from those of climate change that expecting a single policy to prevent both is too much?

With the "existential risk from " framing, I've heard people say things like "climate change is not an existential risk, but it might contribute to other existential risks." Other people have talked about things like "second-order existential risks." This strikes me as fairly confused. In particular, to assess the expected impact of some intervention, you don't care about whether effects are first-order, second-order, or even less direct, but the "classical" view pushes you to regard them as qualitatively different things. Conversely, the framing "how does climate change contribute to existential risk" subsumes -th order effects for all .

Less abstractly, suppose you work on mitigating climate change and want to assess how much this influences existential risk. The question you care about is

  • By how much does my intervention on climate change mitigate existential risk?

This immediately leads to the follow-up question

  • How much does climate change contribute to existential risk?

which is precisely the framing I suggest. Thus, it captures exactly the thing we care about. By contrast, the classical framing "existential risk from climate change" would ask something analogous to

  • How likely are we to end up in a world where climate change is the easily recognized primary cause for the end of the world?

And this is simply not the right question.

So, this is about taking the causes seriously even when they are not the direct final link in the chain before extinction?

Yes, in the sense that I think what you said describes how the views differ. It's not how I would justify the view, though; I think the fundamental reason the classical view is inaccurate is that

Existential risk refers to the probability of a set of outcomes, and those outcomes are not defined in terms of their cause.

I.e., there is nothing in the definition of existential risk that Bostrom or anyone else gives that references the cause.

By how much does my intervention on climate change mitigate existential risk?

The question is bad because it presupposes that the intervention could only decrease and not increase existential risk. 

To the extent that climate change might increase other x-risks, it's because it destroys capital (buildings next to the sea, fertile land) and societies have to deal with that loss of capital.

The more economic growth we have, the easier it is for society to deal with a loss of capital. An intervention that pays for a lower carbon footprint with lower economic growth might very well make things worse instead of better.

The core of scientific thinking is to isolate factors. It's much easier to reason about direct effects, and that's why it makes sense to investigate direct effects.

To the extent that climate change might increase other x-risks, it's because it destroys capital (buildings next to the sea, fertile land) and societies have to deal with that loss of capital.
The more economic growth we have, the easier it is for society to deal with a loss of capital. An intervention that pays for a lower carbon footprint with lower economic growth might very well make things worse instead of better.

That seems to me to be another argument against the standard framing. If you look at "x-risk from climate change," you could accurately conclude that your intervention decreases x-risk from climate change – without realizing that it increases existential risk overall.

If you ask instead, "By how much does my intervention on climate change affect existential risk?" (I agree that using 'mitigate' was bad for the reasons you say), you could conclude that it leads to an increase because it stifles economic growth. Once again, the standard framing doesn't ask the right question.
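As a toy numeric illustration of this (all numbers invented, and the pathways treated as disjoint, which real pathways are not):

```python
# Toy model: total existential risk as a sum of disjoint pathway probabilities.
# All numbers are invented for illustration.

baseline = {
    "climate-mediated collapse": 0.005,  # x-risk running through climate change
    "other pathways": 0.010,             # x-risk from everything else
}

# A climate intervention that also slows economic growth: it shrinks the
# climate-mediated pathway, but a poorer society absorbs shocks less well,
# so the other pathways grow.
after_intervention = {
    "climate-mediated collapse": 0.004,
    "other pathways": 0.012,
}

def total_risk(pathways):
    """Total existential risk under the disjoint-pathways simplification."""
    return sum(pathways.values())

print(f"classical framing: climate x-risk "
      f"{baseline['climate-mediated collapse']:.3f} -> "
      f"{after_intervention['climate-mediated collapse']:.3f} (looks like a win)")
print(f"proposed framing:  total x-risk "
      f"{total_risk(baseline):.3f} -> {total_risk(after_intervention):.3f} (a net loss)")
```

Here the classical framing reports a success, while the total-risk question reveals a net-negative intervention.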

In general, the new framing does not prevent you from isolating factors, it only prevents you from ignoring part of the effect of a factor.

I think you are deluding yourself when you think you can examine the future >50 years down the road without ignoring parts of the effects of a factor.