A lot of people speak in terms of "existential risk from artificial intelligence" or "existential risk from nuclear war." This is fine as a first approximation, but I rarely see it pointed out that it is not how risk works. Existential risk refers to the probability of a set of outcomes, and those outcomes are not defined in terms of their cause.
To illustrate why this is a problem, observe that there are numerous ways for two or more things-we-call-existential-risks to contribute equally to a bad outcome. Imagine nuclear weapons leading to a partial collapse of civilization, which then leads to an extremist group ending the world with an engineered virus. Do we attribute this to existential risk from nuclear weapons or to existential risk from bioterrorism? That question is not well-defined, nor does it matter. All that matters is how much each factor contributes to [existential risk of any form].
Thus, ask not "Is climate change an existential risk?" but "Does climate change contribute to existential risk?" Everything we care about is contained in the second question.
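To make "contributes" a bit more concrete (this is just one way to formalize it, not standard terminology), you can read the contribution of a factor X as a counterfactual difference in total existential risk:

$$\text{Contribution}(X) \;=\; P(\text{existential catastrophe}) \;-\; P(\text{existential catastrophe} \mid X \text{ absent or mitigated})$$

where "absent or mitigated" stands in for whatever counterfactual you can actually bring about. On this reading, the nuclear-weapons-then-bioterrorism scenario causes no bookkeeping problem: both factors can have large contributions, and we never have to decide which one gets credit for any particular pathway.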
That seems to me to be another argument against the standard framing. If you look at "x-risk from climate change," you could accurately conclude that your intervention decreases x-risk from climate change – without realizing that it increases existential risk overall.
If you ask instead, "By how much does my intervention on climate change affect existential risk?" (I agree that using 'mitigate' was bad for the reasons you say), you could conclude that the intervention increases existential risk overall because it stifles economic growth. Once again, the standard framing doesn't ask the right question.
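In the same spirit (again, one possible way of writing the question down, not a standard definition), the quantity to evaluate for an intervention is a single difference in overall risk:

$$\Delta \;=\; P(\text{existential catastrophe} \mid \text{intervention}) \;-\; P(\text{existential catastrophe} \mid \text{no intervention})$$

This difference includes every channel at once: the direct effect on climate and the indirect effect through economic growth show up in the same number, so a reduction along one channel cannot quietly mask an increase along another.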
In general, the new framing does not prevent you from isolating factors; it only prevents you from ignoring part of a factor's effect.