Introduction
I, and many others, have argued that longtermism is not needed to justify x-risk mitigation: that all such actions can be adequately justified within a neartermist framework, and that, given the poor reception, large inferential distance, and general inaccessibility of longtermist arguments, we might be better served arguing for x-risk mitigation within a strictly neartermist framework.
But this is not fully accurate.
Averting extinction makes sense within neartermist ethical frameworks (8 billion people dying is very bad), but extinction is not the only category of existential risk, and it is the only one whose mitigation can readily be justified within neartermist frameworks.
Longtermism and Existential Risks
Excluding extinction, all other existential risks, and indeed the very concept of an "existential risk" itself, implicitly rely on longtermism.
In The Precipice, Toby Ord defined an existential catastrophe as an event that permanently destroys or curtails the longterm potential of humanity and human civilisation.
A few classes of existential catastrophe other than extinction:
Value lock-in
Irreversible technological regression
Any discrete event that prevents us from reaching technological maturity
Any discrete event that leads to Bostrom's "Astronomical Waste"
(I would also add "technological stagnation" to the list. It's not a discrete event, so Ord didn't consider it a catastrophe, but it has the same effect of curtailing the longterm potential of human civilisation.)
We cannot even conceive of an existential catastrophe without the notion of the "longterm potential of human civilisation", or without concepts like "technological maturity" and "astronomical waste".
All of these concepts are defined only within a longtermist framework.
Thus, existential risk mitigation is an inherently longtermist project.
Caveats
While extinction risks aren't the only existential risks, they are the category that has attracted the supermajority of attention and funding.
Excluding extinction risk mitigation, other longtermist projects look like:
Grand strategy for humanity
Promoting more adequate/resilient institutions
Better mechanisms for coordination and cooperation
Governance of advanced/speculative technologies
Space settlement and colonisation
Etc.
Some of these actions may not have that large an effect on near-term extinction risks.
Perhaps there's an argument that we should advocate for actions mitigating near-term extinction risks separately from other, more inherently longtermist actions.