An existential risk (or x-risk) is a risk whose realization would have astronomically large negative consequences for humanity, such as human extinction or permanent global totalitarianism.
Nick Bostrom introduced the term "existential risk" in his 2002 paper "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards."1 In the paper, Bostrom defined an existential risk as:
One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
The Oxford Future of Humanity Institute (FHI) was founded by Bostrom in 2005 in part to study existential risks. Other institutions with a generalist focus on existential risk include the Centre for the Study of Existential Risk.
FHI's existential-risk.org FAQ notes regarding the definition of "existential risk":
An existential risk is one that threatens the entire future of humanity. [...]
“Humanity”, in this context, does not mean “the biological species Homo sapiens”. If we humans were to evolve into another species, or merge or replace ourselves with intelligent machines, this would not necessarily mean that an existential catastrophe had occurred — although it might if the quality of life enjoyed by those new life forms turns out to be far inferior to that enjoyed by humans.
Bostrom2 proposes a series of classifications for existential risks.
The total negative result of an existential catastrophe could amount to all potential future lives going unrealized. A rough and conservative calculation3 gives a total of 10^54 potential future human lives – smarter, happier and kinder than we are. Hence, almost no other task would have as much positive impact as existential risk reduction.
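To make the expected-value reasoning concrete (an illustrative back-of-the-envelope sketch, not part of the cited calculation; the one-in-a-billion risk reduction is an arbitrary example value), even a minuscule reduction Δp in the probability of existential catastrophe corresponds to an enormous expected number of future lives:

$$\mathbb{E}[\text{future lives gained}] \;=\; \Delta p \cdot N \;\approx\; 10^{-9} \times 10^{54} \;=\; 10^{45}.$$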
Existential risks also present a unique challenge because of their irreversible nature. By definition, we will never experience and survive an existential catastrophe4, and so cannot learn from our mistakes. They are also subject to strong observational selection effects5: one cannot estimate their future probability from the past, because, in Bayesian terms, the conditional probability of a past existential catastrophe given our present existence is always 0, no matter how high the true probability of existential risk is. Instead, indirect estimates have to be used, such as possible existential catastrophes happening elsewhere. A high probability of extinction could function as a Great Filter and explain why there is no evidence of space colonization.
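This selection effect can be stated explicitly (a minimal Bayesian sketch; here E stands for "observers exist today" and C for "an existential catastrophe occurred in our past"):

$$P(C \mid E) \;=\; \frac{P(E \mid C)\,P(C)}{P(E)} \;=\; 0, \qquad \text{since } P(E \mid C) = 0.$$

Our survival so far therefore carries no information about how large P(C) actually was, which is why only indirect estimates are available.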
Another related idea is that of a suffering risk (or s-risk).
The focus on existential risks on LessWrong dates back to Bostrom's 2003 paper Astronomical Waste: The Opportunity Cost of Delayed Technological Development, which argues that "the chief goal for utilitarians should be to reduce existential risk". Bostrom writes:
If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
Therefore, if our actions have even the slightest effect on the probability of eventual colonization, this will outweigh their effect on when colonization takes place. For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.
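The "over 10 million years" comparison can be recovered with a crude linear model (an illustrative simplification introduced here, not the resource-loss accounting of the original paper). Suppose the colonized future lasts on the order of T ≈ 10^9 years and is worth V in total, so that a delay of t years forfeits roughly (t/T)·V, while a one-percentage-point reduction in existential risk gains 0.01·V in expectation. The two are equal when

$$t \;=\; 0.01 \cdot T \;\approx\; 0.01 \times 10^{9}\ \text{years} \;=\; 10^{7}\ \text{years},$$

i.e. a delay of about ten million years, matching the figure in the quoted passage.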
The concept is expanded upon in his 2013 paper Existential Risk Prevention as Global Priority.