As someone who lurks a lot around LW but hasn't thought very seriously about x-risk, I found this post very useful. It helped clarify a few terms I often see around the site (e.g. Great Filter) and synthesized a lot of common attitudes that I've noticed. Thanks!
I have not read your post yet, but thought I'd mention that intro posts are sometimes liked more if they have an emotion-evoking picture.
Also, research shows that the most popular pictures on the internet are pictures of cats.
Therefore, as a first step towards existential risk awareness, we need an official Existential Risk Cat mascot.
(I am not an artist, but I imagine a black cat with stars or galaxies in its fur, playing with a ball of string... where the ball is the Earth. If you are an artist, this is probably your best way to contribute to the future of humanity, so please give it a try...)
Since the idea is existential risk, why not just use Tacgnol? It works especially well, since tacgnol also represents scope insensitivity. (Gnoool si Tacgnol.)
What about the plot of severity vs scope? OK maybe it's not quite a "picture" but surely the "emotion-evoking" box is ticked.
Meta
-This is not a duplicate of the original less wrong x-risk primer. I like lukeprog's article just fine, but it works mostly as a punch in the gut for anyone who needs a wake up call. Very little of the actual research on x-risk is discussed in that article, so the gap that was there before it was published was largely there after. My article and his would work well being read together.
-This was originally written to accompany a presentation I gave, hence the random inclusion of both hyperlinks and citations. It also lives, with minor differences, here.
-Summary: For various reasons the future is scarier than a lot of people realize. All sorts of things could lead to the destruction of the human species, ranging from asteroid impacts to runaway AIs, and these things are united by the fact that any one of them could destroy the value of the future from a human perspective. The dangers can be separated into bangs (very sudden extinction), crunches (not fatal but crippling), shrieks (mostly curse with a little blessing), and whimpers (a long, slow fading), though there is nothing sacred about these categories. Some humans are trying to prevent this, though their methods are still in their infancy. Much more should be done to support them.
In the beginning
I want to start this off with a quote, which nicely captures both how I used to feel about the idea of human extinction and how I feel about it now:
Back when I was a Christian I gave some thought to the rapture, which is not entirely unlike extinction as far as most ten-year-olds can tell. Sometime during this period I found a slim little book of fiction which portrayed a damned soul's experience of burning in hell forever, and that did scare me. Such torment, as luck would have it, is easy enough to avoid if you just call god the right name and ask forgiveness often enough.
When I was old enough to contemplate possible secular origins of the apocalypse, I was both an atheist and one of the people who tell glorious stories about the future. The potential fruits of technological development, from the end of aging to the creation of a benevolent super-human AI, excited me, and still excite me now. No doubt I would've admitted the possibility of human extinction; I don't really remember. But there wasn't the kind of internal siren that should go off when you start thinking seriously about one of the Worst Possible Outcomes. That I would remember.
But as I've gotten older I've come to appreciate that most of us are not afraid enough of the future. Those who are afraid are often afraid for the wrong reasons.
What is an Existential Risk?
An existential risk or x-risk (to use a common abbreviation) is "...one that threatens to annihilate Earth-originating intelligent life or permanently and drastically to curtail its potential" (Bostrom 2006). The definition contains some subtlety, as not all x-risks involve the outright death of every human. Some could take potentially eons to complete, and some are even survivable. Positioning x-risks within the broader landscape of risks yields something like this chart:
At the top right extreme is where Cthulhu sleeps. These are risks that carry the potential to drastically and negatively affect this and every subsequent human generation. So as not to keep everyone in suspense, let's use this chart to put a face on the shadows.
Four Types of Existential Risks
Philosopher Nick Bostrom has outlined four broad categories of x-risk. In more recent papers he hasn't used the terminology that I'm using here, so maybe he thinks the names are obsolete. I find them evocative and useful, however, so I'll stick with them until I have a reason to change.
Bangs are probably the easiest risks to conceptualize. Any event which causes the sudden and complete extinction of humanity would count as a Bang. Think asteroid impacts, supervolcanic eruptions, or intentionally misused nanoweapons.
Crunches are risks which humans survive but which leave us permanently unable to navigate to a more valuable future. An example might be depleting our planetary resources before we manage to build the infrastructure needed to mine asteroids or colonize other planets. After all the die-offs and fighting, some remnant of humanity could probably survive indefinitely, but it wouldn't be a world you'd want to wake up in.
Shrieks occur when a post-human civilization develops but only manages to realize a small amount of its potential. Shrieks are very difficult to effectively categorize, and I'm going to leave examples until the discussion below.
Whimpers are really long-term existential risks. The most straightforward is the heat death of the universe; within our current understanding of physics, no matter how advanced we get we will eventually be unable to escape the ravages of entropy. Another could be if we encounter a hostile alien civilization that decides to conquer us after we've already colonized the galaxy. Such a process could take a long time, and thus would count as a whimper.
Just because whimpers are so much less immediate than other categories of risk and x-risk doesn't automatically mean we can just ignore them; it has been argued that affecting the far future is one of the most important projects facing humanity, and thus we should take the time to do it right.
Sharp readers will no doubt have noticed that there is quite a bit of fuzziness to these classifications. Where, for example, should we put all-out nuclear war, the establishment of an oppressive global dictatorship, or the development of a dangerous and uncontrollable superintelligent AI? If everyone dies in the war it counts as a bang, but if it makes a nightmare of the biosphere while leaving a good fraction of humanity intact it would be a crunch. A global dictatorship wouldn't be an x-risk unless it used some (probably technological) means to achieve near-total control and long-term stability, in which case it would be a crunch. But it isn't hard to imagine such a situation in which some parts of life did get better, like if a violently oppressive government continued to develop advanced medicines so that citizens were universally healthier and longer-lived than people today. If that happened, it would be a shriek. A similar analysis applies to the AI, with the possible outcomes being a bang, a crunch, or a shriek depending on just how badly we misprogrammed it.
What Ties These Threads Together?
Even if you think existential threats deserve more attention, the rationale for treating them as a diverse but unified phenomenon may not be obvious. In addition to the crucial but (relatively) straightforward work of, say, tracking Near-Earth Objects (NEOs), existential risk researchers also think seriously about alien invasions and rogue AIs. With such a range of speculativeness, why group x-risks together at all?
It turns out that they share a cluster of features which gives them some cohesion and makes them worth studying under a single label, not all of which I discuss here. First and most obvious is that should any of them occur, the consequences would be truly vast relative to any other kind of risk. To see why, think about the difference between a catastrophe that kills 99% of humanity and one that kills 100%. As big a tragedy as the former would be, there's a chance humans could recover and build a post-human civilization. But if every person dies, then the entire value of our future is lost (Bostrom 2013).
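The asymmetry between 99% and 100% can be made vivid with some toy arithmetic. Every number below is invented purely for illustration (Bostrom's own estimates of potential future people are far larger); the point is only the shape of the comparison:

```python
# Toy comparison of a 99%-fatal vs. 100%-fatal catastrophe.
# All figures are made up for illustration only.
current_population = 8e9           # roughly today's population
potential_future_people = 1e16     # hypothetical count of people who could yet live

# 99% case: enormous death toll, but survivors can rebuild,
# so (in this crude model) the future itself is preserved.
loss_99 = 0.99 * current_population

# 100% case: everyone alive dies AND every potential future person is lost.
loss_100 = current_population + potential_future_people

print(loss_100 / loss_99)  # the 100% case is over a million times worse, not ~1% worse
```

However one quibbles with the placeholder numbers, the ratio is dominated by the lost future, which is the heart of the argument.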
Second, these are not risks which admit of a trial and error approach. Pretty much by definition a collision with an x-risk will spell doom for humanity, and so we must be more proactive in our strategies for reducing them. Related to this, we as a species have neither the cultural nor biological instincts needed to prepare us for the possibility of extinction. A group of people might live through several droughts and thus develop strong collective norms towards planning ahead and keeping generous food reserves. But they cannot have gone extinct multiple times, and thus they can't rely on their shared experience and cultural memory to guide them in the future. I certainly hope we can develop a set of norms and institutions which makes us all safer, but we can't wait to learn from history. We're going to have to start well in advance, or we won't survive.
A final commonality I'll mention is that the solutions to quite a number of x-risks are themselves x-risks. A powerful enough government could effectively halt research into dangerous pathogens or nano-replicators. But given how States have generally comported themselves in the past, one would do well to be cautious before investing them with that kind of power. Ditto for a superhuman AI, which could set up an infrastructure to protect us from asteroids, nuclear war, or even other less Friendly AI. Get the coding just a little wrong, though, and it might reuse your carbon to make paperclips.
It is indeed a knife edge along which we creep towards the future.
Measuring the Monsters
A first step is getting straight about how likely survival is. The reader may have encountered predictions of the "we have only a 50% chance of surviving the next hundred years" variety. Examining the validity of such estimates is worth doing, but I won't be taking up that challenge here; I tend to agree that these figures involve a lot of subjective judgement, but that even if the chances were very very small it would still be worth taking seriously (Bostrom 2006). At any rate, it seems to me that trying to calculate an overall likelihood of human extinction is going to be premature before we've nailed down probabilities for some of the different possible extinction scenarios. It is to the techniques which x-risk researchers rely on to try and do this that I now turn.
X-risk-assessments rely on both direct and indirect methods (Bostrom 2002). Using a direct method involves building a detailed causal model of the phenomenon and using that to generate a risk probability, while indirect methods include arguments, thought experiments, and information that we use to constrain and refine our guesses.
As far as I know, for some x-risks we could use direct methods if we just had a way to gather the relevant information. If we knew where all the NEOs were, we could use settled physics to predict whether any of them posed a threat and then prioritize accordingly. But we don't know where they all are, so we might instead examine the frequency of impacts throughout the history of the Earth and then reason about whether or not we think an impact will happen soon. It would be nice to exclusively use direct methods, but we supplement with indirect methods when we can't, and of course for x-risks like AI we are in an even more uncertain position than we are for NEOs.
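The frequency-based reasoning above can be sketched in a few lines. This is a deliberately crude model, and the impact rate is a placeholder I made up, not a real geological figure; the point is just how a historical base rate converts into a per-century probability if impacts are treated as a Poisson process:

```python
import math

# Hypothetical base rate: suppose the geological record suggested one
# civilization-threatening impact per 50 million years (invented number).
mean_years_between_impacts = 50e6
rate_per_year = 1 / mean_years_between_impacts

# Treating impacts as a Poisson process, the probability of at least one
# such impact in the next 100 years is:
p_next_century = 1 - math.exp(-rate_per_year * 100)
print(p_next_century)  # ~2e-6 under these made-up numbers
```

Real NEO risk estimates are of course far more sophisticated (and, once an object is tracked, switch to the direct, orbital-mechanics method), but this is the skeleton of the indirect version.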
The Fermi Paradox
Applying indirect methods can lead to some strange and counter-intuitive territory, an example of which is the mysteries surrounding the Fermi Paradox. The central question is: in a universe with so many potential hotbeds of life, why is it that when we listen for stirring in the void all we hear is silence? Many feel that the universe must be teeming with life, some of it intelligent, so why haven't we seen any sign of it yet?
Musing about possible solutions to the Fermi Paradox can be a lot of fun, and it's worth pointing out that we haven't been looking that long or that hard for signals yet. Nevertheless, I think the argument has some meat to it.
Observing this state of affairs, some have postulated the existence of at least one Great Filter, a step in the chain of development from the first organisms to space-faring civilizations that must be extremely hard to achieve.
This is cause for concern because the Great Filter could be in front of us or behind us. Let me explain: imagine a continuum with the simplest self-replicating molecules on one side and the Star Trek Enterprise on the other. From our position on the continuum we want to know whether or not we have already passed one of the hardest steps, but we have only our own planet to look at. So imagine that we send out probes to thousands of different worlds in the hopes that we will learn something.
If we find lots of simple eukaryotes that means that the Great Filter is probably not before the development of membrane-bound organelles. The list of possible places on the continuum the Great Filter could be shrinks just a little bit. If instead we find lots of mammals and reptiles (or creatures that are very different but about as advanced), that means the Great Filter is probably not before the rise of complex organisms, so the places the Great Filter might be hiding shrinks again. Worst of all would be if we find the dead ruins of many different advanced civilizations. This would imply that the real killer is yet to come, and we will almost certainly not survive it.
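The "shrinking list of hiding places" described above is essentially a Bayesian update over where the Filter sits. Here is a minimal sketch under invented assumptions: a handful of named stages, a uniform prior, and the crude rule that observing life at some stage on many worlds zeroes out the hypotheses at or before that stage:

```python
# Toy Bayesian update on the Great Filter's location (all modeling
# choices here are simplifications for illustration).
stages = ["abiogenesis", "eukaryotes", "complex life", "intelligence", "spacefaring"]
prior = {s: 1 / len(stages) for s in stages}

# Suppose our probes find simple eukaryotes on many worlds: strong evidence
# the Filter is not at or before that transition, so zero those out...
observed_up_to = stages.index("eukaryotes")
posterior = {s: (0.0 if i <= observed_up_to else p)
             for i, (s, p) in enumerate(prior.items())}

# ...and renormalize the remaining probability mass.
total = sum(posterior.values())
posterior = {s: p / total for s, p in posterior.items()}

print(posterior)
```

All the probability mass shifts onto the later transitions, the ones still ahead of us, which is exactly why finding life elsewhere would be bad news.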
As happy as many people would be to discover evidence of life in the universe, a case has been made that we should hope to find only barren rocks waiting for us in the final frontier. If not even simple bacteria evolve on most worlds, then there is still a chance that the Great Filter is behind us, and we can worry only about the new challenges ahead, which may or may not be Filters as great as the ones in the past.
If all this seems really abstract and out there, that's because it is. But I hope it is clear how this sort of thinking can help us interpret new data, make better guesses, form new hypotheses, etc. When dealing with stakes this high and information this limited, one must do the best they can with what's available.
Mitigation
What priority should we place on reducing existential risk and how can we do that? I don't know of anyone who thinks all our effort should go towards mitigating x-risks; there are lots of pressing issues which are not x-risks that are worth our attention, like abject poverty or geopolitical instability. But I feel comfortable saying we aren't doing nearly as much as we should be. Given the stakes and the fact that there probably won't be a second chance we are going to have to meet x-risks head on and be aggressively proactive in mitigating them.
Suppose we taboo 'aggressively proactive', what's left? Well the first step, as it so often is, will be just to get the right people to be aware of the problem (Bostrom 2002). Thankfully this is starting to be the case as more funding and brain power go into existential risk reduction. We have to get to a point where we are spending at least as much time, energy, and effort making new technology safe as we do making it more powerful. More international cooperation on these matters will be necessary, and there should be some sort of mechanism by which efforts to develop existentially-threatening technologies like super-virulent pathogens can be stopped. I don't like recommending this at all, but almost anything is preferable to extinction.
In the meantime both research that directly reduces x-risk (like NEO detection), as well as research that will help elucidate deep and foundational issues in x-risk (FHI and MIRI) should be encouraged. It's a stereotype that research papers always end with a call for more research, but as was pointed out by lukeprog in a talk he gave, there's more research done on lipstick than on friendly AI. This generalizes to x-risk more broadly, and represents the truly worrying state of our priorities.
Conclusion
Though I maintain we should be more fearful of what's to come, that should not obscure the fact that the human potential is vast and truly exciting. If the right steps are taken, we and our descendants will have a future better than most can even dream of. Life spans measured in eons could be spent learning and loving in ways our terrestrial languages don't even have words for yet. The vision of a post-human civilization flinging its trillions of descendants into the universe to light up the dark is tremendously inspiring. It's worth fighting for.
But we have much work ahead of us.