Related: Existential Risk, 9/26 is Petrov Day
Existential risks, which, in the words of Nick Bostrom, would "either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential," are a significant threat to the world as we know it. In fact, they may be one of the most pressing issues facing humanity today.
The likelihood of some risks may stay relatively constant over time. A basic view of asteroid impact, for example, is that there is a certain probability that a "killer asteroid" hits the Earth in any given year, and that this probability is more or less the same from year to year. This is what I refer to as a "stable risk."
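To make the "stable risk" idea concrete, here is a minimal sketch of how a constant per-year probability compounds over time; the annual figure used is a made-up placeholder, not an actual estimate of asteroid risk.

```python
def cumulative_risk(annual_probability: float, years: int) -> float:
    """Probability of at least one impact within `years` years, assuming the
    same independent probability every year (i.e., a "stable risk")."""
    return 1 - (1 - annual_probability) ** years

# With a hypothetical one-in-a-million annual chance, the risk over a century
# is roughly 100x the annual figure, because the yearly probability never changes.
print(cumulative_risk(1e-6, 100))  # ~1e-4
```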
However, the likelihood of other existential risks seems to fluctuate, often quite dramatically. Many of these "unstable risks" are related to human activity.
For instance, the likelihood of a nuclear war at sufficient scale to pose an existential threat seems contingent on various geopolitical factors that are difficult to predict in advance. That said, this likelihood has clearly changed throughout recent history. Nuclear war was obviously not an existential risk before nuclear weapons were invented, and it was fairly clearly a greater risk during the Cuban Missile Crisis than it is today.
Many of these unstable, human-created risks seem to depend largely on advanced technology. Potential risks like gray goo rely on theorized technologies that have yet to be developed (and indeed may never be developed). While this is good news for the present day, it also means that we have to stay vigilant for the emergence of new threats as human technology advances.
GiveWell's recent conversation with Carl Shulman contains some arguments as to why the risk of human extinction may be decreasing over time. However, it strikes me as perhaps more likely that the risk of human extinction is increasing over time—or at the very least becoming less stable—as technology increases the amount of power available to individuals and civilizations.
After all, the very concept of human-created unstable existential risks is a recent one. Even if Julius Caesar, Genghis Khan, or Queen Victoria had for some reason decided to destroy human civilization, it seems almost certain that they would have failed, even with all the resources of their empires.
The same cannot be said for Kennedy or Khrushchev.
Perhaps a better title would be "Known and Unknown Risks", since there is no inherent difference between "stable" and "unstable" risks.
For example, suppose a killer asteroid is due to impact the Earth in 2015, but is not detected until 2014. The estimated likelihood of extinction then rises dramatically at the moment of confirmed detection. If an emergency-built, nuclear-powered asteroid deflector subsequently knocks the asteroid off its collision course, the relevant x-risk drops back to baseline or lower.
Similarly, the likelihood of nuclear annihilation at any given point can be traced to certain events occurring (or becoming known to the risk estimator): the discovery of nuclear fission, the nuclear arms race, the invention of ICBMs, the shooting down of the U-2 spy plane, and so on.
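A minimal sketch of this view, treating the "risk" as the estimator's subjective probability and updating it with Bayes' rule when a detection is reported; all of the numbers are invented for illustration.

```python
# Invented numbers, purely illustrative.
prior = 1e-6                     # baseline estimate of a killer impact in a given year
p_detect_given_impact = 0.99     # chance the sky survey flags the asteroid if it is real
p_detect_given_no_impact = 1e-4  # chance of a false alarm when there is no such asteroid

# Bayes' rule: P(impact | detection)
posterior = (p_detect_given_impact * prior) / (
    p_detect_given_impact * prior + p_detect_given_no_impact * (1 - prior)
)

print(f"estimated risk before detection: {prior:.1e}")      # 1.0e-06
print(f"estimated risk after detection:  {posterior:.1e}")  # ~9.8e-03
```

The asteroid's trajectory never changed; only the estimator's information did, which is why the distinction looks more like "known" versus "unknown" than "stable" versus "unstable".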
I consider the difference to be extremely important to future decision-making, so I'm confused as to why you think this is the case. Can you explain further?