Analysts of humanity's future sometimes use the word "doom" rather loosely. ("Doomsday" has the further problem that it privileges a particular time scale.) But doom sounds like something important; and when something is important, it's important to be clear about what it is.
Here are some properties, any one of which could qualify an event as doom:
1. Gigadeath: Billions of people, or some number roughly comparable to the number of people alive, die.
2. Human extinction: No humans survive afterward. (Or, modified: no human-like life survives, or no sentient life survives, or no intelligent life survives.)
3. Existential disaster: Some significant fraction, perhaps all, of the future's potential moral value is lost. (Coined by Nick Bostrom, who defines an existential risk as one "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential", which I interpret to mean the same thing.)
4. "Doomsday argument doomsday": The total number of observers (or observer-moments) in existence ends up being small – not much larger than the total that have existed in the past. This is what we should expect if we accept the Doomsday argument; a toy version of the arithmetic follows this list.
5. Great filter: Earth-originating life ends up never colonizing the stars or doing anything else widely visible. If all species are filtered out, this explains the Fermi paradox.
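To make property 4 concrete, here is a toy version of the Doomsday argument's arithmetic, with illustrative numbers of my own: compare a short future (about 200 billion humans ever) with a long one (about 200 trillion), give them equal priors, and update on a birth rank of roughly 100 billion using the Self-Sampling Assumption, under which your rank is uniform among all the humans who ever live.

```python
# Toy Doomsday-argument update (illustrative numbers only).
# Hypotheses: total number of humans who will ever live.
N_SHORT = 200e9    # "doom soon": ~200 billion humans total
N_LONG = 200e12    # "doom late": ~200 trillion humans total
PRIOR = {N_SHORT: 0.5, N_LONG: 0.5}

BIRTH_RANK = 100e9  # roughly how many humans have been born so far

def posterior(prior, rank):
    """Self-Sampling Assumption: P(rank | N) = 1/N for rank <= N, else 0."""
    likelihood = {n: (1.0 / n if rank <= n else 0.0) for n in prior}
    unnorm = {n: prior[n] * likelihood[n] for n in prior}
    z = sum(unnorm.values())
    return {n: p / z for n, p in unnorm.items()}

post = posterior(PRIOR, BIRTH_RANK)
print(f"P(doom soon | our birth rank) = {post[N_SHORT]:.4f}")  # ~0.999
print(f"P(doom late | our birth rank) = {post[N_LONG]:.4f}")   # ~0.001
```

On these made-up numbers a 50/50 prior shifts to roughly 1000:1 in favor of the short future; that shift is the "doomsday argument doomsday".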
Examples to illustrate that these properties are fundamentally different:
- If billions die (1), humanity may still recover and not go extinct (2), retain most of its potential future value (3), spawn many future observers (4), and colonize the stars (5). (E.g., nuclear war, but also aging.)
- If cockroaches or Klingon colonists build something even cooler afterward, human extinction (2) isn't an existential disaster (3), and conversely, the creation of an eternal dystopia could be an existential disaster (3) without involving human extinction (2).
- Human extinction (2) doesn't imply few future observers (4) if it happens only after a long and populous future, or if we're not alone; and few future observers (4) doesn't imply human extinction (2) if we all live forever childlessly. (It's harder to find an example of few observer-moments without human extinction, short of p-zombie infestations.)
- If we create an AI that converts the galaxy to paperclips, humans go extinct (2) and it's an existential disaster (3), but it isn't part of the great filter (5). (For an example where all intelligence goes extinct, implying few future observers (4) for any definition of "observer", consider physics disasters that expand at light speed.) If our true desire is to transcend inward, that's part of the great filter (5) without human extinction (2) or an existential disaster (3).
- If we leave our reference class of observers for a more exciting reference class, that's a doomsday argument doomsday (4) but not an existential disaster (3). The aforementioned eternal dystopia is an existential disaster (3) but implies many future observers (4).
- Finally, if space travel is impossible, that's a great filter (5) but compatible with many future observers (4).
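The reason a filter, wherever it sits, explains the silence is simple arithmetic: the expected number of visibly expansionist civilizations is the number of candidate systems times the product of the per-step probabilities, so one sufficiently hard step anywhere in the chain is enough. A toy sketch with invented numbers (nothing here is an estimate, just an illustration of the multiplication):

```python
# Toy "great filter" arithmetic (all numbers invented for illustration).
# If the per-candidate chance of ever becoming visibly expansionist is p,
# the expected number of visible civilizations among N candidates is N * p.
N_CANDIDATES = 1e11  # order-of-magnitude count of star systems in the galaxy

# Hypothetical per-step probabilities on the road to visible colonization.
steps = {
    "abiogenesis": 1e-6,
    "complex life": 1e-3,
    "intelligence": 1e-2,
    "technological civilization": 1e-1,
    "survives and expands visibly": 1e-2,
}

p_visible = 1.0
for probability in steps.values():
    p_visible *= probability

expected_visible = N_CANDIDATES * p_visible
print(f"P(candidate becomes visible) = {p_visible:.1e}")
print(f"Expected visible civilizations = {expected_visible:.3f}")
```

On these made-up numbers the expectation comes out around 0.001, consistent with an empty sky whichever step happens to be the hard one; for our own prospects, of course, it matters a great deal which one it is.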
As for where I stand: the SIA (the Self-Indication Assumption), the standard way of escaping the Doomsday argument, is false, so the argument is correct as far as it goes. My guess at the most likely filter is environmental degradation combined with AI running into problems.
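Why does the argument's fate hinge on the SIA? Because the SIA reweights hypotheses in proportion to how many observers they contain, which exactly cancels the doomsday shift. A minimal sketch, reusing the toy two-hypothesis model from above (the cancellation is a standard result; the numbers are mine):

```python
# SSA vs. SIA on the same toy two-hypothesis model as above.
# SSA: update a flat prior by the likelihood P(rank | N) = 1/N.
# SIA: first reweight the prior in proportion to N (more observers means
# more chances to exist at all), then apply the same 1/N likelihood.
N_SHORT, N_LONG = 200e9, 200e12
BIRTH_RANK = 100e9

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

flat_prior = {N_SHORT: 0.5, N_LONG: 0.5}
likelihood = {n: 1.0 / n for n in flat_prior}  # BIRTH_RANK <= N in both cases

ssa = normalize({n: flat_prior[n] * likelihood[n] for n in flat_prior})
sia_prior = normalize({n: flat_prior[n] * n for n in flat_prior})
sia = normalize({n: sia_prior[n] * likelihood[n] for n in sia_prior})

print("SSA posterior:", {n: round(p, 4) for n, p in ssa.items()})  # doomsday shift
print("SIA posterior:", {n: round(p, 4) for n, p in sia.items()})  # back to 0.5/0.5
```

If the SIA reweighting is rejected, as above, the shift on the first line stands and the Doomsday argument goes through.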