An important consideration not yet mentioned is that risk mitigation can be difficult to quantify, compared to disaster relief efforts where, if you save a house full of children, you become a hero. Coupled with the fact that people extrapolate the future from the past (which misses all existential risks), the incentive to do anything about it drops pretty much to nil.
- Hanson's cosmic locusts scenario
Googling found me this commentary
The result is that [interstellar] colonizers will tend to evolve towards something akin to a locust swarm, using all [resources] for colonization and nothing for anything else.
on Robin Hanson's "Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization".
It is one thing to say "Something must be done!" with a tone of righteous superiority. It is another thing entirely to specify what must be done. Many of these risks do not seem existential to me; some (like dystopia) should really be properly buried as ideas (Bostrom actually dismisses this idea in that paper). The ones that do seem realistically existential seem almost impossible to prepare against on any realistic scale - aliens, gray goo, uploads, and massive global warfare/conquest don't seem like they're going to be sensitive to many invest...
A lot of things on the list of risks are "Things I've read about in science fiction." That's no reason to dismiss them, of course, but it does make it easy to put them in the same mental category as other events in science fiction - "interesting but fanciful."
Imagine that the Pope claims that God has issued two new commandments:
Would you then argue that it's not his fault that most Catholics have dirty feet?
You cannot use the anthropic principle here. Unless you postulate some really weird distribution of risks unlike any other distribution of anything in the universe (and by the outside view you cannot do that), then if risks were likely we would have many near misses - either narrowly escaping the total destruction of humanity, or events that caused widespread but not complete destruction. We have neither.
Global warming and the Iraq war are tiny problems, vastly below any potential to threaten the survival of civilization. Totalitarian regimes have very short half-l...
Re: One could argue that I [and Bostrom, Rees, etc] are blowing the issue out of proportion.
Bostrom and Rees have both written books on the topic - and so presumably stand to gain financially from promoting the idea that existential risk is something to be concerned about.
It could also be argued that we are probably seeing a sampling bias here - of all the people on the planet, those with the highest estimate of DOOM are those most likely to alert others to the danger. So: their estimates may well be from the very top end of the distribution.
We're really only just now able to identify these risks and start posing theoretical solutions to be attempted. Our ability to recognize and realistically respond to these threats is catching up. I think saying that we lack good self-preservation mechanisms is a little unfair.
Re: One could argue that I [and Bostrom, Rees, etc] are blowing the issue out of proportion. We have survived so far, right? (Wrong, actually - anthropic considerations indicate that survival so far is not evidence that we will survive for a lot longer, and technological progress indicates that risks in the future are worse than risks in the past).
Existence is not evidence, but the absence of previous large-scale disasters should certainly count for something. We have no evidence of civilisation previously arising and then collapsing, which we would expect to see if civilisation were fragile.
Many disasters that would be sufficient to wreck civilization will probably leave at least some survivors. The inhabitants of Easter Island ended up pretty screwed, but they didn't go extinct. Similarly, the collapse of the Mayan civilization left plenty of people alive, many of whom ended up settling in a different area than the former center of civilization. If a major disaster occurs that doesn't manage to kill off basically all animal life on Earth, I suspect that there will still be at least a few people carrying on one hundred years later, even if they have to live as subsistence farmers or hunter-gatherers.
Libertarianism is the best available self-preservation mechanism. It is the social and memetic equivalent of genetic behavioral dispersion: members of many species behave slightly differently, which reduces the likelihood of a large percentage falling to the same cause.
Self-perpetuation in the strictest sense isn't always the point. The goal isn't to simply impose the same structure onto the future over and over again. It's continuity between structures that's important.
Wanting to live a long life isn't the same as having oneself frozen so that the same physical configuration of the body will persist endlessly. The collapse of ecosystems over a hundred-million-year timespan is not a failure, any more than our changing our minds constitutes a failure of self-preservation.
In many of the hypothetical "disasters", civilisation doesn't end - it is just that it is no longer led by humans. That seems a practically inevitable long-term outcome to me (humans are rather obviously too primitive and slug-like to go the distance).
The classification of such outcomes as "disasters" needs a serious rethink, IMO.
Re: technological progress indicates that risks in the future are worse than risks in the past
Technological progress has led to the current 6 billion backup copies of the human genome. Yet you argue it leads to increased risk? I do not follow your thinking. Surely technological progress has decreased existential risks, making civilisation's survival substantially more likely.
The prospect of a dangerous collection of existential risks and risks of major civilization-level catastrophes in the 21st century, combined with a distinct lack of agencies whose job it is to mitigate such risks, indicates that the world might be in something of an emergency at the moment. Firstly, what do we mean by risks? Well, Bostrom has a paper on existential risks, and he lists the following risks as being "most likely":
To which I would add various possibilities for major civilization-level disasters that aren't existential risks, such as milder versions of all of the above, or the following:
This collection is daunting, especially given that the human race doesn't have any official agency dedicated to mitigating risks to its own medium-to-long-term survival. We face a long list of challenges, yet we aren't even formally trying to mitigate many of them in advance. In many past cases, mitigation of risks occurred on a last-minute, ad-hoc basis, such as individuals in the cold war deciding not to initiate a nuclear exchange, particularly during the Cuban missile crisis.
So, a small group of people have realized that the likely outcome of a large and dangerous collection of risks combined with a haphazard, informal methodology for dealing with risks (driven by the efforts of individuals, charities and public opinion) is that one of these potential risks will actually be realized - killing many or all of us or radically reducing our quality of life. This coming disaster is ultimately not the result of any one particular risk, but the result of the lack of a powerful defence against risks.
One could argue that I [and Bostrom, Rees, etc] are blowing the issue out of proportion. We have survived so far, right? (Wrong, actually - anthropic considerations indicate that survival so far is not evidence that we will survive for a lot longer, and technological progress indicates that risks in the future are worse than risks in the past). Major civilizational disasters have already happened many, many times over.
Most ecosystems that ever existed were wiped out by natural means, almost all species that have ever existed have gone extinct, and without human intervention most existing ecosystems will probably be wiped out within a 100-million-year timescale. Most civilizations that ever existed collapsed. Some went really badly wrong, like communist Russia. Complex, homeostatic objects that don't have extremely effective self-preservation systems empirically tend to get wiped out by the churning of the universe.
Our western civilization lacks an effective long-term (order of 50 years plus) self-preservation system. Hence we should reasonably expect either to build one or to get wiped out, because we observe that complex systems which seem similar to today's societies - such as past societies - collapsed.
And even though our society does have short-term survival mechanisms such as governments and philanthropists, they often behave in superbly irrational, myopic or late-responding ways. The response to the global warming problem (late, weak, still failing to overcome co-ordination problems) and the invasion of Iraq (plainly irrational) are cases in point from recent history, and there are numerous examples from the past, such as close calls in the cold war and the spectacular chain of failures that led from World War I to World War II and the rise of Hitler.
This article could be summarized as follows:
The systems we have for preserving the values and existence of our western society, and of the human race as a whole, are weak, and the challenges of the 21st-22nd centuries seem likely to overwhelm them.
I originally wanted to write an article about ways to mitigate existential risks and major civilization-level catastrophes, but I decided to first establish that there are actually such things as serious existential risks and major civilization-level catastrophes, and that we haven't got them handled yet. My next post will be about ways to mitigate existential risks.