There are also some examples of anti-sleepwalk bias:
It is not quite clear to me whether you are here just talking about instances of sleepwalking, or whether you are also talking about a predictive error indicating anti-sleepwalk bias: i.e. cases where people wrongly predicted that the relevant actors would act, yet those actors sleepwalked into a disaster.
Also, my claim is not that sleepwalking never occurs, but that people on average seem to think that it happens more often than it actually does.
Related: Scott Adams' Law of Slow-Moving Disasters
"whenever humanity can see a slow-moving disaster coming, we find a way to avoid it. Let’s run through some examples:
Thomas Malthus famously predicted that the world would run out of food as the population grew. Instead, humans improved their farming technology.
When I was a kid, it was generally assumed that the world would be destroyed by a global nuclear war. The world has been close to nuclear disaster a few times, but so far we’ve avoided all-out nuclear war.
The world was supposed to run out of oil by now, but instead we keep finding new ways to extract it from the ground. The United States has unexpectedly become a net provider of energy.
The debt problem in the United States was supposed to destroy the economy. Instead, the deficit is shrinking, the stock market is surging, and the price of gold is plummeting."
The debt problem in the United States was supposed to destroy the economy. Instead, the deficit is shrinking
Heh. Notice a subtle substitution here :-) The deficit might be shrinking, but the debt keeps on growing.
Also, Scott Adams' list looks like a list of the hysterics that a variety of Nervous Nellies have thrown over the last half-century or so. The media love declaring an impending disaster, for obvious reasons. How many of them were actual slow-moving disasters in the making?
Thanks. My claim is somewhat different, though. Adams says that "whenever humanity can see a slow-moving disaster coming, we find a way to avoid it". This is an all-things-considered claim. My claim is rather that sleepwalk bias is a pro-tanto consideration indicating that we're too pessimistic about future disasters (perhaps especially slow-moving ones). I'm not claiming that we never sleepwalk into a disaster. Indeed, there might be stronger countervailing considerations, which if true would mean that all things considered we are too optimistic about existential risk.
Thomas Malthus famously predicted that the world would run out of food as the population grew. Instead, humans improved their farming technology.
I feel that's a bad framing of what Malthus said. He predicted the world would run out of food if the population grew without limit, which he said didn't have to happen, and which has not in fact happened. The Wikipedia article presents a more nuanced view:
Malthus argued that two types of checks hold population within resource limits: positive checks, which raise the death rate; and preventive ones, which lower the birth rate. The positive checks include hunger, disease and war; the preventive checks, abortion, birth control, prostitution, postponement of marriage and celibacy.
If humans today kept having as many children as they biologically could, and if no other 'negative factor' constrained population size, then hunger eventually would. Our food-production technology couldn't keep up if we literally filled all available space with humans; eventually there would be no space left to grow food.
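To make the arithmetic behind this concrete, here is a minimal sketch: unchecked exponential growth outruns any fixed amount of land. The starting population, doubling time, and land area below are rough illustrative assumptions, not demographic estimates.

```python
# Back-of-the-envelope sketch: exponential population growth vs. fixed land.
# All numbers are rough illustrative assumptions.

population = 8e9        # assumed starting population
doubling_time = 35      # assumed years per doubling at unconstrained fertility
land_area_m2 = 1.5e14   # rough habitable land area of Earth, in square metres

years = 0
while land_area_m2 / population > 1.0:  # stop at ~1 square metre per person
    population *= 2
    years += doubling_time

print(f"~{years} years until less than one square metre per person")
# With these assumptions: ~525 years - at which point no farming
# technology could help, since there is no land left to farm.
```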
When I was a kid, it was generally assumed that the world would be destroyed by a global nuclear war. The world has been close to nuclear disaster a few times, but so far we’ve avoided all-out nuclear war.
I don't think that "after decades of Cold War standoff and several very close brushes with a nuclear launch, the Soviet Union peacefully fell apart, greatly surprising everyone" counts as "we saw a disaster coming and found a way to avoid it".
I'm not sure. It seems that the way to test whether there is sleepwalk bias is to gather a representative sample of predictions/warnings and see how they turned out (a rough tally along these lines is sketched below). Yet this is pretty hard to do: I can think of examples (like those mentioned in the post) where the disaster was averted, but I can think of others where the disaster did happen despite warnings (I'd argue climate change fits into this category, for example).
This comment on Scott Adams' blog gives some suggestions:
World War II? Decimation of major fisheries? Genocide of North American natives? Sort of by definition, we haven't wiped out the whole human race yet, but we have endured some seriously bad disasters that people saw coming - and a certain amount of "that wasn't so bad" comes from the perspective of the survivors and the children who never knew how good things were before the disaster hit. Maybe loss of the chestnut trees isn't a big deal to people today, but if chestnuts were important to you - things certainly aren't looking so good in the North American chestnut department anymore.
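One way to picture the proposed exercise is a simple tally over the cases raised in this thread. The classifications below are the commenters' own disputed judgments, not a vetted dataset; the point is only the shape of the evaluation.

```python
# A minimal sketch of the proposed evaluation: collect warned-about disasters
# and tally how they resolved. Classifications are this thread's own
# (contested) judgments, not settled fact.

cases = {
    "Malthusian famine":   "averted",
    "all-out nuclear war": "averted",
    "peak oil":            "averted",
    "climate change":      "happened",
    "World War II":        "happened",
    "fishery collapse":    "happened",
}

averted = sum(1 for outcome in cases.values() if outcome == "averted")
print(f"averted: {averted} of {len(cases)} warned-about disasters")

# The hard part, as noted above, is representativeness: averted disasters are
# salient after the fact, while quiet successes and slow failures are easy to miss.
```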
the disaster did happen despite warnings (I'd argue climate change fits into this category, for example).
Disaster did happen?
It's ongoing with no sign of stopping. See coral reefs, the slowing of the North Atlantic circulation, the fact that the whole southern half of the American Great Plains will dry up and blow away starting in a few decades when the fossil aquifers (the pumping of which is the only thing keeping them from turning to desert at modern temperatures) dry up, etc.
Connected to: The Argument from Crisis and Pessimism Bias
When we predict the future, we often seem to underestimate the degree to which people will act to avoid adverse outcomes. Examples include Marx's prediction that the ruling classes would fail to act to avert a bloody revolution, predictions of environmental disasters and resource constraints, Y2K, etc. In most or all of these cases, there could have been a catastrophe if people had not acted with determination and ingenuity to prevent it. But when pressed, people often do act in just that way, and it seems that we often fail to take this into account when making predictions. In other words: too often we postulate that people will sleepwalk into a disaster. Call this sleepwalk bias.
What are the causes of sleepwalk bias? I think there are two primary causes:
Cognitive constraints. It is easier to just extrapolate existing trends than to engage in complicated reasoning about how people will act to prevent those trends from continuing.
Predictions as warnings. We often fail to distinguish between predictions in the pure sense (what I would bet will happen) and what we may term warnings (what we think will happen unless appropriate action is taken). Some of these pessimistic predictions could perhaps be interpreted as warnings - in which case they were not as inaccurate as they seemed.
However, you could also argue that they were actual predictions, and that they were more effective because they were predictions rather than warnings. For, more often than not, there will of course be lots of work to reduce the risk of disaster. This means that a warning saying "if no action is taken, there will be a disaster" is not necessarily very effective at changing behaviour - since we know for a fact that action will be taken. A prediction that there is a high probability of disaster all things considered is much more effective. Indeed, the fact that predictions are more effective than warnings might be the reason why people predict disasters rather than warn about them. Such predictions are self-defeating - which, you may argue, is precisely why people make them.
In practice, I think people often fail to distinguish between pure predictions and warnings. They slide between these interpretations. In any case, the effect of all this is for these "prediction-warnings" to seem too pessimistic qua pure predictions.
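As a toy model of this dynamic, suppose mitigation effort scales down a baseline risk, and that a stark unconditional prediction provokes more effort than a conditional warning. All the numbers below are invented for illustration.

```python
# A minimal sketch of a self-defeating prediction. All parameters are
# illustrative assumptions, not estimates from the post.

def realized_risk(baseline_risk: float, mitigation_effort: float) -> float:
    """Disaster probability once mitigation scales down the baseline."""
    return baseline_risk * (1.0 - mitigation_effort)

baseline_risk = 0.8  # assumed P(disaster | nobody acts)

# Assumption: a conditional warning ("disaster, unless we act") invites the
# reply "of course we will act", so it provokes less effort than a stark
# unconditional prediction ("disaster is likely, all things considered").
effort_after_warning = 0.5
effort_after_prediction = 0.9

print(round(realized_risk(baseline_risk, effort_after_warning), 3))     # 0.4
print(round(realized_risk(baseline_risk, effort_after_prediction), 3))  # 0.08

# Judged after the fact as a pure prediction, the 0.8 forecast looks badly
# miscalibrated against the 0.08 outcome - precisely because it worked.
```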
The upshot for existential risk is that those suffering from sleepwalk bias may be too pessimistic. They fail to appreciate the enormous efforts people will make to avoid an existential disaster.
Is sleepwalk bias common in the existential risk community? If so, that would be a pro tanto reason to be somewhat less worried about existential risk. Since it seems to be a common bias, it would be unsurprising if the existential risk community also suffered from it. On the other hand, its members have thought about these issues a lot, and may have been able to overcome it (or even to overcorrect for it).
Also, even if sleepwalk bias does indeed affect existential risk predictions, it would be dangerous to let this notion make us decrease our efforts to reduce existential risk, given the enormous stakes, and the present neglect of existential risk. If pessimistic predictions may be self-defeating, so may optimistic predictions.
[Added 24/4 2016] Under what circumstances can we expect actors to sleepwalk? And under what circumstances can we expect that people will expect them to sleepwalk, even though they won't? Here are some considerations, inspired by the comments below. Sleepwalking is presumably more likely if:
1, 2 and, in a way, 3, have to do with observing the disaster in time to act, whereas 4 and 5 have to do with ability to act once the problem is identified.
On the second question, my guess would be that people in general do not differentiate sufficiently between scenarios where sleepwalking is plausible and those where it is not (i.e. predicted sleepwalking has less variance than actual sleepwalking). This means that we sometimes probably underestimate the amount of sleepwalking, but more often, if my main argument is right, we overestimate it. An upshot of this is that it is important to carefully model how much sleepwalking there will be for each existential risk.
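As a toy illustration of the variance claim, compare some invented "actual" sleepwalking propensities across scenario types with a flat pessimistic forecast. All probabilities here are made up for illustration.

```python
# Toy illustration of "predicted sleepwalking has less variance than actual
# sleepwalking". All probabilities are invented for illustration.

import statistics

# Assumed true propensity to sleepwalk in four hypothetical scenario types.
actual = {
    "fast-striking, visible, cheap to fix": 0.05,
    "slow-moving, visible, cheap to fix":   0.15,
    "slow-moving, visible, costly to fix":  0.60,
    "slow-moving, hard to observe":         0.90,
}

# Assumption: forecasters apply a nearly uniform pessimistic prior instead
# of conditioning on each scenario's features.
predicted = {scenario: 0.55 for scenario in actual}

print("variance of actual rates:   ", statistics.pvariance(actual.values()))
print("variance of predicted rates:", statistics.pvariance(predicted.values()))

# With these numbers, forecasters overestimate sleepwalking where a response
# is easy and underestimate it where a response is hard - hence the
# suggestion to model each existential risk separately.
```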