
Our society lacks good self-preservation mechanisms

12 [deleted] 12 July 2009 09:26AM

The prospect of a dangerous collection of existential risks and risks of major civilization-level catastrophes in the 21st century, combined with a distinct lack of agencies whose job it is to mitigate such risks, suggests that the world may be in something of an emergency at the moment. Firstly, what do we mean by risks? Well, Bostrom has a paper on existential risks, and he lists the following risks as being "most likely":

  • Deliberate misuse of nanotechnology,
  • Nuclear holocaust,
  • Badly programmed superintelligence,
  • Genetically engineered biological agent,
  • Accidental misuse of nanotechnology (“gray goo”),
  • Physics disasters,
  • Naturally occurring disease,
  • Asteroid or comet impact,
  • Runaway global warming,
  • Resource depletion or ecological destruction,
  • Misguided world government or another static social equilibrium stops technological progress,
  • “Dysgenic” pressures (We might evolve into a less brainy but more fertile species, Homo philoprogenitus, “lover of many offspring”)
  • Our potential or even our core values are eroded by evolutionary development,
  • Technological arrest,
  • Take-over by a transcending upload,
  • Flawed superintelligence,
  • [Stable] Repressive totalitarian global regime,
  • Hanson's cosmic locusts scenario [Added by author]

To which I would add various possibilities for major civilization-level disasters that aren't existential risks, such as milder versions of all of the above, or the following:

  • convergence of computer viruses and cults/religions,
  • advanced personal weapons or surveillance devices such as nanotech, micro-UAV bugs (cyberpunk dystopia),
  • erosion of privacy and freedom through massively oppressive government,
  • highly effective meta-religions such as Scientology or a much more virulent version of modern evangelical Christianity

This collection is daunting, especially given that the human race doesn't have any official agency dedicated to mitigating risks to its own medium-to-long-term survival. We face a long list of challenges, yet we aren't even formally trying to mitigate many of them in advance; in many past cases, mitigation of risks occurred on a last-minute, ad-hoc basis, such as individuals during the Cold War making the decision not to initiate a nuclear exchange, particularly during the Cuban missile crisis.

So, a small group of people have realized that the likely outcome of a large and dangerous collection of risks combined with a haphazard, informal methodology for dealing with risks (driven by the efforts of individuals, charities and public opinion) is that one of these potential risks will actually be realized - killing many or all of us or radically reducing our quality of life. This coming disaster is ultimately not the result of any one particular risk, but the result of the lack of a powerful defence against risks.

One could argue that I [and Bostrom, Rees, etc.] am blowing the issue out of proportion. We have survived so far, right? (Wrong, actually - anthropic considerations indicate that survival so far is not evidence that we will survive for a lot longer, and technological progress indicates that risks in the future are worse than risks in the past.) Major civilizational disasters have already happened many, many times over.

Most ecosystems that ever existed were wiped out by natural means, almost all species that have ever existed have gone extinct, and without human intervention most existing ecosystems will probably be wiped out on a 100-million-year timescale. Most civilizations that ever existed collapsed. Some went really badly wrong, like communist Russia. Complex, homeostatic objects that don't have extremely effective self-preservation systems empirically tend to get wiped out by the churning of the universe.

Our western civilization lacks an effective long-term (order of 50 years plus) self-preservation system. Hence we should reasonably expect either to build one or to get wiped out, because we observe that complex systems which seem similar to today's societies - namely past societies - collapsed.

And even though our society does have short-term survival mechanisms such as governments and philanthropists, they often behave in superbly irrational, myopic or late-responding ways. The responses to the global warming problem (late, weak, still failing to overcome co-ordination problems) and the invasion of Iraq (plainly irrational) are cases in point from recent history, and there are numerous examples from the past, such as close calls in the cold war, and the spectacular chain of failures that led from World War I to World War II and the rise of Hitler.

This article could be summarized as follows:

The systems we have for preserving the values and existence of our western society, and of the human race as a whole, are weak, and the challenges of the 21st-22nd centuries seem likely to overwhelm them.

I originally wanted to write an article about ways to mitigate existential risks and major civilization-level catastrophes, but I decided to first establish that there are actually such things as serious existential risks and major civilization-level catastrophes, and that we haven't got them handled yet. My next post will be about ways to mitigate existential risks.


Comments (105)

Comment author: AndrewH 12 July 2009 05:24:31PM 5 points [-]

An important consideration not yet mentioned is that risk mitigation can be difficult to quantify, compared to disaster relief efforts where, if you save a house full of children, you become a hero. Coupled with the fact that people extrapolate the future using the past (which misses all existential risks), the incentive to do anything about it drops pretty much to nil.

Comment author: Vladimir_Nesov 12 July 2009 03:33:01PM 4 points [-]

Created a Wiki page: Existential risk.

Comment author: Bo102010 12 July 2009 02:53:05PM 2 points [-]

A lot of things on the list of risks are "Things I've read about in science fiction." That's no reason to dismiss them, of course, but it does make it easy to put them in the same mental category as other events in science fiction - "interesting but fanciful."

Comment author: Alicorn 12 July 2009 03:29:50PM *  2 points [-]

Beware generalizing from fictional evidence. "Dysgenic pressures" in particular don't seem like they're actually worth fearing in reality, given the Flynn effect, no matter how many times you've seen Idiocracy.

Comment author: Z_M_Davis 12 July 2009 06:56:56PM *  7 points [-]

Also beware that reversed stupidity is not intelligence. The existence of the Flynn effect does not imply that "dysgenic" or "eugenic" (scare quotes because there's no value-neutral way to say what counts as an improvement) trends aren't worth thinking about. Suppose hypothetically that genetic trends were leading to lowered average potential intelligence, but that this effect was exactly cancelled by an environmental Flynn effect. This is only a win if you think the status quo is optimal; if you think that more intelligence is better within the range we can apprehend, then IQ not rising fast enough is sad for the same reason that falling IQ would be sad. Cf. the reversal test.

Comment deleted 12 July 2009 05:39:08PM *  [-]
Comment author: Z_M_Davis 12 July 2009 07:06:46PM *  3 points [-]

The fact that reproduction in our society is now only caused by wanting to reproduce is disturbing

It's disturbing that people have more control over their lives? Why? Because it will result in slightly lower average IQ in the medium term? Because it means our descendants will be monomaniacal fitness-maximizers rather than eudaimonic agents in the long-long term?

Comment author: steven0461 12 July 2009 07:30:13PM *  8 points [-]

Parents don't just pass their genes on to their children, they pass on some of their ideas. "Dysmemics" seems a bigger problem than "dysgenics".

Comment author: Alicorn 12 July 2009 06:12:40PM 2 points [-]

This fact you speak of is false, unless by "wanting to reproduce" you actually mean "sexually active, either voluntarily or not, and either inclined to reproduce or poorly informed about or unable to access birth control or unlucky and not so unwilling to reproduce that one will get around any and every obstacle to abortion including psychological attacks, physical prevention, massive social stigma, expense, and physical and emotional pain". Or for the male version, "careless" would do.

Comment deleted 12 July 2009 06:19:32PM *  [-]
Comment author: Alicorn 12 July 2009 06:22:43PM 1 point [-]

...I'm saying that people can and do reproduce without wanting to, even in our society. It is simply not the case that reproduction is caused only by wanting to reproduce.

Comment deleted 12 July 2009 06:38:10PM [-]
Comment author: Alicorn 12 July 2009 06:56:47PM 2 points [-]

I think that's a completely inappropriate classification of the catchall "not wanting to get an abortion". It's rarely medically necessary; it's painful and expensive; even pro-choicers have qualms about it sometimes; and it carries enough of a stigma that it can be dangerous for reasons beyond medical complications. There are so many reasons not to have an abortion that it's not at all difficult to imagine a woman whose desire not to reproduce is thereby outweighed, even if you're dismissing as stupid all of the possible religious objections.

Comment deleted 12 July 2009 09:53:45PM *  [-]
Comment author: Aurini 17 July 2009 02:12:54AM 0 points [-]

Note that optional reproduction doesn't have to be 100% true for Roko's premise to hold. Even if 75% of children are 'oops babies', the other 25% will have significant effects on the gene distribution (or rather, the vast multitude that weren't born because of people exercising choice will have an effect).

Comment author: thomblake 12 July 2009 10:05:24PM 0 points [-]

I'm not sure what you mean here. How is it different now from any other period in history, and what effect do you think that'll have?

Comment deleted 12 July 2009 10:08:52PM [-]
Comment author: MBlume 14 July 2009 12:28:13AM 4 points [-]

To whoever voted this comment down: did your brain provide a particular reason that it was unnecessary to worry about catholics taking over the world by having babies, or did it just output a feeling that it was somehow wrong -- maybe even racist -- to worry about such things?

Comment author: Alicorn 14 July 2009 12:48:07AM 1 point [-]

(Not the downvoter.) Racist? Catholics are not a race.

Comment author: Furcas 14 July 2009 12:56:21AM 3 points [-]

Some minds tend to jump to the "Racist!" accusation every time they hear a disparaging comment about a group of people, regardless of what those people have in common.

Comment author: MBlume 14 July 2009 12:54:31AM *  2 points [-]

I wasn't implying that it was a sensible feeling -- I was just describing a sort of internal flinch.

ETA: Here in California, it is to some extent a race issue. We have a large and growing Hispanic population, who are very strongly catholic. If that population continues to grow, without moderation of their religious leanings, it could significantly impact the politics of the state.

Comment author: orthonormal 14 July 2009 01:18:45AM 1 point [-]

The very fact that you're denying that it's racist is EVEN MORE RACIST!

P.S. I make no apologies for my recent trend in comment quality...

Comment author: Jack 13 July 2009 05:48:39PM 1 point [-]

Catholics really aren't that bad.

Comment deleted 13 July 2009 11:06:41PM [-]
Comment author: thomblake 18 July 2009 08:32:40PM -1 points [-]

It hardly seems like the Pope can be blamed for AIDS-related deaths based on people not using condoms. Given that he advocates "Use abstinence and don't use condoms", and the effectiveness of abstinence is not increased by using condoms, following his advice will not lead to more AIDS. If people follow the advice "Don't use abstinence and don't use condoms" then they're not following his advice and I don't see why he should be blamed for it.

If not being abstinent was a live option for Catholics, then I'm sure condoms would be reconsidered. However, if people are already going to disregard his advice regarding abstinence, I don't see why he should have to give them more advice about what to do in that case.

Comment author: Furcas 18 July 2009 08:44:46PM *  8 points [-]

Imagine that the Pope claims that God has issued two new commandments:

  • Walk on your hands at all times.
  • Never wear shoes.

Would you then argue that it's not his fault that most Catholics have dirty feet?

Comment author: thomblake 18 July 2009 08:49:19PM 1 point [-]

Indeed I would. I would in that case make fun of Catholics for following such a silly religion, and happily tell people who didn't follow one or both of those that they're being bad Catholics. But for anyone who follows the walk-on-your-hands-all-the-time religion, it's certainly their own fault if they're not up to the task.

Comment author: Alicorn 18 July 2009 08:37:35PM 1 point [-]

People who follow, or try to follow, the whole of the Pope's advice can work to reduce the availability and social acceptability of condoms, which will reduce condom use among people who may or may not care what the Pope has to say. Additionally, since abstinence is apparently very difficult for a lot of people, trying to be abstinent will not reliably result in abstinence; I suspect the number of people who go "well, I can't seem to manage abstinence, but at least I'm not using condoms! That part's easy!" is depressingly high.

Comment author: thomblake 18 July 2009 08:42:31PM *  1 point [-]

"well, I can't seem to manage abstinence, but at least I'm not using condoms! That part's easy!"

I don't see why any such person would continue calling himself a Catholic in that situation. Clearly the options there are 'not a Catholic' or 'Catholic who believes he's going to Hell'. And non-Catholics shouldn't listen to the Pope at all.

It might be worth saying that Catholicism is somehow harmful to society, but it's hardly a fault of the Pope that he informs people about Catholic doctrine.

Comment author: timtyler 15 July 2009 07:19:04AM 0 points [-]

That's more than balanced by extra births - if the example of Catholics taking over the world by having more children on average has anything to it. The Pope's strategy encourages risk - but the overall effect is positive in terms of helping Catholicism spread. With 1 billion members it must be doing something right.

Comment deleted 15 July 2009 07:21:49AM [-]
Comment author: timtyler 15 July 2009 07:46:04AM *  -1 points [-]

Well, there's the cryonics death cult. Those guys think that, if you perform expensive rituals over your dead body, it might live forever in paradise.

It's like the Egyptian pharaohs have been reincarnated ;-)

Comment author: nerzhin 14 July 2009 09:29:57PM 0 points [-]

millions of deaths in Africa

According to this Wikipedia page, there were maybe 2.4 million deaths due to AIDS in the whole world in 2007. I doubt the Pope was responsible for most of them.

Comment author: infotropism 14 July 2009 09:33:54PM 5 points [-]

How many deaths, directly or indirectly derived from the Pope's prohibition, would be enough for his influence to be considered negative in this case?

Comment author: timtyler 15 July 2009 07:07:30AM 0 points [-]

Catholics have a religion that helps them reproduce in the modern world. They may well be more valuable in nature's eyes than the screw-ups who allow their reproductive potential to be sabotaged by their unfamiliar environment. However, Catholicism is not the only system of thought that promotes family values in modern times. See the Amish.

Comment author: Psychohistorian 14 July 2009 01:54:00AM 1 point [-]

I actually find the inclusion of this as a "most likely" scenario mildly offensive, since Bostrom explicitly says he finds it seriously improbable:

"In any case, the time-scale for human natural genetic evolution seems much too grand for such developments to have any significant effect before other developments will have made the issue moot."

There's something... unpalatable about intellectuals bemoaning the "lesser folk" breeding the species into oblivion, particularly when it seems to contradict both evidence and theory.

Comment author: infotropism 12 July 2009 04:38:54PM 1 point [-]

Though that doesn't immediately make it non-fictional evidence, dysgenic pressure (as well as the Flynn effect and the possibility of genetic engineering as possible counters) is also briefly mentioned in Nick Bostrom's fundamental paper Existential Risks - 5.3.

Comment author: arundelo 12 July 2009 04:11:30PM 3 points [-]
  • Hanson's cosmic locusts scenario

Googling found me this commentary

The result is that [interstellar] colonizers will tend to evolve towards something akin to a locust swarm, using all [resources] for colonization and nothing for anything else.

on Robin Hanson's "Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization".

Comment author: timtyler 14 July 2009 08:37:38PM 0 points [-]

I sometimes wonder why people think this outcome is bad. It is what we will probably get - unless we manage to eliminate competition and overrule natural selection. In that case, we will still probably get something very similar - since expansion is probably the best way to defend yourself against aliens.

Comment author: infotropism 14 July 2009 09:00:24PM 2 points [-]

I sometimes wonder why people think this outcome is bad.

Mind if I ask - as opposed to considering it good?

Comment author: timtyler 14 July 2009 11:03:58PM 1 point [-]

Indeed. Successfully colonising space is conventionally part of our Glorious Future.

Comment author: Psychohistorian 12 July 2009 10:41:26PM *  2 points [-]

It is one thing to say "Something must be done!" with a tone of righteous superiority. It is another thing entirely to specify what must be done. Many of these risks do not seem existential to me; some (like dystopia) should really be properly buried as ideas (Bostrom actually dismisses this idea in that paper). The ones that do seem realistically existential seem almost impossible to prepare against on any realistic scale - aliens, gray goo, uploads, and massive global warfare/conquest don't seem like they're going to be sensitive to many investments we make now, since they're either too small and specific or too large and non-specific to address generally.

You also forgot to list the biggest problem: "Something Unforeseen."

It's not terribly constructive to say we lack good self-preservation mechanisms without at least hand-waving what good self preservation mechanisms might look like and how we could theoretically try to start having them. The mere fact that we could all die at any moment is not much of a cause for alarm if there's really nothing we can do about it.

Edit: My general point is clarified in a response to a response to this post.

Comment deleted 13 July 2009 09:49:18AM *  [-]
Comment author: Psychohistorian 13 July 2009 07:26:44PM *  3 points [-]

Our western civilization lacks an effective long-term (order of 50 years plus) self-preservation system. Hence we should reasonably expect to either build one, or get wiped out.

This is a huge claim. You're claiming first of all that the odds of succumbing to a truly existential event are higher than not. You don't (IMO) provide evidence to support this - you provide some evidence that we may have had really catastrophic events in the past, but, again, only 100% is existential, and you do not finish off your examples - "Hitler could have won" and "Hitler could have won and created a repressive regime that lasted for the remainder of human history" are two very different claims, and the former is not existential. Second, you claim that if we take some steps, we can expect such events not to happen - it is because we lack an "effective long-term preservation system" that we can expect to be destroyed completely.

Thus, you have, to my understanding, made two claims: one about the likelihood of existential events, and one about the likelihood of us being able to mitigate them. Again, to my evaluation, you have provided compelling evidence for neither of these conclusions; indeed you've provided virtually no evidence for either of these conclusion (probability, not possibility). That is the root of my criticism of not providing solutions: you claim solutions are possible, desirable, and effective, and you do not provide any evidence to support this claim.

Thus, my criticism of your tone as "righteous" is because you seem to be making a strong, "deep" claim without providing adequate supporting evidence or argument. It is not a criticism of your word choice. I have absolutely no problem with people posting about problems that occur to them that they don't know how to solve. I do have a problem with people making strong claims with a definitive tone without providing adequate supporting evidence.

I admit this may all hinge on a disagreement in definition over "existential." I take existential to require true obliteration. Gray goo would reach this, as would the-simulation-loses-power or every-atom-splits or humanity-is-enslaved-by-something-forever. "Nuclear holocaust kills billions and it takes ten thousand years to recover" does not count in my mind, as it is not terminal. Similarly, "Hitler reigns for ten thousand years" is also non-existential (at least for humanity as a whole); if recovery occurs, even after a fairly large gap, it does not seem to count as existential. This view is consistent with Bostrom's definition in the linked paper. With a weaker definition of existential, it is quite possible that there is no disagreement here, in which case I have the (smaller) criticism that you should have clarified this at the beginning.

Comment deleted 13 July 2009 11:34:19PM [-]
Comment author: timtyler 14 July 2009 11:18:44PM *  1 point [-]

If we had actually had 6 "near misses", then that would be pertinent evidence. In which case, maybe they should be listed, their probabilities and potential impact estimated.

Comment author: Psychohistorian 14 July 2009 01:42:32AM *  1 point [-]

truly existential event

I now get what led to this confusion. You've referred to both "existential" and "major civilizational-level catastrophes" without much effort to distinguish between the two, though they differ in both extent and probability by a few orders of magnitude. I assumed from the Bostrom paper citation and the long list of existential threats that the article in general was about existential risks, which, on a rereading, it isn't.

My concern over showing that something could reasonably be done remains, but you do provide appropriate evidence regarding civilization-level catastrophes. It might be worth a sentence or two clarifying that your concern is civ-level or greater, rather than specifically existential, though I may be the only one who misread the focus here.

Comment deleted 13 July 2009 12:07:50AM [-]
Comment author: Eliezer_Yudkowsky 13 July 2009 01:06:08AM 5 points [-]

I don't think that's the main thrust of his complaint. Lack of specifics is the main problem. If you say "Something must be done!" but not what, then the tone of the writing is moot, so far as righteousness-detectors go.

Comment author: taw 12 July 2009 10:31:04AM *  1 point [-]

You cannot use the anthropic principle here. Unless you postulate some really weird distribution of risks unlike any other distribution of anything in the universe (and by the outside view you cannot do that), if risks were likely we would have had many near misses - either barely escaping total destruction of humanity, or events that caused widespread but not complete destruction. We have had neither.

Global warming and the Iraq war are tiny problems, vastly below any potential to threaten the survival of civilization. Totalitarian regimes have very short half-lives. The threat of the Cuban missile crisis seems vastly overstated, especially considering how many proxy wars the United States and the Soviet Union fought without getting anywhere close to using nuclear weapons, and how nothing indicated an intention by either party to resort to nuclear attack.

Communist Russia didn't go that badly by historical standards - the standard of living when it ended was a lot higher than when it started, and if it shows anything, it's how remarkably resilient civilization is, restoring itself so smoothly after Stalin in such a hostile environment. You see the same pattern in China and so many other totalitarian regimes worldwide - how they get softer and more civilized given time, peace, and economic prosperity. We seem very well protected here.

Comment author: CarlShulman 12 July 2009 04:33:42PM 4 points [-]

Agreed that we have evidence about the distribution of risks for asteroids, nuclear war, etc, based on historical data. But we also have empirical experience with disasters that follow power laws, so that most of the expected damage comes from the most extreme disasters.
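To illustrate the power-law point, here is a minimal simulation (the tail exponent is an assumption chosen for illustration, not a figure from the comment): with a heavy-tailed damage distribution, a tiny fraction of the worst events accounts for most of the total damage.

```python
import random

random.seed(0)
alpha = 1.1  # assumed tail exponent; smaller alpha means a heavier tail
n = 100_000

# Draw Pareto(alpha)-distributed "disaster damages" using the stdlib helper
damages = sorted(random.paretovariate(alpha) for _ in range(n))

total = sum(damages)
worst_1pct = sum(damages[-n // 100:])  # the 1,000 largest events
print(f"worst 1% of events cause {worst_1pct / total:.0%} of total damage")
```

With an exponent this close to 1, the worst 1% of simulated disasters typically accounts for well over half of all damage, which is why the historical record of ordinary disasters can badly understate expected losses from the extremes.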


Comment author: John_Maxwell_IV 12 July 2009 04:28:07PM 2 points [-]

It seems reasonable to me that the distribution of risks will change as technology improves. And technology is improving faster and faster.

Comment author: infotropism 12 July 2009 11:13:04AM 1 point [-]

Well, there possibly was the Toba supereruption, which would fit being a near miss.

Arguably, we were very close too during the cold war, and several times over - not total extinction, but a nuclear war would've left us very crippled.

Comment author: taw 13 July 2009 09:21:08AM 0 points [-]

We're much safer against even very rare natural disasters like Toba (and others that act through climate) than we were historically. The kind of disaster that could wipe us out gets less and less probable every decade. I'm not even sure the kind of asteroid that wiped out the dinosaurs would be enough to wipe out humanity now, given a few years of prior warning (well, it would kill most people, but that's not even close to getting rid of the entirety of humanity).

I seriously dispute the idea that we were very close to nuclear war. I even more seriously dispute the idea that it would have had any long-term effects on human civilization if it had happened. Even in the middle of WW2, people's life expectancy was far higher than historically typical, violent death rates were far lower, and I'd even take a guess that average personal freedoms compared quite well to the historical record.

Comment author: infotropism 14 July 2009 07:08:41PM 2 points [-]

Whether those catastrophes could destroy present-day humanity wasn't the point; the point was whether near misses in potential extinction events have ever occurred in our past.

Consider it this way: under your assumption that our world is more robust nowadays, what would count as a near miss today would certainly have wiped out the frailer humanity of back then; conversely, what counted as a near miss back then would not be nearly that bad nowadays. Constraining the definition of a "near miss" in that way makes it impossible to show any such near miss in our history. That is at best one step away from saying we're actually safe and shouldn't worry all that much about existential risks.

Speaking of which, arguing over the definition of an existential risk, and from that concluding that catastrophes such as a nuclear war aren't existential risks, blurs the point. Let us rephrase the question: how much would you want to avoid a nuclear war, or a supereruption, or an asteroid strike? How much effort, time, and money should we put into the cause of avoiding such catastrophes?

While it is true that a catastrophe that doesn't wipe out humanity forever isn't as bad as one that does, such an event can still be awfully bad, and deserving of our attention and efforts to prevent it. We're talking billions of human lives lost or spent in awful conditions for decades, centuries, millennia, etc. If that is no cause for serious worry, pray tell what is?

Comment author: taw 14 July 2009 09:46:20PM 0 points [-]

Total extinction has expected value that's pretty much indistinguishable from minus infinity.

Global thermonuclear war? Oh sure, it would kill some people, but the expected number of deaths and amount of suffering from, say, malaria or lack of access to fresh water in the next 100 years is far higher than the expected death and suffering from a global thermonuclear war in the next 100 years.

Even our most recent total war, WW2, killed a laughably small portion of the fighting population relative to historical norms. There's no reason to suspect WW3 would be any different, so the number of deaths would most likely be rather limited. And as countries with low birth rates (that is, pretty much all countries today) have a historical record of trying very hard not to get into any war that could endanger their populations (as opposed to sending bombs to other countries and such), the chance of such a war is tiny.

So let's say a 1% chance of global thermonuclear war killing 100 million people in the next 100 years (an expected 1 million deaths) versus 1 million deaths a year from malaria, and 2 million from diarrhea. I think we have our priorities wrong if we care much about global thermonuclear war.

(Of course, people might disagree with these estimates, in which case they would see global thermonuclear war as a more important issue than I do.)
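The back-of-envelope comparison above can be made explicit. A sketch using the comment's own assumed figures (they are the commenter's guesses, not established data):

```python
# Assumed figures from the comment, over a 100-year horizon
p_war = 0.01                   # assumed chance of global thermonuclear war
deaths_if_war = 100_000_000    # assumed death toll if it happens
malaria_per_year = 1_000_000   # assumed annual malaria deaths
diarrhea_per_year = 2_000_000  # assumed annual diarrhea deaths

expected_war = p_war * deaths_if_war                              # 1 million
expected_disease = (malaria_per_year + diarrhea_per_year) * 100   # 300 million

print(f"expected war deaths: {expected_war:,.0f}")
print(f"expected disease deaths: {expected_disease:,}")
print(f"ratio: {expected_disease / expected_war:.0f}x")
```

Under these assumptions the expected disease toll is hundreds of times larger, which is the whole force of the argument; the later replies dispute the inputs (fallout, nuclear winter, economic collapse), not the arithmetic.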

Comment author: infotropism 14 July 2009 10:15:49PM 0 points [-]

Under those assumptions your estimates are sound, really. However, should we only count the direct deaths incurred as a consequence of a direct nuclear strike? Or should we also take into account the nuclear fallout, radiation, nuclear winter, ecosystems crashing down, massive economic and infrastructure disruption, etc.? How much worse does it get if we take such considerations into account?

Aside from those considerations, I really agree with your idea of getting our priorities right, based on numbers. That's exactly the reason why I'd advocate antiagathic research above a lot of other causes, which actually kill fewer people and cause less suffering than aging itself does, but not everyone seems to agree with that.

Comment author: taw 14 July 2009 11:51:03PM 0 points [-]

Right now, 350–500 million people a year suffer from malaria, billions live in places with massive economic and infrastructure disruption, and with health prospects most likely worse than a first-world person would have in a post-thermonuclear-war environment.

I doubt the fallout would be that bad in the long term. Sure, there would be higher cancer rates, but people would abandon the most irradiated places, take some precautions, and the overall loss of healthy lifespan would most likely be of the same order of magnitude as a couple of decades' progress in medicine. For all I know, people after a potential 2100 thermonuclear war might live longer and healthier lives than us.

Comment deleted 15 July 2009 12:23:09AM *  [-]
Comment author: taw 15 July 2009 02:33:56AM 0 points [-]

By 2100 hopefully we won't have the third world any more.

Swapping nuclear warfare for the end of third-world poverty would be a good exchange for most people. And nuclear warfare is a remote possibility, while third-world poverty is real and here with us now.

Also notice how much better life is in Hiroshima compared to the Congo.

Comment author: infotropism 15 July 2009 09:41:12AM 0 points [-]

What should be realized here, however, is that Hiroshima could become a relatively ok place because it could receive a huge amount of help for being part of the country with such a high GDP.

Hiroshima didn't magically get better. A large-scale nuclear war would destroy our economy, and thus our capability to respond and patch the damage that way. For that matter, I'm not even sure our undisturbed response systems would be able to deal with more than a few nuked cities. Also please consider that Hiroshima was hit by an 18 kt bomb, which is nothing like the average 400–500 kt warheads we have now.

Comment author: Douglas_Knight 12 July 2009 08:41:02PM 0 points [-]

I think you have some typos in your last paragraph that may reverse some of the meaning. So I can't tell if I agree with you. In particular, I'm concerned with the conjunction of totalitarian issues with standard of living.

It's certainly true that China and the USSR give examples of peaceful rollback of totalitarian regimes (partial in China and complete in the USSR). The USSR looks to me to have had a continually increasing standard of living, including under Stalin, with the lone exception of the war. So the totalitarian aspects of a regime may be rather independent of wealth.

Comment deleted 12 July 2009 04:56:06PM *  [-]
Comment author: Eliezer_Yudkowsky 12 July 2009 06:40:13PM 4 points [-]

The Toba supereruption and its genetic bottleneck are probably the strongest example of a near-miss.

Comment author: timtyler 14 July 2009 08:19:55PM *  0 points [-]

The genetic bottleneck around the time of the eruption was not as "near" as all that - in part since there were Neanderthals around at that time as an additional backup mechanism, complementing the surviving humans. Plus, of course, Homo floresiensis! ;-)

http://en.wikipedia.org/wiki/Toba_catastrophe_theory estimates we got down to the last 5,000-10,000 backup copies of the human genome.

Figures from before the eruption appear to have not been dramatically higher:

Scientists from the University of Utah in Salt Lake City in the U.S. have calculated that 1.2 million years ago, at a time when our ancestors were spreading through Africa, Europe and Asia, there were probably only around 18,500 individuals capable of breeding (and no more than 26,000).

There just weren't that many hominids around at the time.

Comment deleted 12 July 2009 06:47:46PM *  [-]
Comment author: timtyler 14 July 2009 10:12:21PM *  0 points [-]

The proposed genetic bottleneck around the time of the eruption was long ago - when the human population may have been very small anyway. Today, we have six billion humans. There are better defenses against such things - in terms of stocked underground bunkers. So: a modern volcanic eruption would have to be vastly more destructive to kill all humans. The probabilities involved are minuscule, and shrink with every passing day. It is only because of a "Pascal's wager"-style argument that people can be made to consider such risks.

Comment author: CannibalSmith 12 July 2009 05:51:00PM *  3 points [-]

Nazi Germany only lost WWII because Hitler made very silly mistakes.

I can't find it, but there's an article explaining how the Axis was more or less doomed from the start. In short, the United States had twice the production capacity of all the other participants combined. I'm saying Hitler's mistakes only hastened the inevitable.

Comment author: gwern 13 July 2009 02:08:15AM 0 points [-]

I'm not sure we should argue politics, but... American intervention was not inevitable. Even mere materiel supply wasn't inevitable. There were a number of ways America could've been out of the picture or impotent; among the cited turning points/mistakes were the failure of the Battle of Britain to bring England to terms, and the escape of the British army at Dunkirk.

Letting America into the war was arguably one of Hitler's greatest mistakes (either by commission or omission, and there was even a historical parallel warning against America that Hitler was intimately familiar with - WWI).

America may've been tops in industry, but it's hard to see it launching a transoceanic invasion into Europe with no allied powers closer than... Africa? Asia?

Comment deleted 12 July 2009 06:24:05PM *  [-]
Comment author: gwern 13 July 2009 02:18:04AM *  1 point [-]

Looking at http://en.wikipedia.org/wiki/Military_production_during_World_War_II

I see that the US's GDP (a good proxy, I think, for industrial production) was 800 at the start of the war, while total Axis GDP was 685. The rest of the Allies represented 829. So by itself, the US was 17% more than the entire Axis alliance, and just under half of the Allies (i.e. the rest of the world). Pretty impressive.

The last column has the USA at 1474, or >3x total Axis output (466), and is at 64% of Allies. Incidentally, this means at the end of the war, the US was >2x what the Axis were at the beginning of the war. So the US did not have twice what the rest of the world had; but it did have twice the Axis by the end, and presumably this was foreseeable. So we can change Smith's point from being that the USA could industrially epic pwn the Axis, to merely pwn them.
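The ratios above check out arithmetically; a quick sketch using the cited figures (the units and 1990-dollar basis are an assumption from the linked Wikipedia table, and only the ratios matter here):

```python
# Sanity check of the cited WWII GDP figures: US vs Axis vs other Allies,
# at the start of the war and in the table's last column.
us_start, axis_start, other_allies_start = 800, 685, 829
us_end, axis_end = 1474, 466

# US vs the entire Axis at the start: about 17% more.
print(round((us_start / axis_start - 1) * 100))        # 17

# US share of total Allied GDP at the start: just under half.
print(round(us_start / (us_start + other_allies_start), 2))  # 0.49

# US vs Axis in the last column: more than 3x.
print(round(us_end / axis_end, 1))                     # 3.2

# US at war's end vs Axis at war's start: more than 2x.
print(us_end / axis_start > 2)                         # True
```

So the "twice the rest of the world" claim fails, but "more than twice the Axis" holds even against the Axis's prewar figure.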

Comment author: Vladimir_Nesov 13 July 2009 10:11:46AM *  1 point [-]
Comment author: gwern 10 October 2010 02:04:11AM 0 points [-]

Thanks; edited.

Comment author: timtyler 14 July 2009 08:25:37PM -2 points [-]

We /did/ use nukes - on Japan. Some people even died. Civilisation, however, did not end. It seems pretty speculative to classify 20th century history as some sort of "near miss". 6 billion humans represent the enormous success of our species - each human is a backup copy of our DNA. To classify this as a "near disaster" seems strange.

Comment deleted 14 July 2009 09:19:28PM [-]
Comment author: timtyler 14 July 2009 09:34:13PM 0 points [-]

That hypothetical explosion never happened. Estimates of its probability seem necessarily speculative to me. If you want to "establish that there are actually such things as serious existential risks and major civilization-level catastrophes" then invoking things that never happened seems like rather weak evidence.

Comment deleted 14 July 2009 11:11:00PM *  [-]
Comment author: timtyler 14 July 2009 11:26:08PM *  0 points [-]

I did - I said your estimate of a "near miss" was "speculative". In fact, the world didn't end, and you haven't presented evidence that that was actually a likely outcome. Calling the "cold war" a "near miss" doesn't count for very much. We had zero use of nuclear weapons in anger during that era.

Comment author: billswift 13 July 2009 07:25:35AM 0 points [-]

Libertarianism is the best available self-preservation mechanism. It is the social and memetic equivalent of genetic behavioral dispersion; that members of many species behave slightly differently which reduces the likelihood of a large percentage falling to the same cause.

Comment author: [deleted] 13 July 2009 07:31:30AM 4 points [-]

Of the eighteen existential risks Bostrom listed, that would help against maybe three. If you disagree, tell me how that would help with any of them other than resource depletion and evolution.

Comment author: jimrandomh 13 July 2009 05:42:03PM -2 points [-]

I would have a much easier time taking libertarianism seriously if its advocates weren't all in obvious affective death spirals. Libertarianism does not handle tragedy of the commons scenarios well at all, and that's exactly what most existential risks are.

Comment author: JGWeissman 13 July 2009 07:20:33PM 6 points [-]

Not all libertarians are in an affective death spiral, obvious or otherwise. It's true that many are, but I, for example, recognize tragedy of the commons scenarios and accept that some regulation can be useful to mitigate these problems. I believe there are some specific legitimate purposes of government, such as outlawing aggression, internalizing costs, and coordination (e.g., everyone drives on the right side of the road; it would have worked for everyone to drive on the left, but as a society we had to pick one and go with it). Further, I think that every law should be validated as achieving such an objective with minimal intervention.

I understand how you can form this view, seeing all the pro-business conservatives seizing on libertarian rhetoric to oppose regulation, but then neglecting the responsibility part when they want subsidies, or all the people who correctly notice that most laws are counterproductive and then incorrectly conclude that all laws are counterproductive. But when you claim that all advocates of libertarianism are like that, you are attacking a strawman.

Comment author: Psychohistorian 14 July 2009 01:51:26AM 3 points [-]

"Libertarian" doesn't carve out a very precise cluster in people-space any more. Pretty much anyone who's reflexively wary of government intervention in the private market can call herself a libertarian. Some libertarians will support meaningful government intervention in tragedy of commons type problems; some may even go so far as to support some level of government assisted/coerced redistribution of wealth. You can argue 'till you're blue in the face that that's not a "real" libertarian, but usage defines meaning, and I think enough such people self-identify that way that the word has become fairly imprecise.

Comment author: knb 14 July 2009 10:29:09PM *  3 points [-]

I would have a much easier time taking libertarianism seriously if its advocates weren't all in obvious affective death spirals.

This kind of absurdly absolutist statement achieves nothing but displaying personal animus toward an ideology. It is true that many libertarians are in death spirals, but I know of no political group that does not have large numbers of supporters in affective death spirals. In case you were unaware, this is a universal tendency of idea-based groups.

And politics is the mind-killer, no matter your politics.

I agree with this, though:

Libertarianism does not handle tragedy of the commons scenarios well at all, and that's exactly what most existential risks are.

Comment author: eirenicon 13 July 2009 04:21:56PM 1 point [-]

But is Libertarianism the best available species-preservation mechanism against existential risks like asteroid impact, nuclear holocaust or cosmic locusts?

Comment author: CannibalSmith 12 July 2009 05:29:13PM 1 point [-]

You should put the summary at the beginning of the article.

Comment deleted 12 July 2009 09:51:39PM *  [-]
Comment author: CannibalSmith 13 July 2009 09:29:41AM 0 points [-]

You confused me nonetheless. If you have a summary at the beginning (and I didn't perceive it as such), why do you put another, different one at the end, and mark it as the summary?

Comment author: idlewire 14 July 2009 09:28:14PM 0 points [-]

We're really only just now able to identify these risks and start posing theoretical solutions to be attempted. Our ability to recognize and realistically respond to these threats is catching up. I think saying that we lack good self-preservation mechanisms is to criticize a little unfairly.

Comment author: timtyler 14 July 2009 08:12:29PM 0 points [-]

Re: One could argue that I [and Bostrom, Rees, etc] are blowing the issue out of proportion. We have survived so far, right? (Wrong, actually - anthropic considerations indicate that survival so far is not evidence that we will survive for a lot longer, and technological progress indicates that risks in the future are worse than risks in the past).

Existence is not evidence, but the absence of previous large-scale disasters should certainly count for something. We have no evidence of civilisation previously arising and then collapsing, which we would expect to see if civilisation was fragile.

Comment author: CronoDAS 14 July 2009 03:45:27AM 0 points [-]

Many disasters that would be sufficient to wreck civilization will probably leave at least some survivors. The inhabitants of Easter Island ended up pretty screwed, but they didn't go extinct. Similarly, the collapse of the Mayan civilization left plenty of people alive, many of whom ended up settling in a different area than the former center of civilization. If a major disaster occurs that doesn't manage to kill off basically all animal life on Earth, I suspect that there will still be at least a few people carrying on one hundred years later, even if they have to live as subsistence farmers or hunter-gatherers.

Comment author: timtyler 14 July 2009 08:59:40PM -1 points [-]

Re: technological progress indicates that risks in the future are worse than risks in the past

Technological progress has led to the current 6 billion backup copies of the human genome. Yet you argue it leads to increased risk? I do not follow your thinking. Surely technological progress has decreased existential risks, making civilisation's survival substantially more likely.

Comment author: infotropism 14 July 2009 09:09:46PM 1 point [-]

Technological progress seems to be necessary, but not sufficient to ensure our civilization's long term survival.

Correct me if I'm wrong, but you seem quite adamant on arguing against the idea that our current civilization is in danger of extinction, when so many other people argue the other way around. This seems like it has the potential to degenerate into a fruitless debate, or even a flame war.

Yet you probably have some good points to make; why not think it over, and make a post about it, if your opinion is so different, and substantiated by facts and good reasoning, as I am sure it must be?

Comment author: timtyler 14 July 2009 09:24:52PM -1 points [-]

That's a function of the venue of this discussion. The blog's founder goes to existential risk conferences - and so here we see the opinions of his supporters.

Doom prophecies are an old phenomenon. The explanation appears to me to be mainly sociological: warning others about risk makes you look as though you are contributing positively. If the risk doesn't actually exist, then it needs manufacturing - so that you can still alert others to the danger.

Of course, the bigger the risk, the more important it is to tell people about it. Existential risks are the "biggest" risks of all - so they are the most important ones to tell people about. Plus, alerting people to the risk might help you to SAVE THE WORLD! Thus the modern success of the "doom" meme.

Comment author: infotropism 14 July 2009 10:04:26PM 1 point [-]

I see your point, sometimes we may have already written the bottom line, and all that comes afterward is trying to justify it.

However, if an existential risk is conceivable, how much would you be ready to pay, or do, to investigate it? Your answer could plausibly range from nothing to everything you have. There ought to be a healthy middle there.

I could certainly understand how someone would arrive at saying that the problem isn't worth investigating further, because that person has a definite explanation of why other people care about that particular question, their reason being biased.

I'd for instance think of religion, as an example of that. I wouldn't read the Bible and centuries of apologetics and debates to decide that God does or doesn't exist. I'd just check to see if at first, people started to justify the existence of a god for other reasons than it existing. That's certainly a much more efficient way of looking at the problem.

Is there no sum of money, no amount of effort, however trivial, that could nevertheless be expended on such an investigation, considering its possible repercussions, however unlikely those seem to be?
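To make the "healthy middle" concrete, here is a toy expected-value calculation. Every number in it is a purely illustrative assumption, not an estimate of any actual risk:

```python
# Toy expected-value sketch: even a tiny probability of a huge loss can
# justify a nonzero (but bounded) investigation budget. All figures are
# made up for illustration.
p_risk = 1e-4          # assumed probability the risk is real
value_at_stake = 1e12  # assumed value lost if it strikes (arbitrary units)
p_mitigate = 0.01      # assumed chance that investigation averts the loss

expected_benefit = p_risk * value_at_stake * p_mitigate
print(expected_benefit)  # 1000000.0
```

Spending anything up to that expected benefit is worthwhile under these assumptions; the answer lands between "nothing" and "everything you have", which is the point.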

Comment author: timtyler 14 July 2009 10:21:18PM *  -2 points [-]

By all means, discuss the risks we face. However, my counsel is to bear in mind the sociological explanation for the "the end is nigh" phenomenon. 2012 isn't the first year in which the end of the world has been predicted.

Are you actually concerned about the risk? Or are you attempting to signal to others what a fine fellow you are by alerting them to potential danger? Or is it that you wish to meet and form alliances with other people who want to help with the fine and noble cause of helping to SAVE THE WORLD? We understand the sociological explanation for the "DOOM" bias. Let us therefore exercise due caution in its immediate vicinity.

Comment deleted 14 July 2009 10:46:54PM *  [-]
Comment author: timtyler 14 July 2009 11:00:12PM *  0 points [-]

You mean that the idea that the end of the world is nigh is a classical failed model - much like geocentrism was...?

Comment author: Vladimir_Nesov 14 July 2009 11:12:17PM 0 points [-]

You are not introducing data that distinguishes the hypothesis that the cause of saving the world is worthwhile from the hypothesis that it's but an attire, serving status and rationalization thereof. It's name-calling, not an argument.

Comment author: timtyler 14 July 2009 11:46:02PM *  -1 points [-]

The argument was being made that "so many other people argue the other way around".

Lots of people arguing something is not a very good reason for thinking it is true. Ideas can become popular because they are good at spreading, not because of their truth value.

In the case of risks, it is pretty obvious how this could happen. Warning people about risks has a positive effect on your reputation.

It is well known to psychologists that humans concoct risks when they are not real for signalling purposes:

"Psychologists have dubbed the phenomenon The Boy Who Cried Wolf Effect, named after Aesop's fable about a shepherd who fakes wolf attacks. In real life, experts say, these "shepherds," mostly women, aren't acting out of boredom. These damsels in distress are very often motivated by an intense desire for attention and may feel unfairly neglected by those close to them, often romantic partners. Others are simply crying out to a world they feel ignores them."

I do not know to what extent memetic and evolutionary psychology explanations explain the observed effect. My estimate - from what I have seen - is that the extent is probably quite large. So: I think that discussion of the extent to which these beliefs may be being caused by signalling-related biases is quite appropriate.

Under this model, agents exaggerate the risks, and tell others about those risks, which makes them feel good. It also makes the recipients grateful. They then go on to infect others with the DOOM meme.

The infected agents construct elaborate rationales to explain the repeated historical failures of the DOOM predictions to come true. Yes, all those other folk who thought the same thing were wrong - but this time it is different, because ...

Comment author: Vladimir_Nesov 14 July 2009 11:56:52PM *  1 point [-]

Your argument is that it's plausible that the idea is propagating independently of its truth (which is obviously true, when the idea is construed at the level of crude approximation), but it's not an argument against the idea's truth, especially if the idea is recreated apart from its fame.

Also, your version of the idea is about fuzzies, while ours is about utility, prompting different kinds of actions. The empty buzz of doomsaying was around for a long time, never crossing over towards serious study.

Comment author: Annoyance 16 July 2009 05:47:27PM 0 points [-]

Self-perpetuation in the strictest sense isn't always the point. The goal isn't to simply impose the same structure onto the future over and over again. It's continuity between structures that's important.

Wanting to live a long life isn't the same as having oneself frozen so that the same physical configuration of the body will persist endlessly. The collapse of ecosystems over a hundred-million-year timespan is no more a failure than our changing our minds constitutes a failure of self-preservation.

Comment author: timtyler 14 July 2009 11:14:46PM 0 points [-]

Re: One could argue that I [and Bostrom, Rees, etc] are blowing the issue out of proportion.

Bostrom and Rees have both written books on the topic - and so presumably stand to gain financially out of promoting the idea that existential risk is something to be concerned about.

It could also be argued that we are probably seeing a sampling bias here - of all the people on the planet, those with the highest estimate of DOOM are those most likely to alert others to the danger. So: their estimates may well be from the very top end of the distribution.

Comment author: timtyler 14 July 2009 08:33:26PM 0 points [-]

In many of the hypothetical "disasters", civilisation doesn't end - it is just that it is no longer led by humans. That seems a practically inevitable long-term outcome to me (humans are rather obviously too primitive and slug-like to go the distance).

The classification of such outcomes as "disasters" needs a serious rethink, IMO.

Comment author: infotropism 14 July 2009 08:57:03PM 0 points [-]

hypothetical "disasters", civilisation doesn't end - it is just that it is no longer led by humans

You'd think that's actually pretty much what most of us humans care about.

Comment author: timtyler 14 July 2009 09:12:59PM -1 points [-]

Prepare for disappointment, then. My estimate of the chances of humans persisting for much longer is pretty tiny. Future civilisation is likely to be descended from current civilisation - but humans are much more likely to survive in museums than anywhere else. That outcome is not necessarily a disaster - it could be one of the best possible outcomes. Having humans in charge would be really, really bad for civilisation's health and spaceworthiness.

Comment author: infotropism 14 July 2009 09:39:23PM 0 points [-]

A fair point. So what you're telling me is that we should desire a future civilization that is descended from our own, probably one that will have some things in common with current humanity, like some of our values and desires (or values and desires that have grown from our own)?

Comment author: timtyler 14 July 2009 09:48:47PM *  -1 points [-]

It is not my wish to advise what people should or should not desire. However, there being no humans around does not necessarily a disaster make. Maybe the humans transcended their bodies, adopting a new, high-technology medium, which finally allows our brains to be copied and backed up. A disaster? Or the ancient dream of conquering death come true? That would seem to depend on your perspective.