People trying to guard civilisation against catastrophe usually focus on one specific kind of catastrophe at a time. This can be useful for building concrete knowledge with enough certainty that others can build on it. However, the catastrophe-specific approach has disadvantages:

1. Catastrophe researchers (including Anders Sandberg and Nick Bostrom) think that there are substantial risks from catastrophes that have not yet been anticipated. Resilience-boosting measures may mitigate risks that have not yet been investigated.

2. Thinking about resilience measures in general may suggest new mitigation ideas that were missed by the catastrophe-specific approach.

One analogy for this is that an intrusion (or hack) into a software system can arise from a combination of many minor security failures, each of which might appear innocuous in isolation. You can decrease the chance of an intrusion by adding extra security measures, even without a specific idea of what kind of attack would be performed. Being able to power down and reboot a system, storing a backup, and being able to run it in a "safe" offline mode are all standard resilience measures for software systems. These measures aren't necessarily the first thing that would come to mind if you were trying to model a specific risk, like a password getting stolen or a hacker subverting administrative privileges, although they would be very useful in those cases. So mitigating risk doesn't necessarily require a precise idea of the risk to be mitigated. Sometimes it can be done instead by thinking about the principles required for the proper operation of a system - in the case of software, the preservation of clean code - and the avenues through which it is vulnerable, such as the internet.
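To make the analogy concrete, here is a minimal sketch of that backup-and-safe-mode pattern (in Python; the file names and state format are hypothetical, not from any particular system). The key property is that the recovery path doesn't depend on knowing *how* the state got corrupted - stolen credentials, disk failure and plain bugs are all handled the same way:

```python
import hashlib
import json
import shutil
from pathlib import Path

STATE = Path("state.json")          # hypothetical live state file
BACKUP = Path("state.backup.json")  # last known-good copy

def save_state(state: dict) -> None:
    """Write the state with its own checksum, then refresh the backup."""
    payload = json.dumps(state)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    STATE.write_text(json.dumps({"checksum": digest, "state": payload}))
    shutil.copy2(STATE, BACKUP)  # back up only after a clean write

def load_state() -> dict:
    """Boot normally; fall back to the backup ("safe mode") if the live file is bad."""
    for path in (STATE, BACKUP):  # try the live state first, then the backup
        try:
            wrapper = json.loads(path.read_text())
            payload = wrapper["state"]
            digest = hashlib.sha256(payload.encode()).hexdigest()
            if digest == wrapper["checksum"]:
                return json.loads(payload)
        except (OSError, KeyError, TypeError, ValueError):
            continue  # corrupted or missing; try the next copy
    raise RuntimeError("no recoverable state; manual intervention needed")
```

Nothing in this sketch models a particular attack; integrity checking plus a known-good fallback is useful against a whole class of failures at once, which is the general point.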

So what would be good robustness measures for human civilisation? I have a bunch of proposals:

 

Disaster forecasting

Disaster research

* Build research labs to survey and study catastrophic risks (like the Future of Humanity Institute, the Open Philanthropy Project and others)

Disaster prediction

* Prediction contests (like IARPA's Aggregative Contingent Estimation "ACE" program)

* Expert aggregation and elicitation
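To illustrate what expert aggregation can mean in practice, here is a minimal sketch (the pooling rule and the numbers are mine, not anything IARPA prescribes): combining several experts' probability estimates via the geometric mean of their odds, one common rule from the forecast-aggregation literature.

```python
from math import prod

def pool_forecasts(probs: list[float]) -> float:
    """Pool expert probabilities via the geometric mean of their odds."""
    odds = [p / (1 - p) for p in probs]
    pooled = prod(odds) ** (1 / len(odds))
    return pooled / (1 + pooled)

# Three hypothetical experts estimating the same catastrophe probability:
print(pool_forecasts([0.01, 0.05, 0.002]))  # ~0.0101
```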

 

Disaster prevention

General prevention measures

* Build a culture of prudence in groups that run risky scientific experiments

* Lobby for these mitigation measures

* Improve the foresight and clear thinking of policymakers and other relevant decision-makers

* Build research labs to plan more risk-mitigation measures (including the Centre for the Study of Existential Risk)

Preventing intentional violence

* Improve focused surveillance of people who might commit large-scale terrorism (this is controversial because excessive surveillance itself poses some risk)

* Improve cooperation between nations and large institutions

Preventing catastrophic errors

* Legislating for individuals to be held more accountable for large-scale catastrophic errors that they may make (including by requiring insurance premiums for any risky activities)

 

Disaster response

* Improve political systems to respond to new risks

* Improve vaccine development, quarantine and other pandemic response measures

* Build systems for disaster notification


Disaster recovery

Shelters

* Build underground bomb shelters

* Provide a sheltered place for people to live with air and water

* Provide (or store) food and farming technologies (cf Dave Denkenberger's *Feeding Everyone No Matter What*)

* Store energy and energy-generators

* Store reproductive technologies (which could include IVF, artificial wombs or measures for increasing genetic diversity)

* Store information about building the above

* Store information about building a stable political system, and about mitigating future catastrophes

* Store other useful information about science and technology (e.g. reading and writing)

* Store some of the above in submarines

* (maybe) store biodiversity

 

Space Travel

* Grow (or replicate) the International Space Station

* Improve humanity's capacity to travel to the Moon and Mars

* Build sustainable settlements on the Moon and Mars

 

Of course, some caveats are in order. 

To begin with, one could argue that surveilling terrorists is a measure specifically designed to reduce the risk from terrorism. But there are many different scenarios and methods through which a malicious actor could try to inflict major damage on civilisation, so I still regard this as a general robustness measure, granted that there is some subjectivity in all of this. If you knew absolutely nothing about the risks you might face, or about the structures in society that are to be preserved, the exercise would be futile. So some of the measures on this list will mitigate a smaller subset of risks than others, and that's just how it is. Still, the list is quite different from the one people arrive at using a risk-specific paradigm, which is the reason for the exercise.

Additionally, I'll disclaim that some of these measures already receive substantial investment, while others cannot be implemented cheaply or effectively. But many seem to me to be worth thinking more about.

Additional suggestions for this list are welcome in the comments, as are proposals for their implementation.

 

Related readings

https://www.academia.edu/7266845/Existential_Risks_Exploring_a_Robust_Risk_Reduction_Strategy

http://www.nickbostrom.com/existential/risks.pdf

http://users.physics.harvard.edu/~wilson/pmpmta/Mahoney_extinction.pdf

http://gcrinstitute.org/aftermath

http://sethbaum.com/ac/2015_Food.html

http://the-knowledge.org

http://lesswrong.com/lw/ma8/roadmap_plan_of_action_to_prevent_human/

Comments (43)

This is intended as a provocation to think outside your box. I hope you take it in the spirit intended.

If you are really brainstorming around the risk of a collapse of civilization due to some catastrophe, it is really hard to think outside your own political preferences. I say this from experience because I shy away from certain solutions (and even from acknowledging the problem). So allow me to suggest that your own limitations are making you avoid what I'd call ugly choices.

You suggest international cooperation as a way to prevent widespread destruction. Well, maybe. But there are two to four countries that have developed or are developing nuclear weapons and missile systems and that the rest of the world seems to treat as unstable. So one solution to that problem is to invade them, destroy their facilities for nuclear and missile research, remove their leadership, and remove their relevant scientists and technical personnel. Why is neither that problem nor that solution on your list? There is at least one recent example of a country invading another country after taking a public position that the second country had weapons of mass destruction. Was your omission because of that experience?

Many of your proposals seem oriented towards saving as many people as possible, rather than saving civilization. If civilization falls, the resulting economy will probably not produce enough food quickly enough to feed everyone. (Me? I'll starve in year 1, if I survive that long.) Why propose to spend resources on things that do not actually improve civilization's robustness (like widely distributed gas masks when their recipients starve in the following winter)?

Economic growth creates more resources that can be used for resilience. Our current laws reduce both the maximum potential growth rate and the growth rate we actually have. For example, abolishing all labor law would vastly increase the size of the economy. Why does your list not embrace whatever political policies induce the fastest economic growth?

Relatedly, one major civilization that fell due to its own laws was probably the Roman empire. It choked off its own economic growth through regulations intended to support its taxation structure, until it could not sustain its own weight. Why does your list not include that kind of threat to civilization?

I think there is commonality in these items, but that might be in the eye of the beholder.

abolishing all labor law would vastly increase the size of the economy

[citation needed], as the saying goes.

Why does your list not embrace whatever political policies induce the fastest economic growth?

I agree that the list should include something like "Pursuing rapid economic growth". But (1) it would probably be a mistake for the list to pick specific economic policies on the basis that they produce the fastest economic growth, since then the discussion would be in danger of being politicized by, say, an advocate of some particular economic/political policy that happens to differ from the one assumed in the list. (Of course that would never happen if the OP declined to pick favourites in this fashion.) And (2) fastest economic growth should not be the only criterion, unless that really is the only thing that influences robustness, which it may well not be. E.g., a policy might produce faster growth but also greater danger of violent and destructive revolution. Or it might produce faster growth but also introduce more single points of failure where one asteroid strike or serious outbreak of illness or terrorist act could bring everything down.

To take an example you already gave: laws restricting how unpleasant employers can make their employees' lives may reduce economic growth but also make it less likely that there's a violent uprising by workers fed up of their unpleasant lives.

one major civilization that fell due to its own laws [...] Why does your list not include that kind of threat to civilization?

As I understand it, RyanCarey is interested in threats to human civilization as a whole rather than to individual human civilizations. Human civilization as a whole doesn't have laws, regulations, taxation, etc. If one nation collapses under the weight of its own regulatory burden then others will presumably take note.

(How widely held, and how well supported, is the theory that the Roman empire failed because of overregulation and overtaxation? It's not a claim I've heard before, but I am about as far from being an expert in late Roman history as it is possible to be. In particular, how widely accepted is this theory outside circles in which everything is blamed on overregulation and overtaxation?)

[citation needed]

Here is a start.

Interesting; thanks. Of course it would be surprising if a document from the Mercatus Center didn't conclude that regulation is economically disastrous, even in a hypothetical world where regulation is purely beneficial :-). (And their analysis is complicated enough that "how much do I trust the authors?" does seem like as important a question as "does it seem like they're doing the right thing?", when trying to figure out whether their results are likely to have much to do with the real world.)

abolishing all labor law would vastly increase the size of the economy

[citation needed], as the saying goes.

I kind of doubt it. There are virtually no serious non-Marxist economists who deny that artificially raising the cost of labor, capital, or any other economic input diminishes output. The real debate is over whether it is appropriate to do so for other reasons, like fairness, justice, equality, and so on. So, if you really need a citation, I'd say that any first-year economics textbook would do it.

it would probably be a mistake for the list to pick specific economic policies on the basis that they produce the fastest economic growth, since then the discussion would be in danger of being politicized

My main point was about the mental process that generated the list. If a constraint on the process that generates the list is that it must be 100% de-politicized, then I wouldn't put much faith in the list. And it reads to me like it has been.

fastest economic growth should not be the only criterion, unless that really is the only thing that influences robustness

Sure, but my main point again was about the process that led to the list. There's a cost to everything on the list. Just because taxes pay for some item on it doesn't make it free, even in terms of robustness.

As I understand it, RyanCarey is interested in threats to human civilization as a whole rather than to individual human civilizations. Human civilization as a whole doesn't have laws, regulations, taxation, etc. If one nation collapses under the weight of its own regulatory burden then others will presumably take note.

One would hope, but they seem to have been moving together for the past 20 years or so toward greater regulation, and hence greater fragility. If you can point me to a country whose published laws and regulations are shorter now than they were 10 years ago, I will happily retract the point (and consider buying a second home there).

And I would say that there are some legal jurisdictions that, if they failed quickly enough, could bring down the entirety of civilization. The US and EU are the two that come to mind. Two EMP devices, or two large enough asteroids, might do it.

How widely held, and how well supported, is the theory that the Roman empire failed because of overregulation and overtaxation?

It was the orthodox explanation in my economic history class that I took in 1988. I sometimes return to the subject to see if anyone has overturned that theory, and have never seen anything along those lines. The regulation was mainly driven by taxation. The state raised revenues to the point that avoidance was really problematic, then instituted heavy controls on individuals to ensure payment. For example, in the late empire, most taxes were levied in kind; the regulatory response was that the law positively required the eldest son to succeed to his father's profession, property, and station -- so that the authorities could ensure that they were getting the right in-kind taxation. Some Roman citizens abandoned their property in order to avoid taxation, like runaway slaves. There is a theory that this is where serfdom originated, though I suspect the reality was more culturally mixed. The regulation had vast costs beyond the taxes being paid, because it prevented the movement of resources to more productive uses, either by changing jobs or by moving locations.

In any case, the point is that the regulatory structure created civilizational fragility. It didn't take much after that for Rome to fall. I mean seriously -- barbarian invaders? Rome had dealt with that for a thousand years and had always recovered from any reverses. It's like the signature of the Roman Republic that they lost battles, won wars, and came back stronger than before. But the empire became a different thing.

any first-year economics textbook

You didn't say "the first-order effect of abolishing labor laws would almost certainly be some increase in economic output", you said "abolishing all labor law would vastly increase the size of the economy". There are two differences here.

  • First-order effect according to Economics 101 versus overall effect in the complicated real world.
  • Some unspecified, perhaps small, increase versus "vastly increase".

Either would suffice to make your proposal to settle the issue by consulting "any first-year economics textbook" worthless. Would you like to make a more serious response?

If a constraint on the process that generates the list is that it must be 100% de-politicized, then I wouldn't put much faith in the list.

Noted. Personally it wouldn't bother me at all, unless the list purported to be a complete list of everything that could or should be done. If completeness is the goal, then I agree that even politically incendiary proposals might belong in the list -- but they should be flagged as such, to avoid unnecessary political firefights.

Just because taxes pay for some item on it doesn't make it free

So far as I can tell, no one has said or implied anything even slightly resembling "these items are free because they are paid for by taxes". You need to stop rounding everything off to the nearest thing there's a libertarian talking point about.

(Lest I be misunderstood, I should add that I hope I would say the same thing if there were someone doing the same thing from the other side and rounding everything off to the nearest thing there's, say, a Marxist talking point about. It happens that no one here is doing that.)

greater regulation, and hence greater fragility

[citation needed], again. It seems obvious to me that some regulation has the intention of reducing fragility, and I don't see any grounds for assuming that it never succeeds. (Building codes make earthquakes etc. less disastrous. Labour laws make violent revolution less likely. Environmental legislation makes "natural" disasters less likely. Regulation of hazardous materials makes it less likely that we all get made stupid by lead poisoning -- you may recall that that's been blamed for the collapse of the Roman empire too. Do all these actually work? I dunno. Do you know that they don't?)

Two EMP devices, or two large enough asteroids

One large enough asteroid could put an end to life on earth, so I can't disagree there. But if you really think two EMP devices could bring down the world's civilization then I think you're out of your mind.

the orthodox explanation in my economic history class

Noted. The picture I get from the bit of reading I've just done doesn't quite match yours. It suggests that a more fundamental problem was that late Roman taxation was dominated by a land-tax which hit peasant farmers hardest, and which incentivized them to abandon land, so that productivity plummeted. That doesn't seem like overregulation exactly; it seems like a damagingly regressive and ill-designed tax system. (The total tax burden even in the late empire seems to have been somewhat less as a fraction of gross product than in, say, the present-day US.)

Also apparently commonly blamed for late Roman economic problems: concentration of wealth in the hands of the wealthiest of the senatorial class, and large transfers of wealth to the Christian church.

Nothing I've read -- admittedly, this is just wandering through what I can find online -- says anything about overregulation being the (or even a) source of the late empire's problems. That doesn't mean it's wrong, but I'm finding it hard to believe that it's "the orthodox explanation" in any context much wider than your economic history class :-).

The real debate is over whether it is appropriate to do so for other reasons, like fairness, justice, equality, and so on.

As gjm pointed out, one of those "other reasons" is survival. A mob with torches and pitchforks can easily put an end to your fine economic experiment of maximising growth.

How widely held, and how well supported, is the theory that the Roman empire failed because of overregulation and overtaxation? It's not a claim I've heard before, but I am about as far from being an expert in late Roman history as it is possible to be. In particular, how widely accepted is this theory outside circles in which everything is blamed on overregulation and overtaxation?

Overtaxation is a standard reason given for the fall of the Roman Empire, and I'm surprised you haven't heard of that before. I've never heard of overregulation being a reason; I've never looked into the Roman regulatory state, and have no idea how burdensome it was, or even if it substantively existed.

Here is a thread on the "Recovery Manual for Civilization," which I think is a useful addition to your list: http://lesswrong.com/lw/l6r/manual_for_civilization/

And here was (most of) my comment in that thread:

My first conclusion was that there are all kinds of events that could lead to a collapse of civilization without exterminating humanity directly. But it may be impossible for humanity to rise back from the ashes if it stays there too long. Humanity can't take the same path it took to get to where it is now. For example, humanity developed different forms of energy as the prices of previous forms rose: we started digging up shallow coal when population grew too high to use charcoal produced from wood. But many of those resources are no longer near the surface; the coal, oil, and gas that once were have been extracted. So humanity, if it rose again, would have to find a different path.

My conclusion was that I would not want to invest enough time in preparing for the end of the world to get personally involved in preparing for the end of civilization. But some people already do: all the survivalists and similar people. So, if I want to help civilization recover after a collapse, my best alternative is not to try to store the information myself. Instead, it is to reduce the information to a useful form and give it to these people. This is far more robust than trying to think of the best way of having the information survive. Give it to a few thousand people in scattered places with different survival strategies, and it is much more likely that the information survives.
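To put rough numbers on that intuition, here is a minimal sketch (assuming, hypothetically, that each cache survives independently with the same small probability - both numbers are made up for illustration):

```python
def info_survives(p_copy: float, n_copies: int) -> float:
    """Chance that at least one of n independent copies survives."""
    return 1 - (1 - p_copy) ** n_copies

# If any single cache has only a 1-in-1000 chance of surviving a
# collapse, a few thousand scattered copies make it nearly certain
# that at least one does (given independence):
print(info_survives(0.001, 1))     # 0.001
print(info_survives(0.001, 3000))  # ~0.95
```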

These two facts together led me to conclude that a collaboration to prepare some kind of big, easily read manual of technology would be a valuable contribution to the survival of humanity. It would need to build on itself, somewhat like tech trees in strategy games.

Plus, it would be really amusing for the LW community to decide that one of its bets for saving humanity is to help outfit survivalists with technical know-how.

I'd add that a case that fits within the general idea of a disaster that destroys civilization but does not extinguish humanity would be a few (possibly as few as one or two) electromagnetic pulse detonations over the eastern or western seaboard of the United States. I can see the follow-on effects bringing civilization down. I would think that would get worse in the next 50 years or so, as India and China catch up to the level of computerization prevalent in the US.

I created a somewhat similar plan for x-risk prevention, which is here: http://immortality-roadmap.com/globriskeng.pdf In it, raising robustness is only one part of the whole plan, and it does not include many of the ideas from your plan, which appear in other parts of the map.

In my plan (Plan A3 in the map), robustness consists of several steps:

Step One. Improving sustainability of civilization

* Intrinsically safe critical systems
* Growing diversity of human beings and habitats
* Universal methods of catastrophe prevention (resistant structures, strong medicine)
* Building reserves (food stocks, seeds, minerals, energy, machinery, knowledge)
* Widely distributed civil defence, including:
  • temporary shelters,
  • air and water cleaning systems,
  • radiation meters, gas masks,
  • medical kits
  • mass education

Step Two. Useful ideas to limit the scale of a catastrophe

* Limit the impact of a catastrophe by implementing measures to slow its growth and the area it affects:
  • technical instruments for implementing quarantine,
  • improved capacity for rapid production of vaccines in response to emerging threats,
  • larger stockpiles of important vaccines
* Increase preparation time by improving monitoring and early detection technologies:
  • support general research on the magnitude of biosecurity risks and opportunities to reduce them,
  • improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly
* Worldwide x-risk prevention exercises
* The ability to quickly adapt to new risks and envision them in advance

Step Three. High-speed tech development needed to quickly pass the risk window

* Investment in super-technologies (nanotech, biotech, Friendly AI)
* High-speed technical progress, to overcome the slow process of resource depletion
* Investing more in defensive technologies than in offensive ones

Step Four. Timely achievement of immortality at the highest possible level

* Nanotech-based immortal body
* Diversification of humanity into several successor species capable of living in space
* Mind uploading
* Integration with AI

A lot of strong suggestions there - I've added submarines, for example.

Re how to plot a course of action for mitigating these risks, I guess GCRI is doing a lot of the theoretical work on robustness, and they could be augmented by more political lobbying and startup projects?

Terrorists are a rounding error. Sure, some day they'll take out a city with a nuke but in history cities have been wiped out many many times without taking their parent civilization with them.

Historically speaking, I agree, yet it's conceivable that a malicious actor might militarise some powerful technology, and classing its use as an extreme act of terrorism sounds about right.

I can sort of imagine a world where some extremely well-funded terrorists engineer and manufacture a few dozen really nasty diseases and release them in hundreds or thousands of locations at once (though most terrorists wouldn't, because such an attack would hurt their own side as much as or more than anyone else). That might seriously hurt society as a whole, but most of the time the backlash against terrorism seems more dangerous than the actual terrorists.

Consider terrorists who release viruses that target or spare specific populations. If gene editing technologies make offence much easier than defence, and allow lone individuals to make these viruses, such terrorism could be world-ending.

I would put the backlash to terrorism as part of the risk of terrorism.

It seems to be very unclear what effect the Edict on Maximum Prices actually had -- it looks as if the inflation caused by repeated currency debasement was the primary problem.

  • Legislating for individuals to be held more accountable for large-scale catastrophic errors that they may make (including by requiring insurance premiums for any risky activities)

If I blow up the planet, neither my insurance nor your lawsuit is going to help anything. Which is to say, this proposal is just a wealth transfer to insurance companies, since they never have to pay out.

If you're running a synthetic biology company and have to be insured against major pandemics, you may need more risk-reduction measures to stay profitable, which reduces existential risk - precisely because many pandemics can impose costs without causing extinction.
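As a toy model of that mechanism (all numbers hypothetical; a real actuary would add loading and uncertainty on top of the expected payout):

```python
def fair_premium(p_disaster: float, insured_damages: float) -> float:
    """Actuarially fair annual premium: expected payout per year."""
    return p_disaster * insured_damages

# A hypothetical synbio lab insured against a $10B pandemic liability:
baseline = fair_premium(1e-4, 10e9)         # $1,000,000 per year
with_safeguards = fair_premium(2e-5, 10e9)  # $200,000 per year

# Safeguards are worth buying whenever they cost less than the premium
# reduction they produce - here, up to $800,000 per year.
print(baseline - with_safeguards)  # 800000.0
```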

I agree strongly with the general direction of your thinking.

I think one of the best strategies for robustification is diversity in many dimensions: biological, intellectual, political, social, technological. Nature has used this strategy for millions of years (the next time you walk around your town notice how many different tree species there are).

  • Improve focused surveillance of people who might commit large-scale terrorism (this is controversial because excessive surveillance itself poses some risk)

Who do you mean with "people who might commit large-scale terrorism"? Militaries of nation states?

The prototypical case would be a rogue individual, but military and other security institutions also pose dangers that we ought to think about targeting.

Why should we worry more about rogue individuals than about the institutions themselves? Most of the people killed by those organisations aren't killed by rogue operatives.

(cross-posted on EA Forum)

Good post! I'd suggest adding making more people aware of the problem of existential risks, and the need to do something about these risks. So maybe outreach about these topics should be one of the things to be done.

That looks like a very strange list which ranges from utopian suggestions ("improve political systems") to something we already tried ("build underground bomb shelters") to something we don't know how to do ("store information about building a stable political system") to just silly things ("legislating for individuals to be held more accountable for large-scale catastrophic errors").

In my defense:

  1. Utopian political changes like futarchy, seasteading and world-government are often seriously proposed as GCR reducers. They ought to be listed if only to be ruled out.
  2. Well one's library at least ought to include some prescient historical case studies of political collapses and revolutions, and studies of primitive tribes.
  3. Requiring people to insure dual-use synbio labs against widespread loss of life is also a serious policy proposal.

I grant that it's an eclectic and unprioritised list, but that's an assessment of the field (which is only in its earliest stages) more so than an indictment of my characterisation of it, right?

  • Grow (or replicate) the International Space Station

Why?

To improve the likelihood that people can come back to Earth from space after a disaster. Presumably there would be some targeted ways to do this.

I have a hard time imagining a scenario where an ISS-style space station would allow disaster recovery.

What about some other kind of station?

An underwater city can be both self-sustaining and very well isolated from whatever ravages the surface of the Earth. Much cheaper and easier to build than an equivalently large space station, too.

I don't see any space station being self-sustaining.

Mars could work, and maybe the Moon, but a simple space station likely isn't worth the investment.

I think stations can be self-sustaining, but they have to be much, much larger than the ISS.

But the bigger issue is, what functions would you even want in LEO that would help? I guess a beanstalk top would be really helpful, but it's hard to see anything that wipes out Earth being unable to take down the beanstalk too, unless it was a plague and the stalk had very impressive passive safety features.

Having other satellites, like GPS, and surveys, and so forth, could be really helpful, but that's not a space station.

It would make a good rendezvous point so you can have shuttles and ships, and the ships don't need to hang out all the time. It would make things cheaper and faster, though not make something possible that otherwise wouldn't be.

I guess a facility for checking out and repairing atmospheric entry vehicles would be very handy if there's any concern about that.

Why would we colonize another gravity well? This one is already 90% of our problem with colonizing space.

Because you can use resources from Mars once you are there. Mars has the potential to carry a human civilisation. It has the potential to be terraformed.

Mars has the potential to carry the sort of civilization we have now; it's another planet, we make it like Earth, we get another Earth, we colonize it and live like we do on Earth.

Space stations have the capacity to carry an entirely new sort of civilization. The resources are out there, too - more scattered, yes, but your processing plant and drilling equipment are far more mobile in space. What's more, once you have industry running, gravity wells are a substantially smaller problem.

Mars has the potential to carry the sort of civilization we have now; it's another planet, we make it like Earth, we get another Earth, we colonize it and live like we do on Earth.

Mars will never be just like Earth. Different gravity matters. Culturally, the process of building up Mars likely won't produce a culture that matches Earth's.

Earth's patent law likely won't be enforceable on Mars. Genetic engineering might be legal on a much wider scale than on Earth.

I concur. The only point to putting permanent space stations into orbit is if it helps us along the path to putting humans someplace they can live for years after something really bad happens to Earth. That means a full, independent ecosystem that produces sufficient resources and new people to colonize Earth.

... "Colonize Earth" -- what a strange pair of sentences to write.

Yes. SpaceX does profit from the ISS existing. But that's expensive. You could also find other missions for SpaceX.

Suppose some kind of shock makes Earth briefly uninhabitable, and things don't work out with people who immediately emerge from bunkers and submarines, but 100 people can come down again from space shortly afterwards and recolonise it.

What kind of shock do you have in mind?

I don't have a specific one in mind, but nuclear winter, or a catastrophic problem spreading through the atmosphere, or something bioengineered would be conceivable.

It seems unlikely to me that those would kill all of the earthbound population while still leaving the earth habitable enough for our returning astronauts. (If they kill, say, 99.99% of the population then the survivors will far outnumber the astronauts.)