I would be happy to get feedback on this article, originally posted by the IEET:

 

When people worry about the dark side of emerging technologies, most think of terrorists or lone psychopaths with a death wish for humanity. Some future Ted Kaczynski might acquire a master's degree in microbiology, purchase some laboratory equipment intended for biohackers, and synthesize a pathogen that spreads quickly, is incurable, and kills 90 percent of those it infects.

Alternatively, Benjamin Wittes and Gabriella Blum imagine a scenario in which a business competitor releases “a drone attack spider, purchased from a bankrupt military contractor, to take you out. … Upon spotting you with its sensors, before you have time to weigh your options, the spider—if it is, indeed, an attack spider—shoots an infinitesimally thin needle … containing a lethal dose of a synthetically produced poison.” Once this occurs, the spider exits the house and promptly self-destructs, leaving no trace behind it.

This is a rather terrifying picture of the future that, however fantastical it may sound, is not implausible given current techno-developmental trends. The fact is that emerging technologies like synthetic biology and nanotechnology are becoming exponentially more powerful as well as more accessible to small groups and even single individuals. At the extreme, we could be headed toward a world in which a large portion of society, or perhaps everyone, has access to a “doomsday button” that could annihilate our species if pressed.

This is an unsettling thought given that there are hundreds of thousands of terrorists—according to one estimate—and roughly 4 percent of the population are sociopaths—meaning that there are approximately 296 million sociopaths in our midst today. The danger posed by such agents could become existential in the foreseeable future.

But what if deranged nutcases with nefarious intentions aren’t the most significant threat to humanity? An issue that rarely comes up in such conversations is the potentially greater danger posed by well-intentioned people with access to advanced technologies. In his erudite and alarming book Our Final Hour, Sir Martin Rees distinguishes between two types of agent-related risks: terror and error. The difference between these has nothing to do with the consequences—a catastrophe caused by error could be no less devastating than one caused by terror. Rather, what matters are the intentions behind the finger that pushes a doomsday button, causing spaceship Earth to explode.

There are reasons for thinking that error could actually constitute a greater threat than terror. First, let’s assume that science and technology become democratized such that most people on the planet have access to a doomsday button of some sort. Let’s say that the global population at this time is 10 billion people.

Second, note that the number of individuals who could pose an error threat will vastly exceed the number of individuals who would pose a terror threat. (In other words, the former is a superset of the latter.) On the one hand, every terrorist hell-bent on destroying the world could end up pushing the doomsday button by accident. Perhaps while attempting to create a designer pathogen that kills everyone not vaccinated against it, a terrorist inadvertently creates a virus that escapes the laboratory and is 100 percent lethal. The result is a global pandemic that snuffs out the human species.

On the other hand, any well-intentioned hobbyist with a biohacking laboratory could also accidentally create a new kind of lethal germ. History reveals numerous leaks from highly regulated laboratories—the 2009 swine flu epidemic that killed some 12,000 people between 2009 and 2010 was likely caused by a laboratory mistake in the late 1970s—so it’s not implausible to imagine someone in a largely unregulated environment mistakenly releasing a pathogenic bug.

In a world where nearly everyone has access to a doomsday button, exactly how long could such a world last? We can, in fact, quantify the danger here. Let’s begin by imagining a world in which all 10 billion people have (for the sake of argument) a doomsday button on their smartphone. This button could be pushed at any moment if one opens up the Doomsday App. Further imagine that of the 10 billion people who live in this world, not a single one has any desire to destroy it. Everyone wants the world to continue and humanity to flourish.

Now, how likely is this world to survive the century if each individual has a tiny chance of pressing the button? Crunching a few numbers, it turns out that doom would be all but guaranteed even if each person had a negligible 0.000001 percent chance of accidentally pressing the button over the course of the century. The reason is that even though the likelihood of any one person causing total annihilation by accident is incredibly small, these tiny probabilities add up across the population. With 10 billion people, one should expect an existential catastrophe even if everyone is very, very, very careful not to press the button.
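For readers who want to check the arithmetic, here is a minimal sketch in Python. It assumes the figures used above (10 billion people, a 0.000001 percent per-person chance of an accidental press over the century); the variable names are purely illustrative:

```python
# Minimal sketch of the "everyone has a doomsday button" calculation.
# Assumed figures from the text: 10 billion people, each with an independent
# 0.000001 percent (i.e., 1e-8) chance of an accidental press over the century.
population = 10_000_000_000
p_error = 0.000001 / 100  # 0.000001 percent expressed as a probability

# Probability that at least one person presses the button:
# 1 minus the probability that nobody does.
p_doom = 1 - (1 - p_error) ** population
print(f"Chance of at least one accidental press this century: {p_doom}")

# The expected number of presses is population * p_error = 100, so the
# printed probability is roughly 1 - e^(-100), indistinguishable from 1.
```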

Consider an alternative scenario: imagine a world of 10 billion morally good people in which only 500 have the Doomsday App on their smartphone. This constitutes a mere 0.000005 percent of the total population. Imagine further that each of these individuals has an incredibly small 1 percent chance of pushing the button each decade. How long should civilization as a whole, with its 10 billion denizens, expect to survive? Crunching a few numbers again reveals that the probability of annihilation in the next 10 years would be a whopping 99 percent—that is, more or less certain.
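Again, a quick sketch of the numbers behind this claim, assuming the 500 app holders and the 1 percent chance per holder per decade stated above:

```python
# Second scenario: only 500 people have the Doomsday App, and each has an
# independent 1 percent chance of pressing the button in a given decade.
holders = 500
p_press_per_decade = 0.01

# Probability that at least one of the 500 presses the button within a decade.
p_doom_decade = 1 - (1 - p_press_per_decade) ** holders
print(f"Chance of annihilation within ten years: {p_doom_decade:.3f}")  # ~0.993
```

The printed value is roughly 0.993, which is the "whopping 99 percent" figure cited above.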

The staggering danger of this situation stems from the two trends mentioned above: the growing power and accessibility of technology. A world in which fanatics want to blow everything up would be extremely dangerous if “weapons of total destruction” were to become widespread. But even if future people are perfectly compassionate—perhaps because of moral bioenhancements or what Steven Pinker calls the “moral Flynn effect”—the fact of human fallibility will make survival for decades, let alone centuries, highly uncertain. As Rees puts this point:

If there were millions of independent fingers on the button of a Doomsday machine, then one person’s act of irrationality, or even one person’s error, could do us all in. … Disastrous accidents (for instance, the unintended creation or release of a noxious fast-spreading pathogen, or a devastating software error) are possible even in well-regulated institutions. As the threats become graver, and the possible perpetrators more numerous, disruption may become so pervasive that society corrodes and regresses. There is a longer-term risk even to humanity itself.

As scholars have noted, “an elementary consequence of probability theory [is] that even very improbable outcomes are very likely to happen, if we wait long enough.” The exact same goes for improbable events that could be caused by a sufficiently large number of individuals—not across time, but across space.
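To make the parallel explicit, the same arithmetic can be packaged as a small helper (the function name and interface here are purely illustrative, not taken from any source), where n can be read either as the number of independent trials over time or as the number of independent individuals across space:

```python
def p_at_least_once(p: float, n: int) -> float:
    """Probability that an event with per-trial probability p happens at least
    once across n independent trials; n can index moments in time or people."""
    return 1 - (1 - p) ** n

print(p_at_least_once(1e-8, 10_000_000_000))  # ~1.0, the first scenario
print(p_at_least_once(0.01, 500))             # ~0.993, the second scenario
```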

Could this situation be avoided? Maybe. For example, perhaps engineers could design future technologies with safety mechanisms that prevent accidents from causing widespread harm—although this may turn out to be more difficult than it seems. Or, as Ray Kurzweil suggests, we could build a high-tech nano-immune system to detect and destroy self-replicating nanobots released into the biosphere (a doomsday scenario known as “grey goo”).

Another possibility advocated by Ingmar Persson and Julian Savulescu entails making society just a little less “liberal” by trading personal privacy for global security. While many people may, at first glance, be resistant to this proposal—after all, privacy seems like a moral right of all humans—if the alternative is annihilation, then the trade-off might be worth making. Or perhaps we could adopt the notion of sousveillance, whereby citizens themselves monitor society through the use of wearable cameras and other apparatuses. In other words, the surveillees (those being watched) could use advanced technologies to surveil the surveillers (those doing the watching)—a kind of “inverse panopticon” to protect people from the misuse and abuse of state power.

While terror gets the majority of attention from scholars and the media, we should all be thinking more about the existential dangers inherent in the society-wide distribution of offensive capabilities involving advanced technologies. There’s a frighteningly good chance that future civilization will be more susceptible to error than terror.

(Parts of this are excerpted from my forthcoming book Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks.)

4 comments
[-]Elo

“0.000001”

There are numbers smaller and less base 10 than that...

Other than that, excellent piece. Error is a concern but I am not convinced how much.

Sousveillance might be a rather generic solution, but the right way to do it would be surveillance technologies that dump to the internet (rather than those silly unilateral necklaces).

In the past my biggest worry was that this kind of systematic sousveillance (especially in court rooms, corporate board rooms, and science labs, etc) would turn our institutions into a crappy reality TV show, possibly with poisonous effects on our democratic decision making systems.

I'm beginning to think that we will probably get the downsides of a reality-tv-show-like political system anyway, which makes my biggest personal argument against systematic sousveillance somewhat weaker than before.

Our community worries more about bioengineered pandemics than about regular terrorism (at least the topic topped our X-risk surveys), so "we" might not be the best audience for this point.

[-][anonymous]

I think this is a good point, and it does seem to be the case that oversalience of highly emotional topics can hijack our thinking about risks. Out of curiosity, I ran something similar to the Doomsday Button after reading Hofstadter's piece on nuclear war. The results are pretty staggering, and I like the probability theory mention.

Aside from that, I don't have any immediate criticisms. I think this is important to consider, and it may not be something that policy makers currently think about.