This is a "basics" article, intended for introducing people to the concept of existential risk.
On September 26, 1983, Soviet officer Stanislav Petrov saved the world.
Three weeks earlier, Soviet interceptors had shot down a commercial jet they believed was on a spy mission, killing all 269 people aboard, including sitting U.S. congressman Lawrence McDonald. President Reagan, who had branded the Soviet Union an “evil empire” just months before, condemned the attack. It was one of the most intense periods of the Cold War.
Just after midnight on September 26, Petrov sat in a secret bunker, monitoring early warning systems. He did this only twice a month, and it wasn’t his usual shift; he was filling in for the shift crew leader.
One after another, five missiles from the USA appeared on the screen. A siren wailed, and the words “РАКЕТНОЕ НАПАДЕНИЕ” (“Missile Attack”) appeared in red letters. Petrov checked with his crew, who reported that all systems were operating properly. The missiles would reach their targets in the Soviet Union in mere minutes.
Protocol dictated that he press the flashing red button before him to inform his superiors of the attack so they could decide whether to launch a nuclear counterattack. More than 100 crew members stood in silence behind him, awaiting his decision.
"I thought for about a minute," Petrov recalled. "I thought I’d go crazy... It was as if I was sitting on a bed of hot coals."
Petrov broke protocol and went with his gut. He refused to believe what the early warning system was telling him.
His gut was right. Soviet satellites had mistaken sunlight reflecting off high-altitude clouds for missile launches. The Soviet Union was not under attack.
If Petrov had pressed the red button, and his superiors had launched a counterattack, the USA would have detected the incoming Soviet missiles and launched its own missiles before they could be destroyed on the ground. Soviet and American missiles would have passed in the night sky over the still, silent Arctic before detonating over hundreds of targets — each detonation more destructive than all the bombs dropped in World War II combined, including the atomic bombs that leveled Hiroshima and Nagasaki. Most of the Northern Hemisphere would have been destroyed.
Petrov was reprimanded and offered early retirement. To pay his bills, he took jobs as a taxi driver and a security guard. The biggest award he ever received for saving the world was a "World Citizen Award" and $1000 from a small organization based in San Francisco. He spent half the award on a new vacuum cleaner.
During his talk at Singularity Summit 2011 in New York City, programmer Jaan Tallinn drew an important lesson from the story of Stanislav Petrov:
Contrary to our intuition that society is more powerful than any individual or group, it was not society that wrote history on that day... It was Petrov.
...Our future is increasingly determined by individuals and small groups wielding powerful technologies. And society is quite incompetent when it comes to predicting and handling the consequences.
Tallinn knows a thing or two about powerful technologies with global impact. Kazaa, the file-sharing program he co-developed, was once responsible for half of all Internet traffic. He went on to co-develop the internet calling program Skype, which in 2010 accounted for 13% of all international calls.
Where could he go from there? After reading dozens of articles about the cognitive science of rationality, Tallinn realized:
In order to maximize your impact in the world, you should behave as a prudent investor. You should look for underappreciated [concerns] with huge potential.
Tallinn found the biggest pool of underappreciated concerns in the domain of “existential risks”: things that might go horribly wrong and wipe out our entire species, like nuclear war.
The documentary Countdown to Zero shows how serious the nuclear threat is. At least 8 nations have their own nuclear weapons, and the USA stations nuclear weapons in 5 other countries under NATO sharing arrangements. There are enough nuclear weapons around to destroy the world several times over, and the risk of a mistake remains even after the end of the Cold War. In 1995, Russian president Boris Yeltsin had the “nuclear suitcase” — capable of launching a barrage of nuclear missiles — open in front of him. Russian radar had mistaken a Norwegian research rocket for a US submarine-launched ballistic missile. Like Petrov before him, Yeltsin disbelieved his equipment and refused to order a launch. Next time we might not be so lucky.
But it’s not just nuclear risks we have to worry about. As Sun Microsystems co-founder Bill Joy warned in his much-discussed article “Why the Future Doesn’t Need Us,” emerging technologies like synthetic biology, nanotechnology, and artificial intelligence may quickly become even more powerful than nuclear bombs, and even greater threats to the human species. Perhaps the International Union for Conservation of Nature will need to reclassify Homo sapiens as an endangered species.
Academics are beginning to accept that humanity lives on a knife’s edge. Cosmologist Martin Rees and philosopher John Leslie have written books about existential risk, titled Our Final Hour: A Scientist’s Warning and The End of the World: The Science and Ethics of Human Extinction. In 2008, Oxford University Press published Global Catastrophic Risks, a volume in which experts summarize what we know about a variety of existential risks. New research institutes have been formed to investigate the subject, including the Singularity Institute in San Francisco and the Future of Humanity Institute at Oxford University.
Governments, too, are taking notice. In the USA, NASA was given a congressional mandate to catalogue all near-Earth objects one kilometer or more in diameter, because an impact with such a large object would be catastrophic. President Clinton launched the National Nanotechnology Initiative, which includes work on the safe development of molecule-sized materials and machines. (Self-replicating molecular machines could, in principle, multiply out of control, consuming resources required for human survival.) Many nations are working to reduce nuclear armaments, which pose the risk of human extinction by global nuclear war.
The public, however, remains mostly unaware of these risks. Existential risk is an unpleasant and scary topic, and it may sound too distant or complicated for the mainstream media to cover. For now, discussion of existential risk remains largely confined to academia and a few government agencies.
Concern about existential risk may appeal to one other group: analytically minded “social entrepreneurs” who want to have a positive impact on the world and are accustomed to making decisions by calculation. Tallinn fits this description, as does PayPal co-founder Peter Thiel. These two are among the largest donors to the Singularity Institute, an organization focused on reducing existential risks from artificial intelligence.
What is it about the topic of existential risk that appeals to people who act by calculation? The analytic case for doing good by reducing existential risk was laid out decades ago by moral philosopher Derek Parfit:
The Earth will remain habitable for at least another billion years. Civilization began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history.
...Classical Utilitarians... would claim... that the destruction of mankind would be by far the greatest of all conceivable crimes. The badness of this crime would lie in the vast reduction of the possible sum of happiness...
For [others] what matters are... the Sciences, the Arts, and moral progress... The destruction of mankind would prevent further achievements of these three kinds.
Our technology gives us great power. If we can avoid using this power to destroy ourselves, then we can use it to spread throughout the galaxy and create structures and experiences of value on an unprecedented scale.
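To make the appeal to calculation concrete, here is a rough, illustrative expected-value sketch in the spirit of Parfit’s argument (the 10^16 figure is an assumption borrowed from later discussions of this argument, not a number from this article): suppose Earth alone could support on the order of 10^16 future human lives. Then shaving even one millionth of one percentage point off the probability of extinction is, in expectation, worth an enormous number of lives:

$$
\underbrace{10^{16}}_{\text{assumed potential future lives}} \times \underbrace{10^{-8}}_{\text{reduction in extinction probability}} = 10^{8} \text{ lives saved in expectation.}
$$

Smaller estimates change the numbers, but not the basic shape of the argument.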
Reducing existential risk — that is, carefully and thoughtfully preparing to not kill ourselves — may be the greatest moral imperative we have.