
A model of AI development

18 lukeprog 28 November 2013 01:48PM

FHI has released a new tech report:

Armstrong, Bostrom, and Shulman. Racing to the Precipice: a Model of Artificial Intelligence Development.

Abstract:

This paper presents a simple model of an AI arms race, where several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivized to finish first — by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, where each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI-disaster, especially if risk taking is more important than skill in developing the AI. Surprisingly, information also increases the risks: the more teams know about each others’ capabilities (and about their own), the more the danger increases.
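To get a feel for the selection effect behind the last two claims, here is a toy Monte Carlo sketch of my own (not the paper's model: the paper derives each team's safety choice as a Nash equilibrium, whereas here skimping levels are simply drawn at random):

```python
import random

def average_winner_skimping(num_teams, risk_weight, trials=20_000, seed=0):
    """Toy Monte Carlo: how much safety did the winning team skip?

    Each team draws a random skill and a random 'skimping' level in [0, 1]
    (rather than choosing it strategically, as in the paper's equilibrium).
    Skimping speeds a team up by `risk_weight`, the fastest team wins, and
    the winner's skimping level stands in for the danger of the outcome.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        teams = [(rng.random(), rng.random()) for _ in range(num_teams)]
        _skill, skimping = max(teams, key=lambda t: t[0] + risk_weight * t[1])
        total += skimping
    return total / trials

for n in (2, 5, 10):
    for w in (0.2, 1.0, 5.0):
        print(f"teams={n:2d}  risk_weight={w:3.1f}  "
              f"winner's average skimping={average_winner_skimping(n, w):.2f}")
```

When risk_weight is large (risk taking matters more than skill) and there are more teams, the winner tends to be whoever skimped the most, which is the qualitative flavor of the paper's result.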

The paper is short and readable; discuss it here!

But my main reason for posting is to ask this question: What is the most similar work that you know of? I'd expect people to do this kind of thing for modeling nuclear security risks, and maybe other things, but I don't happen to know of other analyses like this.

Gelman Against Parsimony

5 lukeprog 24 November 2013 03:23PM

In two posts, Bayesian stats guru Andrew Gelman argues against parsimony, even though it seems to be favored 'round these parts, in particular in the form of Solomonoff induction and BIC as imperfect formalizations of Occam's Razor.
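For reference, BIC makes the parsimony trade-off explicit: for a model with $k$ free parameters, maximized likelihood $\hat{L}$, and $n$ observations, the criterion (lower is better) is

```latex
\mathrm{BIC} \;=\; k \ln n \;-\; 2 \ln \hat{L}
```

so each extra parameter has to raise the log-likelihood by at least $\tfrac{1}{2}\ln n$ to pay for itself.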

Gelman says:

I’ve never seen any good general justification for parsimony...

Maybe it’s because I work in social science, but my feeling is: if you can approximate reality with just a few parameters, fine. If you can use more parameters to fold in more information, that’s even better.

In practice, I often use simple models–because they are less effort to fit and, especially, to understand. But I don’t kid myself that they’re better than more complicated efforts!

My favorite quote on this comes from Radford Neal‘s book, Bayesian Learning for Neural Networks, pp. 103-104: "Sometimes a simple model will outperform a more complex model . . . Nevertheless, I believe that deliberately limiting the complexity of the model is not fruitful when the problem is evidently complex. Instead, if a simple model is found that outperforms some particular complex model, the appropriate response is to define a different complex model that captures whatever aspect of the problem led to the simple model performing well."

...

...ideas like minimum-description-length, parsimony, and Akaike’s information criterion, are particularly relevant when models are estimated using least squares, maximum likelihood, or some other similar optimization method.

When using hierarchical models, we can avoid overfitting and get good descriptions without using parsimony–the idea is that the many parameters of the model are themselves modeled. See here for some discussion of Radford Neal’s ideas in favor of complex models, and see here for an example from my own applied research.

From Philosophy to Math to Engineering

16 lukeprog 04 November 2013 03:43PM

Cross-posted from the MIRI blog.

For centuries, philosophers wondered how we could learn what causes what. Some argued it was impossible, or possible only via experiment. Others kept hacking away at the problem, clarifying ideas like counterfactual and probability and correlation by making them more precise and coherent.

Then, in the 1990s, a breakthrough: Judea Pearl and others showed that, in principle, we can sometimes infer causal relations from data even without experiment, via the mathematical machinery of probabilistic graphical models.

Next, engineers used this mathematical insight to write software that can, in seconds, infer causal relations from a data set of observations.
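As a toy illustration of the kind of inference such software performs (a hand-rolled sketch, not Pearl's actual algorithms or any particular package), consider the "collider" X → Z ← Y. X and Y are marginally independent but become dependent once you condition on Z, and that asymmetry is exactly the sort of observational signature that lets the arrows be oriented without an experiment:

```python
import numpy as np

# Generate data from the collider X -> Z <- Y: X and Y are independent causes of Z.
rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(size=n)
y = rng.normal(size=n)
z = x + y + 0.5 * rng.normal(size=n)

def partial_corr(a, b, given):
    """Correlation between a and b after linearly regressing out `given`."""
    a_res = a - np.polyval(np.polyfit(given, a, 1), given)
    b_res = b - np.polyval(np.polyfit(given, b, 1), given)
    return np.corrcoef(a_res, b_res)[0, 1]

print("corr(X, Y)     =", round(float(np.corrcoef(x, y)[0, 1]), 3))  # ~ 0: marginally independent
print("corr(X, Y | Z) =", round(float(partial_corr(x, y, z)), 3))    # ~ -0.8: dependent given Z
```

Real causal discovery algorithms systematize independence tests like these across all variables and then read off which graphs are consistent with the results.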

Across the centuries, researchers had toiled away, pushing our understanding of causality from philosophy to math to engineering.


And so it is with Friendly AI research. Current progress on each sub-problem of Friendly AI lies somewhere on a spectrum from philosophy to math to engineering.

We began with some fuzzy philosophical ideas of what we want from a Friendly AI (FAI). We want it to be benevolent and powerful enough to eliminate suffering, protect us from natural catastrophes, help us explore the universe, and otherwise make life awesome. We want FAI to allow for moral progress, rather than immediately reshape the galaxy according to whatever our current values happen to be. We want FAI to remain beneficent even as it rewrites its core algorithms to become smarter and smarter. And so on.

Small pieces of this philosophical puzzle have been broken off and turned into math, e.g. Pearlian causal analysis and Solomonoff induction. Pearl's math has since been used to produce causal inference software that can be run on today's computers, whereas engineers have thus far succeeded in implementing (tractable approximations of) Solomonoff induction only for very limited applications.

Toy versions of two pieces of the "stable self-modification" problem were transformed into math problems in de Blanc (2011) and Yudkowsky & Herreshoff (2013), though this was done to enable further insight via formal analysis, not to assert that these small pieces of the philosophical problem had been solved to the level of math.

Thanks to Patrick LaVictoire and other MIRI workshop participants,1 Douglas Hofstadter's FAI-relevant philosophical idea of "superrationality" seems to have been, for the most part, successfully transformed into math, and a bit of the engineering work has also been done.
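For flavor, here is a minimal sketch of the older program-equilibrium idea in the spirit of Tennenholtz's work: an agent that reads its opponent's source code and cooperates only with exact copies of itself. The function names and the whole construction are purely illustrative; the workshop results use Löbian provability rather than this brittle source-equality check.

```python
import inspect  # note: getsource requires the functions to live in a source file

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent's source is an exact copy of this function."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    """Always defect, regardless of the opponent."""
    return "D"

def play(agent_a, agent_b):
    """One-shot Prisoner's Dilemma in which each program reads the other's source."""
    return (agent_a(inspect.getsource(agent_b)),
            agent_b(inspect.getsource(agent_a)))

print(play(clique_bot, clique_bot))  # ('C', 'C'): copies recognize each other and cooperate
print(play(clique_bot, defect_bot))  # ('D', 'D'): defectors are not exploited
```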

I say "seems" because, while humans are fairly skilled at turning math into feats of practical engineering, we seem to be much less skilled at turning philosophy into math, without leaving anything out. For example, some very sophisticated thinkers have claimed that "Solomonoff induction solves the problem of inductive inference," or that "Solomonoff has successfully invented a perfect theory of induction." And indeed, it certainly seems like a truly universal induction procedure. However, it turns out that Solomonoff induction doesn't fully solve the problem of inductive inference, for relatively subtle reasons.2

Unfortunately, philosophical mistakes like this could be fatal when humanity builds the first self-improving AGI (Yudkowsky 2008).3 FAI-relevant philosophical work is, as Nick Bostrom says, "philosophy with a deadline."

 

 

1 And before them, Moshe Tennenholtz.

2 Yudkowsky plans to write more later about how to improve on Solomonoff induction.

3 This is a specific instance of a problem Peter Ludlow described like this: "the technological curve is pulling away from the philosophy curve very rapidly and is about to leave it completely behind."

The Inefficiency of Theoretical Discovery

19 lukeprog 03 November 2013 09:26PM

Previously: Why Neglect Big Topics.

Why was there no serious philosophical discussion of normative uncertainty until 1989, given that all the necessary ideas and tools were present at the time of Jeremy Bentham?

Why did no professional philosopher analyze I.J. Good’s important “intelligence explosion” thesis (from 1959¹) until 2010?

Why was reflectively consistent probabilistic metamathematics not described until 2013, given that the ideas it builds on go back at least to the 1940s?

Why did it take until 2003 for professional philosophers to begin updating causal decision theory for the age of causal Bayes nets, and until 2013 to formulate a reliabilist metatheory of rationality?

By analogy to financial market efficiency, I like to say that “theoretical discovery is fairly inefficient.” That is: there are often large, unnecessary delays in theoretical discovery.

This shouldn’t surprise us. For one thing, there aren’t necessarily large personal rewards for making theoretical progress. But it does mean that those who do care about certain kinds of theoretical progress shouldn’t necessarily think that progress will be hard. There is often low-hanging fruit to be plucked by investigators who know where to look.

Where should we look for low-hanging fruit? I’d guess that theoretical progress may be relatively easy where:

  1. Progress has no obvious, immediately profitable applications.
  2. Relatively few quality-adjusted researcher hours have been devoted to the problem.
  3. New tools or theoretical advances open up promising new angles of attack.
  4. Progress is only valuable to those with unusual views.

These guesses make sense of the abundant low-hanging fruit in much of MIRI’s theoretical research, with the glaring exception of decision theory. Our September decision theory workshop revealed plenty of low-hanging fruit, but why should that be? Decision theory is widely applied in multi-agent systems, and in philosophy it’s clear that visible progress in decision theory is one way to “make a name” for oneself and advance one’s career. Tons of quality-adjusted researcher hours have been devoted to the problem. Yes, new theoretical advances (e.g. causal Bayes nets and program equilibrium) open up promising new angles of attack, but they don’t seem necessary to much of the low-hanging fruit discovered thus far. And progress in decision theory is definitely not valuable only to those with unusual views. What gives?

Anyway, three questions:

  1. Do you agree about the relative inefficiency of theoretical discovery?
  2. What are some other signs of likely low-hanging fruit for theoretical progress?
  3. What’s up with decision theory having so much low-hanging fruit?

1 Good (1959) is the earliest statement of the intelligence explosion: “Once a machine is designed that is good enough… it can be put to work designing an even better machine. At this point an ‘explosion’ will clearly occur; all the problems of science and technology will be handed over to machines and it will no longer be necessary for people to work. Whether this will lead to a Utopia or to the extermination of the human race will depend on how the problem is handled by the machines. The important thing will be to give them the aim of serving human beings.” The term itself, “intelligence explosion,” originates with Good (1965). Technically, artist and philosopher Stefan Themerson wrote a "philosophical analysis" of Good's intelligence explosion thesis called Special Branch, published in 1972, but by "philosophical analysis" I have in mind a more analytic, argumentative kind of philosophical analysis than is found in Themerson's literary Special Branch.

Intelligence Amplification and Friendly AI

14 lukeprog 27 September 2013 01:09AM

Part of the series AI Risk and Opportunity: A Strategic Analysis. Previous articles on this topic: Some Thoughts on Singularity Strategies, Intelligence enhancement as existential risk mitigation, Outline of possible Singularity scenarios that are not completely disastrous.

Below are my quickly-sketched thoughts on intelligence amplification and FAI, without much effort put into organization or clarity, and without many references.[1] But first, I briefly review some strategies for increasing the odds of FAI, one of which is to work on intelligence amplification (IA).

continue reading »

AI ebook cover design brainstorming

3 lukeprog 26 September 2013 11:49PM

Thanks to everyone who brainstormed possible titles for MIRI’s upcoming ebook on machine intelligence. Our leading contender for the book title is Smarter than Us: The Rise of Machine Intelligence.

What we need now are suggestions for a book cover design. AI is hard to depict without falling back on cliches, such as a brain image mixed with computer circuitry, a humanoid robot, HAL, an imitation of Creation of Adam with human and robot fingers touching, or an imitation of March of Progress with an AI at the far right.

A few ideas/examples:

  1. Something that conveys ‘AI’ in the middle (a computer screen? a server tower?) connected by arrows/wires/something to various ‘skills/actions/influences’, like giving a speech, flying unmanned spacecraft, doing science, predicting the stock market, etc., in an attempt to convey the diverse superpowers of a machine intelligence.

  2. A more minimalist text-only cover.

  3. A fairly minimal cover with just an ominous-looking server rack in the middle, a few blinking lights, and darkness all around it. A bit like this cover.

  4. Similar to the above, except a server farm along the bottom fading into the background, with a frame composition similar to this.

  5. A darkened, machine-gunned room with a laptop sitting alone on a desk, displaying the text of the title on the screen. (This is the scene from the first chapter, about a Terminator who encounters an unthreatening-looking laptop which ends up being way more powerful and dangerous than the Terminator because it is more intelligent.)

Alex Vermeer sketched the first four of these ideas:

Some general inspiration may be found here.

We think we want something kinda dramatic, rather than cartoony, but less epic and unbelievable than the Facing the Intelligence Explosion cover.

Thoughts?

Help us Optimize the Contents of the Sequences eBook

11 lukeprog 19 September 2013 04:31AM

MIRI's ongoing effort to publish the sequences as an eBook has given us the opportunity to update their contents and organization.

We're looking for suggested posts to reorder, add, or remove.

To help with this, here is a breakdown of the current planned contents of the eBook and any currently planned modifications. Following that is a list of the most popular links within the sequences to posts that are not included therein.

Now's a good time to suggest changes or improvements!

———

Map and Territory

Added …What's a Bias Again? because it's meant to immediately follow Why Truth, And….

Mysterious Answers to Mysterious Questions

No changes.

A Human's Guide to Words

No changes.

How to Actually Change Your Mind

Politics is the Mind-Killer

Removed The Robbers Cave Experiment because it already appears in Death Spirals and the Cult Attractor, and there in the original chronological order which flows better.

Death Spirals and the Cult Attractor

Removed The Litany Against Gurus because it already appears in Politics is the Mind-killer.

Seeing with Fresh Eyes

Removed Asch's Conformity Experiment and Lonely Dissent because they both appear at the end of Death Spirals. Removed The Genetic Fallacy because it's in the Metaethics sequence: that's where it falls chronologically and it fits better there with the surrounding posts.

Noticing Confusion

Removed this entire subsequence because it is entirely contained within Mysterious Answers to Mysterious Questions.

Against Rationalization

Added Pascal's Mugging (before Torture vs Dust Specks) because it explains the 3^^^3 notation. Added Torture vs Dust Specks before A Case Study of Motivated Continuation because A Case Study refers to it frequently.

Against Doublethink

No changes.

Overly Convenient Excuses

Removed How to Convince Me that 2+2=3 because it's already in Map & Territory.

Letting Go

No change.

The Simple Math of Evolution

Added Evolutionary Psychology because it fits nicely at the end and it's referred to by other posts many times.

Challenging the Difficult

No change.

Yudkowsky's Coming of Age

No change.

Reductionism

No change. (Includes the Zombies subsequence.)

Quantum Physics

No change. Doesn't include any "Preliminaries" posts, since they'd all be duplicates.

Metaethics

No change.

Fun Theory

No change.

The Craft and the Community

No change.

Appendix

Includes:

———

Here are the most-frequently-referenced links within the sequences to posts outside of the sequences (with a count of three or more). This may help you notice posts that you think should be included in the sequences eBook.

Suggestions?

Help us name a short primer on AI risk!

7 lukeprog 17 September 2013 08:35PM

MIRI will soon publish a short book by Stuart Armstrong on the topic of AI risk. The book is currently titled “AI-Risk Primer” by default, but we’re looking for something a little more catchy (just as we did for the upcoming Sequences ebook).

The book is meant to be accessible and avoids technical jargon. Here is the table of contents and a few snippets from the book, to give you an idea of the content and style:

  1. Terminator versus the AI
  2. Strength versus Intelligence
  3. What Is Intelligence? Can We Achieve It Artificially?
  4. How Powerful Could AIs Become?
  5. Talking to an Alien Mind
  6. Our Values Are Complex and Fragile
  7. What, Precisely, Do We Really (Really) Want?
  8. We Need to Get It All Exactly Right
  9. Listen to the Sound of Absent Experts
  10. A Summary
  11. That’s Where You Come In …

The Terminator is a creature from our primordial nightmares: tall, strong, aggressive, and nearly indestructible. We’re strongly primed to fear such a being—it resembles the lions, tigers, and bears that our ancestors so feared when they wandered alone on the savanna and tundra.

As a species, we humans haven’t achieved success through our natural armor plating, our claws, our razor-sharp teeth, or our poison-filled stingers. Though we have reasonably efficient bodies, it’s our brains that have made the difference. It’s through our social, cultural, and technological intelligence that we have raised ourselves to our current position.

Consider what would happen if an AI ever achieved the ability to function socially—to hold conversations with a reasonable facsimile of human fluency. For humans to increase their social skills, they need to go through painful trial and error processes, scrounge hints from more articulate individuals or from television, or try to hone their instincts by having dozens of conversations. An AI could go through a similar process, undeterred by social embarrassment, and with perfect memory. But it could also sift through vast databases of previous human conversations, analyze thousands of publications on human psychology, anticipate where conversations are leading many steps in advance, and always pick the right tone and pace to respond with. Imagine a human who, every time they opened their mouth, had spent a solid year to ponder and research whether their response was going to be maximally effective. That is what a social AI would be like.

So, title suggestions?

Help MIRI run its Oxford UK workshop in November

6 lukeprog 15 September 2013 03:13AM

This November 23-29, MIRI is running its first European research workshop, at Oxford University.

We need somebody familiar with Oxford UK to (1) help us locate and secure lodging for the workshop participants ahead of time, (2) order food for delivery during the workshop, and (3) generally handle on-the-ground logistics.

Apply here for the chance to:

  1. Work with, and hang out with, MIRI staff.
  2. Spend some time (during breaks) with the workshop participants.
  3. Help MIRI work towards its goals.

You can either volunteer to help us for free, or indicate how much you'd need to be paid per hour to take the job.

How well will policy-makers handle AGI? (initial findings)

15 lukeprog 12 September 2013 07:21AM

Cross-posted from MIRI's blog.

MIRI's mission is "to ensure that the creation of smarter-than-human intelligence has a positive impact." One policy-relevant question is: How well should we expect policy makers to handle the invention of AGI, and what does this imply about how much effort to put into AGI risk mitigation vs. other concerns?

To investigate these questions, we asked Jonah Sinick to examine how well policy-makers handled past events analogous in some ways to the future invention of AGI, and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as with our project on how well we can plan for future decades. The post below is a summary of findings from our full email exchange (.docx) so far.

As with our investigation of how well we can plan for future decades, we decided to publish our initial findings after investigating only a few historical cases. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that we aren't yet able to draw any confident conclusions about our core questions.

The most significant results from this project so far are:

  1. We came up with a preliminary list of 6 seemingly-important ways in which a historical case could be analogous to the future invention of AGI, and evaluated several historical cases on these criteria.
  2. Climate change risk seems sufficiently disanalogous to AI risk that studying climate change mitigation efforts probably gives limited insight into how well policy-makers will deal with AGI risk: the expected damage from climate change appears to be very small relative to the expected damage from AI risk, especially when one looks at expected damage to policy makers.
  3. The 2008 financial crisis appears, after a shallow investigation, to be sufficiently analogous to AGI risk that it should give us some small reason to be concerned that policy-makers will not manage the invention of AGI wisely.
  4. The risks to critical infrastructure from geomagnetic storms are far too small to be in the same reference class with risks from AGI.
  5. The eradication of smallpox is only somewhat analogous to the invention of AGI.
  6. Jonah performed very shallow investigations of how policy-makers have handled risks from cyberwarfare, chlorofluorocarbons, and the Cuban missile crisis, but these cases need more study before even "initial thoughts" can be given.
  7. We identified additional historical cases that could be investigated in the future.

Further details are given below. For sources and more, please see our full email exchange (.docx).


6 ways a historical case can be analogous to the invention of AGI

In conversation, Jonah and I identified six features of the future invention of AGI that, if largely shared by a historical case, seem likely to allow the historical case to shed light on how well policy-makers will deal with the invention of AGI:

  1. AGI may become a major threat in a somewhat unpredictable time.
  2. AGI may become a threat when the world has very limited experience with it.
  3. A good outcome with AGI may require solving a difficult global coordination problem.
  4. Preparing for the AGI threat adequately may require lots of careful work in advance.
  5. Policy-makers have strong personal incentives to solve the AGI problem.
  6. A bad outcome with AGI would be a global disaster, and a good outcome with AGI would have global humanitarian benefit.

More details on these criteria and their use are given in the second email of our full email exchange.  


Risks from climate change

People began to see climate change as a potential problem in the early 1970s, but there was some ambiguity as to whether human activity was causing warming (because of carbon emissions) or cooling (because of smog particles). The first IPCC report was issued in 1990, and stated that there was substantial anthropogenic global warming due to greenhouse gases. By 2001, there was a strong scientific consensus behind this claim. While policy-makers' response to risks from climate change might seem likely to shed light on whether policy-makers will deal wisely with AGI, there are some important disanalogies:

  • The harms of global warming are expected to fall disproportionately on disadvantaged people in poor countries, not on policy-makers. So policy-makers have much less personal incentive to solve the problem than is the case with AGI.
  • In the median case, humanitarian losses from global warming seem to be about 20% of GDP per year for the poorest people. In light of anticipated economic development and diminishing marginal utility, this is a much smaller negative humanitarian impact than AGI risk (even ignoring future generations). For example, economist Indur Goklany estimated that "through 2085, only 13% of [deaths] from hunger, malaria, and extreme weather events (including coastal flooding from sea level rise) should be from [global] warming."
  • Thus, potential analogies to AGI risk come from climate change's tail risk. But there seem to be few credentialed scientists who have views compatible with a prediction that even a temperature increase in the 95th percentile of the probability distribution (by 2100) would do more than just begin to render some regions of Earth uninhabitable.
  • According to the 5th IPCC, the risk of human extinction from climate change seems very low: "Some thresholds that all would consider dangerous have no support in the literature as having a non-negligible chance of occurring. For instance, a 'runaway greenhouse effect'—analogous to Venus—appears to have virtually no chance of being induced by anthropogenic activities."
 

The 2008 financial crisis

Jonah did a shallow investigation of the 2008 financial crisis, but the preliminary findings are interesting enough for us to describe them in some detail. Jonah's impressions about the relevance of the 2008 financial crisis to the AGI situation are based on a reading of After the Music Stopped by Alan Blinder, who was the vice chairman of the Federal Reserve for 1.5 years during the Clinton administration. Naturally, many additional sources should be consulted before drawing firm conclusions about the relevance of policy-makers' handling of the financial crisis to their likelihood of handling AGI wisely.

Blinder's seven main factors leading to the recession are (p. 27):

  1. Inflated asset prices, especially of houses (the housing bubble) but also of certain securities (the bond bubble);
  2. Excessive leverage (heavy borrowing) throughout the financial system and the economy;
  3. Lax financial regulation, both in terms of what the law left unregulated and how poorly the various regulators performed their duties;
  4. Disgraceful banking practices in subprime and other mortgage lending;
  5. The crazy-quilt of unregulated securities and derivatives that were built on these bad mortgages;
  6. The abysmal performance of the statistical rating agencies, which helped the crazy-quilt get stitched together; and
  7. The perverse compensation systems in many financial institutions that created powerful incentives to go for broke.

With these factors in mind, let's look at the strength of the analogy between the 2008 financial crisis and the future invention of AGI:

  1. Almost tautologically, a financial crisis is unexpected, though we do know that financial crises happen with some regularity.
  2. The 2008 financial crisis was not unprecedented in kind, only in degree (in some ways).
  3. Avoiding the 2008 financial crisis would have required solving a difficult national coordination problem, rather than a global coordination problem. Still, this analogy seems fairly strong. As Jonah writes, "While the 2008 financial crisis seems to have been largely US specific (while having broader ramifications), there's a sense in which preventing it would have required solving a difficult coordination problem. The causes of the crisis are diffuse, and responsibility falls on many distinct classes of actors."
  4. Jonah's analysis wasn't deep enough to discern whether the 2008 financial crisis is analogous to the future invention of AGI with regard to how much careful work would have been required in advance to avert the risk.
  5. In contrast with AI risk, the financial crisis wasn't a life or death matter for almost any of the actors involved. Many people in finance didn't have incentives to avert the financial crisis: indeed, some of the key figures involved were rewarded with large bonuses. But it's plausible that government decision makers had incentive to avert a financial crisis for reputational reasons, and many interest groups are adversely affected by financial crises.
  6. Once again, the scale of the financial crisis wasn't on a par with AI risk, but it was closer to that scale than the other risks Jonah looked at in this initial investigation.

Jonah concluded that "the conglomerate of poor decisions [leading up to] the 2008 financial crisis constitute a small but significant challenge to the view that [policy-makers] will successfully address AI risk." His reasons were:

  1. The magnitude of the financial crisis is nontrivial (even if small) compared with the magnitude of the AI risk problem (not counting future generations).
  2. The financial crisis adversely affected a very broad range of people, apparently including a large fraction of those people in positions of power (this seems truer here than in the case of climate change). A recession is bad for most businesses and for most workers. Yet these actors weren't able to recognize the problem, coordinate, and prevent it.
  3. The reasons that policy-makers weren't able to recognize the problem, coordinate, and prevent it seem related to reasons why people might not recognize AI risk as a problem, coordinate, and prevent it. First, several key actors involved seem to have exhibited conspicuous overconfidence and neglect of tail risk (e.g., Summers and others ignoring Brooksley Born's warnings about excessive leverage). If true, this shows that people in positions of power are notably susceptible to overconfidence and neglect of tail risk. Avoiding overconfidence and giving sufficient weight to tail risk may be crucial in mitigating AI risk. Second, one gets a sense that the bystander effect and the tragedy of the commons played a large role in the case of the financial crisis. There are risks that weren't adequately addressed because doing so didn't fall under the purview of any of the existing government agencies. This may have corresponded to a mentality of the type "that's not my job — somebody else can take care of it." If people think that AI risk is large, then they might think "if nobody's going to take care of it then I will, because otherwise I'm going to die." But if people think that AI risk is small, they might think "This probably won't be really bad for me, and even though someone should take care of it, it's not going to be me."
 

Risks from geomagnetic storms

Large geomagnetic storms like the 1859 Carrington Event are infrequent, but could cause serious damage to satellites and critical infrastructure. See this OECD report for an overview.

Jonah's investigation revealed a wide range in expected losses from geomagnetic storms, from $30 million per year to $30 billion per year. But even this larger number amounts to $1.5 trillion in expected losses over the next 50 years. Compare this with the losses from the 2008 financial crisis (roughly a once-in-50-years event), which are estimated to be about $13 trillion for Americans alone.
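Putting both figures on the same per-year footing makes the gap plain (simple arithmetic on the numbers above):

```latex
\$30\,\text{B/yr} \times 50\,\text{yr} = \$1.5\,\text{T}
\qquad\text{vs.}\qquad
\$13\,\text{T} \,/\, 50\,\text{yr} \approx \$260\,\text{B/yr}
```

so even the high-end geomagnetic estimate is nearly an order of magnitude below the annualized cost of a 2008-scale financial crisis.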

Though serious, the risks from geomagnetic storms appear to be small enough to be disanalogous to the future invention of AGI.  


The eradication of smallpox

Smallpox, after killing more than 500 million people over the past several millennia, was eradicated in 1979 after a decades-long global eradication effort. Though a hallmark of successful global coordination, it doesn't seem especially relevant to whether policy-makers will handle the invention of AGI wisely.

Here's how the eradication of smallpox does or doesn't fit our criteria for being analogous to the future invention of AGI:

  1. Smallpox didn't arrive at an unpredictable time; it arrived millennia before the eradication campaign.
  2. The world didn't have experience eradicating a disease before smallpox was eradicated, but a number of nations had eliminated smallpox.
  3. Smallpox eradication required solving a difficult global coordination problem, but in a way disanalogous to the invention of AGI safety (see the other points on this list).
  4. Preparing for smallpox eradication required effort in advance in some sense, but the effort had mostly already been exerted before the campaign was announced.
  5. Nations without smallpox had an incentive to eradicate it elsewhere so that they would no longer have to spend money immunizing their own citizens against the virus being (re)introduced. For example, in 1968, the United States spent about $100 million on routine smallpox vaccinations.
  6. Smallpox can be thought of as a global disaster: by 1966, about 2 million people died of smallpox each year.
 

Shallow investigations of risks from cyberwarfare, chlorofluorocarbons, and the Cuban missile crisis

Jonah's shallow investigation of risks from cyberwarfare revealed that experts disagree significantly about the nature and scope of these risks. It's likely that dozens of hours of research would be required to develop a well-informed model of these risks.

To investigate how policy-makers handled the discovery that chlorofluorocarbons (CFCs) depleted the ozone layer, Jonah summarized the first 100 pages of Ozone Crisis: The 15-Year Evolution of a Sudden Global Emergency (see our full email exchange for the summary). This historical case seems worth investigating further, and may be a case of policy-makers solving a global risk with surprising swiftness, though whether the response was appropriately prompt is debated.

Jonah also did a shallow investigation of the Cuban missile crisis. It's difficult to assess how likely it was for the crisis to escalate into a global nuclear war, but it appears that policy-makers made many poor decisions leading up to and during the Cuban missile crisis (see our full email exchange for a list). Jonah concludes:

even if the probability of the Cuban missile crisis leading to an all out nuclear war was only 1% or so, the risk was still sufficiently great so that the way in which the actors handled the situation is evidence against elites handling the creation of AI well. (This contrasts with the situation with climate change, in that elites had strong personal incentives to avert an all-out nuclear war.)

However, this is only a guess based on a shallow investigation, and should not be taken too seriously before a more thorough investigation of the historical facts can be made.  


Additional historical cases that could be investigated

We also identified additional historical cases that could be investigated for potentially informative analogies to the future invention of AGI:

  1. The 2003 Iraq War
  2. The frequency with which dictators are deposed or assassinated due to "unforced errors" they made
  3. Nuclear proliferation
  4. Recombinant DNA
  5. Molecular nanotechnology
  6. Near Earth objects
  7. Pandemics and potential pandemics (e.g., HIV, SARS)
