One open question in AI risk strategy is: Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of human-level AI (and beyond) just fine, without the kinds of special efforts that e.g. Bostrom and Yudkowsky think are needed?

Some reasons for concern include:

  • Otherwise smart people say unreasonable things about AI safety.
  • Many people who believed AI was around the corner didn't take safety very seriously.
  • Elites have failed to navigate many important issues wisely (2008 financial crisis, climate change, Iraq War, etc.), for a variety of reasons.
  • AI may arrive rather suddenly, leaving little time for preparation.

But if you were trying to argue for hope, you might argue along these lines (presented for the sake of argument; I don't actually endorse this argument):

  • If AI is preceded by visible signals, elites are likely to take safety measures. Effective measures were taken to address asteroid risk. Large resources are devoted to mitigating climate change risks. Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change. Availability of information is increasing over time.
  • AI is likely to be preceded by visible signals. Conceptual insights often take years of incremental tweaking. In vision, speech, games, compression, robotics, and other fields, performance curves are mostly smooth. "Human-level performance at X" benchmarks influence perceptions and should be more exhaustive and come more rapidly as AI approaches. Recursive self-improvement capabilities could be charted, and are likely to be AI-complete. If AI succeeds, it will likely succeed for reasons comprehensible by the AI researchers of the time.
  • Therefore, safety measures will likely be taken.
  • If safety measures are taken, then elites will navigate the creation of AI just fine. Corporate and government leaders can use simple heuristics (e.g. Nobel prizes) to access the upper end of expert opinion. AI designs whose tendencies to act are easily tailored may be the easiest to build. The use of early AIs to solve AI safety problems creates an attractor for "safe, powerful AI." Arms races are not insurmountable.

The basic structure of this 'argument for hope' is due to Carl Shulman, though he doesn't necessarily endorse the details. (Also, it's just a rough argument, and as stated is not deductively valid.)

Personally, I am not very comforted by this argument because:

  • Elites often fail to take effective action despite plenty of warning.
  • I think there's a >10% chance AI will not be preceded by visible signals.
  • I think the elites' safety measures will likely be insufficient.

Obviously, there's a lot more for me to spell out here, and some of it may be unclear. The reason I'm posting these thoughts in such a rough state is so that MIRI can get some help on our research into this question.

In particular, I'd like to know:

  • Which historical events are analogous to AI risk in some important ways? Possibilities include: nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars.
  • What are some good resources (e.g. books) for investigating the relevance of these analogies to AI risk (for the purposes of illuminating elites' likely response to AI risk)?
  • What are some good studies on elites' decision-making abilities in general?
  • Has the increasing availability of information in the past century noticeably improved elite decision-making?

RSI capabilities could be charted, and are likely to be AI-complete.

What does RSI stand for?

gwern

"recursive self improvement".

Okay, I've now spelled this out in the OP.

Lately I've been listening to audiobooks (at 2x speed) in my down time, especially ones that seem likely to have passages relevant to the question of how well policy-makers will deal with AGI, basically continuing this project but only doing the "collection" stage, not the "analysis" stage.

I'll post quotes from the audiobooks I listen to as replies to this comment.

7lukeprog
From Watts' Everything is Obvious:
7lukeprog
More (#1) from Everything is Obvious:
5lukeprog
More (#2) from Everything is Obvious:
3lukeprog
More (#4) from Everything is Obvious:
1lukeprog
More (#3) from Everything is Obvious:
6lukeprog
From Rhodes' Arsenals of Folly:
7lukeprog
More (#3) from Arsenals of Folly: And: And: And: And:
1Shmi
Amazing stuff. Was the world really as close to a nuclear war in 1983 as in 1962?
6lukeprog
More (#2) from Arsenals of Folly: And: And, a blockquote from the writings of Robert Gates:
5lukeprog
More (#1) from Arsenals of Folly: And: And: And:
4lukeprog
More (#4) from Arsenals of Folly:
5lukeprog
From Lewis' Flash Boys: So Spivey began digging the line, keeping it secret for 2 years. He didn't start trying to sell the line to banks and traders until a couple months before the line was complete. And then:
3lukeprog
More (#1) from Flash Boys: And: And:
5lukeprog
There was so much worth quoting from Better Angels of Our Nature that I couldn't keep up. I'll share a few quotes anyway.

More (#3) from Better Angels of Our Nature:

let’s have a look at political discourse, which most people believe has been getting dumb and dumber. There’s no such thing as the IQ of a speech, but Tetlock and other political psychologists have identified a variable called integrative complexity that captures a sense of intellectual balance, nuance, and sophistication. A passage that is low in integrative complexity stakes out an opinion and relentlessly hammers it home, without nuance or qualification. Its minimal complexity can be quantified by counting words like absolutely, always, certainly, definitively, entirely, forever, indisputable, irrefutable, undoubtedly, and unquestionably. A passage gets credit for some degree of integrative complexity if it shows a touch of subtlety with words like usually, almost, but, however, and maybe. It is rated higher if it acknowledges two points of view, higher still if it discusses connections, tradeoffs, or compromises between them, and highest of all if it explains these relationships by reference to a higher principle or system. The integrative complexity of a passage is not the same as the intelligence of the person who wrote it, but the

...
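The marker-word heuristic described in the passage above lends itself to a quick illustration. Below is a minimal, illustrative sketch in Python; it is not the validated, expert-rated measure the passage refers to, just the surface-level word count it mentions, and the per-100-words scoring is my own assumption.

```python
# Rough illustration of the marker-word heuristic for integrative complexity
# described above. This is NOT the validated expert-rated measure; it only
# counts the certainty vs. qualifier words the passage lists.

LOW_COMPLEXITY = {"absolutely", "always", "certainly", "definitively", "entirely",
                  "forever", "indisputable", "irrefutable", "undoubtedly",
                  "unquestionably"}
HIGH_COMPLEXITY = {"usually", "almost", "but", "however", "maybe"}

def complexity_score(text: str) -> float:
    """Return a crude score: qualifier words minus certainty words, per 100 words."""
    words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    if not words:
        return 0.0
    low = sum(w in LOW_COMPLEXITY for w in words)
    high = sum(w in HIGH_COMPLEXITY for w in words)
    return 100.0 * (high - low) / len(words)

if __name__ == "__main__":
    hammer = "This policy is absolutely and unquestionably right, forever."
    nuanced = "This policy is usually helpful, but it may fail in some cases."
    print(complexity_score(hammer))   # negative: stakes out a position relentlessly
    print(complexity_score(nuanced))  # positive: shows qualification and nuance
```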
2[anonymous]
Further reading on integrative complexity: Wikipedia, Psychlopedia, Google book.

Now that I've been introduced to the concept, I want to evaluate how useful it is to incorporate into my rhetorical repertoire and vocabulary, and to determine whether it can inform my beliefs about assessing the exfoliating intelligence of others (a term I'll coin to refer to that intelligence/knowledge which another can pass on to me to aid my vocabulary and verbal abstract reasoning - my neuropsychological strengths, which I try to max out just like an RPG character). At a less meta level, knowing the strengths and weaknesses of the trait will inform whether I choose to signal it or dampen it from here on, and in what situations. It is important for imitators to remember that whatever IC is associated with does not necessarily imply those associations to lay others.

Strengths:
  • conflict resolution (see Luke's post)
As listed in Psychlopedia:
  • appreciation of complexity
  • scientific proficiency
  • stress accommodation
  • resistance to persuasion
  • prediction ability
  • social responsibility
  • more initiative, as rated by managers, and more motivation to seek power, as gauged by a projective test

Weaknesses:
Based on Psychlopedia:
  • low scores on compliance and conscientiousness
  • seem antagonistic and even narcissistic
Based on the wiki article:
  • dependence (more likely to defer to others)
  • rational expectations (more likely to fallaciously assume they are dealing with rational agents)

Upon reflection, here are my conclusions:
  • high integrative complexity dominates low integrative complexity for those who have insight into the concept, are self-aware of how it relates to them and others, and have the capacity to use the skill and hide it.
  • the questions eliciting the expert-rated answers that define the concept of IC are very crude, and there ought to be a validated tool devised, if that is an achievable feat (cognitive complexity or time esti
9lukeprog
More (#4) from Better Angels of Our Nature:
0[anonymous]
Untrue unless you're in a non-sequential game
True under a utilitarian framework and with a few common mind-theoretic assumptions derived from intuitions stemming from most people's empathy
Woo
3lukeprog
More (#2) from Better Angels of Our Nature:
3lukeprog
More (#1) from Better Angels of Our Nature:
4lukeprog
From Ariely's The Honest Truth about Dishonesty:
1lukeprog
More (#1) from Ariely's The Honest Truth about Dishonesty: And:
0lukeprog
More (#2) from Ariely's The Honest Truth about Dishonesty: And:
4lukeprog
From Feynman's Surely You're Joking, Mr. Feynman:
4lukeprog
More (#1) from Surely You're Joking, Mr. Feynman: And: And:
4lukeprog
One quote from Taleb's AntiFragile is here, and here's another:
2lukeprog
AntiFragile makes lots of interesting points, but it's clear in some cases that Taleb is running roughshod over the truth in order to support his preferred view. I've italicized the particularly lame part:
3lukeprog
From Think Like a Freak:
3lukeprog
More (#1) from Think Like a Freak: And:
3lukeprog
From Rhodes' Twilight of the Bombs:
1lukeprog
More (#1) from Twilight of the Bombs: And: And: And: And:
3lukeprog
From Harford's The Undercover Economist Strikes Back: And:
1lukeprog
More (#2) from The Undercover Economist Strikes Back: And: And: And:
0lukeprog
More (#1) from The Undercover Economist Strikes Back: And:
3lukeprog
From Caplan's The Myth of the Rational Voter:
3lukeprog
More (#2) from The Myth of the Rational Voter:
1Prismattic
This is an absurdly narrow definition of self-interest. Many people who are not old have parents who are senior citizens. Men have wives, sisters, and daughters whose well-being is important to them. Etc. Self-interest != solipsistic egoism.
3lukeprog
More (#1) from The Myth of the Rational Voter: And:
2lukeprog
More (#3) from The Myth of the Rational Voter:
1Prismattic
Allow me to offer an alternative explanation of this phenomenon for consideration. Typically, when polled about their trust in institutions, people tend to trust the executive branch more than the legislature or the courts, and they trust the military far more than they trust civilian government agencies. In the period before 9/11, our long national nightmare of peace and prosperity would generally have made the military less salient in people's minds, and the spectacles of impeachment and Bush v. Gore would have made the legislative and judicial branches more salient in people's minds. After 9/11, the legislative agenda quieted down/the legislature temporarily took a back seat to the executive, and military and national security organs became very high salience. So when people were asked about the government, the most immediate associations would have been to the parts that were viewed as more trustworthy.
3lukeprog
From Richard Rhodes' The Making of the Atomic Bomb:
5lukeprog
More (#2) from The Making of the Atomic Bomb: After Alexander Sachs paraphrased the Einstein-Szilard letter to Roosevelt, Roosevelt demanded action, and Edwin Watson set up a meeting with representatives from the Bureau of Standards, the Army, and the Navy... When asked for some money to conduct the relevant experiments, the Army representative launched into a tirade:
3lukeprog
More (#3) from The Making of the Atomic Bomb: Frisch and Peierls wrote a two-part report of their findings:
2lukeprog
More (#1) from The Making of the Atomic Bomb: On the origins of the Einstein–Szilárd letter: And:
0lukeprog
More (#5) from The Making of the Atomic Bomb:
0lukeprog
More (#4) from The Making of the Atomic Bomb: And: And: And: And:
2lukeprog
From Poor Economics:
2lukeprog
From The Visioneers: And: And: And:
2lukeprog
From Priest & Arkin's Top Secret America:
3lukeprog
More (#2) from Top Secret America: And, on JSOC: And: And:
2Shmi
I wonder if the security-industrial complex bureaucracy is any better in other countries.
0Lumifer
Which sense of "better" do you have in mind? :-)
0Shmi
More efficient.
0Lumifer
KGB had a certain aura, though I don't know if its descendants have the same cachet. Israeli security is supposed to be very good.
0lukeprog
Stay tuned; The Secret History of MI6 and Defend the Realm are in my audiobook queue. :)
0lukeprog
More (#1) from Top Secret America: And:
2lukeprog
From Pentland's Social Physics:
2lukeprog
More (#2) from Social Physics: And:
2lukeprog
More (#1) from Social Physics: And: And:
2lukeprog
From de Mesquita and Smith's The Dictator's Handbook:
1lukeprog
More (#2) from The Dictator's Handbook: And:
1lukeprog
More (#1) from The Dictator's Handbook:
2lukeprog
From Ferguson's The Ascent of Money:
2lukeprog
More (#1) from The Ascent of Money: And:
1gwern
The Medici Bank is pretty interesting. A while ago I wrote https://en.wikipedia.org/wiki/Medici_Bank on the topic; LWers might find it interesting how international finance worked back then.
2lukeprog
From Scahill's Dirty Wars:
1lukeprog
More (#2) from Dirty Wars: And: And:
1lukeprog
More (#1) from Dirty Wars: And: And: And:
0[anonymous]
Foreign fighters show up everywhere. And now there's the whole Islamic State issue. Perhaps all the world needs is more foreign legions doing good things. The FFL is overrecruited after all. Heck, we could even deal with the refugee crisis by offering visas to those mercenaries. Sure as hell would be more popular than selling visas and citizenship, because people always get antsy about inequality and having fewer downward social comparisons.
2lukeprog
Passage from Patterson's Dark Pools: The Rise of the Machine Traders and the Rigging of the U.S. Stock Market: But it proved all too easy: The very first tape Wang played revealed two dealers fixing prices.
2lukeprog
Some relevant quotes from Schlosser's Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety: And:
3lukeprog
More from Command and Control: And:
1lukeprog
More (#3) from Command and Control: And: And: And:
1lukeprog
More (#2) from Command and Control: And: And:
0lukeprog
More (#4) from Command and Control: And:
2Shmi
Do you keep a list of the audiobooks you liked anywhere? I'd love to take a peek.

Okay. In this comment I'll keep an updated list of audiobooks I've heard since Sept. 2013, for those who are interested. All audiobooks are available via iTunes/Audible unless otherwise noted.

Outstanding:

Worthwhile if you care about the subject matter:

  • Singer, Wired for War (my clips)
  • Feinstein, The Shadow World (my clips)
  • Venter, Life at the Speed of Light (my clips)
  • Rhodes, Arsenals of Folly (my clips)
  • Weiner, Enemies: A History of the FBI (my clips)
  • Rhodes, The Making of the Atomic Bomb (available here) (my clips)
  • Gleick, Chaos (my clips)
  • Wiener, Legacy of Ashes: The History of the CIA (my clips)
  • Freese, Coal: A Human History (my clips)
  • Aid, The Secret Sentry (my clips)
  • Scahill, Dirty Wars (my clips)
  • Patterson, Dark Pools (my clips)
  • Lieberman, The Story of the Human Body
  • Pentland, Social Physics (my clips)
  • Okasha, Philosophy of Science: VSI
  • Mazzetti, The Way of the Knife
...

A process for turning ebooks into audiobooks for personal use, at least on Mac:

  1. Rip the Kindle ebook to non-DRMed .epub with Calibre and Apprentice Alf.
  2. Open the .epub in Sigil, merge all the contained HTML files into a single HTML file (select the files, right-click, Merge). Open the Source view for the big HTML file.
  3. Edit the source so that the ebook begins with the title and author, then jumps right into the foreword or preface or first chapter, and ends with the end of the last chapter or epilogue. (Cut out any table of contents, list of figures, list of tables, appendices, index, bibliography, and endnotes.)
  4. Remove footnotes if easy to do so, using Sigil's Regex find-and-replace (remember to use Minimal Match so you don't delete too much!). Click through several instances of the Find command to make sure it's going to properly cut out only the footnotes, before you click "Replace All."
  5. Use find-and-replace to add [[slnc_1000]] at the end of every paragraph (see the sketch after this list); Mac's text-to-speech engine interprets this as a slight pause, which aids comprehension when I'm listening to the audiobook. Usually this ...
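For anyone who wants to script steps 4-5 and the final conversion, here is a minimal sketch of how it might look in Python on a Mac, assuming the merged chapter text has already been exported from Sigil to a plain-text file. The file names are hypothetical, the pause tag is the one mentioned above, and the audio rendering relies on macOS's built-in say command.

```python
# Sketch of step 5 plus rendering to audio on macOS. "book.txt" and
# "book.aiff" are hypothetical file names; the pause tag is the one from
# the comment above, and `say` is macOS's built-in text-to-speech command.
import re
import subprocess

def add_pauses(text: str) -> str:
    """Append the pause tag to the end of every blank-line-separated paragraph."""
    paragraphs = re.split(r"\n\s*\n", text)
    return "\n\n".join(p.rstrip() + " [[slnc_1000]]" for p in paragraphs if p.strip())

with open("book.txt") as f:
    marked_up = add_pauses(f.read())

with open("book_tts.txt", "w") as f:
    f.write(marked_up)

# Render to an audio file with the system voice; play the result at whatever
# speed your audiobook player supports.
subprocess.run(["say", "-f", "book_tts.txt", "-o", "book.aiff"], check=True)
```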
2Dr_Manhattan
VoiceDream for iPhone does a very fine job of text-to-speech; it also syncs your Pocket bookmarks and can read epub files.
5lukeprog
Other:
  • Roose, Young Money. Too focused on a few individuals for my taste, but still has some interesting content. (my clips)
  • Hofstadter & Sander, Surfaces and Essences. Probably a fine book, but I was only interested enough to read the first and last chapters.
  • Taleb, AntiFragile. Learned some from it, but it's kinda wrong much of the time. (my clips)
  • Acemoglu & Robinson, Why Nations Fail. Lots of handy examples, but too much of "our simple theory explains everything." (my clips)
  • Byrne, The Many Worlds of Hugh Everett III (available here). Gave up on it; too much theory, not enough story. (my clips)
  • Drexler, Radical Abundance. Gave up on it; too sanitized and basic.
  • Mukherjee, The Emperor of All Maladies. Gave up on it; too slow in pace and flowery in language for me.
  • Fukuyama, The Origins of Political Order. Gave up on it; the author is more keen on name-dropping theorists than on tracking down data.
  • Friedman, The Moral Consequences of Economic Growth (available here). Gave up on it. There are some actual data in chs. 5-7, but the argument is too weak and unclear for my taste.
  • Tuchman, The Proud Tower. Gave up on it after a couple chapters. Nothing wrong with it, it just wasn't dense enough in the kind of learning I'm trying to do.
  • Foer, Eating Animals. I listened to this not to learn, but to shift my emotions. But it was too slow-moving, so I didn't finish it.
  • Caro, The Power Broker. This might end up under "outstanding" if I ever finish it. For now, I've put this one on hold because it's very long and not as highly targeted at the useful learning I want to be doing right now as some other books.
  • Rutherfurd, Sarum. This is the furthest I've gotten into any fiction book for the past 5 years at least, including HPMoR. I think it's giving my system 1 an education into what life was like in the historical eras it covers, without getting bogged down in deep characterization, complex plotting, or ornate environmental description.
2Shmi
Thanks! Your first 3 are not my cup of tea, but I'll keep looking through the top 1000 list. For now, I am listening to MaddAddam, the last part of Margaret Atwood's post-apocalyptic fantasy trilogy, which qrnyf jvgu bar zna qvfnccbvagrq jvgu uvf pbagrzcbenel fbpvrgl ervairagvat naq ercbchyngvat gur rnegu jvgu orggre crbcyr ur qrfvtarq uvzfrys. She also has some very good non-fiction, like her Massey lecture on debt, which I warmly recommend.
0Nick_Beckstead
Could you say a bit about your audiobook selection process?
1lukeprog
When I was just starting out in September 2013, I realized that vanishingly few of the books I wanted to read were available as audiobooks, so it didn't make sense for me to search Audible for titles I wanted to read: the answer was basically always "no." So instead I browsed through the top 2000 best-selling unabridged non-fiction audiobooks on Audible, added a bunch of stuff to my wishlist, and then scrolled through the wishlist later and purchased the ones I most wanted to listen to. These days, I have a better sense of what kind of books have a good chance of being recorded as audiobooks, so I sometimes do search for specific titles on Audible. Some books that I really wanted to listen to are available in ebook but not audiobook, so I used this process to turn them into audiobooks. That only barely works, sometimes. I have to play text-to-speech audiobooks at a lower speed to understand them, and it's harder for my brain to stay engaged as I'm listening, especially when I'm tired. I might give up on that process, I'm not sure. Most but not all of the books are selected because I expect them to have lots of case studies in "how the world works," specifically with regard to policy-making, power relations, scientific research, and technological development. This is definitely true for e.g. Command and Control, The Quest, Wired for War, Life at the Speed of Light, Enemies, The Making of the Atomic Bomb, Chaos, Legacy of Ashes, Coal, The Secret Sentry, Dirty Wars, The Way of the Knife, The Big Short, Worst-Case Scenarios, The Information, and The Idea Factory.
2ozziegooen
I've definitely found something similar. I've come to believe that most 'popular science', 'popular history', etc. books are on Audible, but almost anything with equations or code is not. The Great Courses have been quite fantastic for me for learning about the social sciences; I found out about those recently. Occasionally I try podcasts for very niche topics (recent Rails updates, for instance), but have found them to be rather uninteresting in comparison to full books and courses.
0Nick_Beckstead
Thanks!
1lukeprog
From Singer's Wired for War:
1lukeprog
More (#7) from Wired for War: And:
0[anonymous]
The army recruiters say that soldiers on the ground still win wars. I reckon that Douhet's prediction will come true, however crudely. Drones.
0lukeprog
More (#6) from Wired for War: And:
-3[anonymous]
Inequality doesn't seem so bad now, huh?
0lukeprog
More (#5) from Wired for War:
0lukeprog
More (#4) from Wired for War: And:
0lukeprog
More (#3) from Wired for War: And: And:
0lukeprog
More (#2) from Wired for War:
0lukeprog
More (#1) from Wired for War: And:
0lukeprog
From Osnos' Age of Ambition: And: And:
0lukeprog
More (#2) from Osnos' Age of Ambition: And:
0lukeprog
More (#1) from Osnos' Age of Ambition: And: And: And:
0lukeprog
From Soldiers of Reason:
0lukeprog
More (#2) from Soldiers of Reason: And:
0lukeprog
More (#1) from Soldiers of Reason: And:
0lukeprog
From David and Goliath: And:
0lukeprog
More (#2) from David and Goliath: And:
0lukeprog
From Wade's A Troublesome Inheritance:
0lukeprog
More (#2) from A Troublesome Inheritance:
0lukeprog
More (#1) from A Troublesome Inheritance: And:
0lukeprog
From Moral Mazes: And: And:
0lukeprog
From Lewis' The New New Thing: And:
0lukeprog
From Dartnell's The Knowledge: And: And: And:
0lukeprog
From Ayres' Super Crunchers, speaking of Epagogix, which uses neural nets to predict a movie's box office performance from its screenplay:
0lukeprog
More (#1) from Super Crunchers: And: And:
0lukeprog
From Isaacson's Steve Jobs: And: And: And:
0lukeprog
More (#1) from Steve Jobs: And: [no more clips, because Audible somehow lost all my bookmarks for the last two parts of the audiobook!]
0lukeprog
From Feinstein's The Shadow World:
0lukeprog
More (#8) from The Shadow World: And: And:
0lukeprog
More (#7) from The Shadow World: And: And: And: And:
0lukeprog
More (#6) from The Shadow World: And: And:
0lukeprog
More (#5) from The Shadow World: And: And: And:
0lukeprog
More (#4) from The Shadow World: And: And:
0lukeprog
More (#3) from The Shadow World: And: And:
0lukeprog
More (#2) from The Shadow World: And:
0lukeprog
More (#1) from The Shadow World: And: And: And:
0lukeprog
From Weiner's Enemies:
0lukeprog
More (#5) from Enemies: And:
0lukeprog
More (#4) from Enemies: And: And:
0lukeprog
More (#3) from Enemies: And: And:
0lukeprog
More (#2) from Enemies: And: And:
0lukeprog
More (#1) from Enemies: And: And:
0lukeprog
From Roose's Young Money:
0lukeprog
From Tetlock's Expert Political Judgment:
0lukeprog
More (#2) from Expert Political Judgment:
0lukeprog
More (#1) from Expert Political Judgment: And: And:
0lukeprog
From Sabin's The Bet: And:
0lukeprog
More (#3) from The Bet:
0lukeprog
More (#2) from The Bet: And: And:
0lukeprog
More (#1) from The Bet: And: And:
0lukeprog
From Yergin's The Quest:
0lukeprog
More (#7) from The Quest:
0lukeprog
More (#6) from The Quest: And: And: And: And:
0lukeprog
More (#5) from The Quest: And: And: And:
0lukeprog
More (#4) from The Quest: And:
0lukeprog
More (#3) from The Quest: And:
0lukeprog
More (#2) from The Quest: And: And: And:
0lukeprog
More (#1) from The Quest: And: And:
0lukeprog
From The Second Machine Age:
0lukeprog
More (#1) from The Second Machine Age:
0lukeprog
From Making Modern Science:
0lukeprog
More (#1) from Making Modern Science:
0lukeprog
From Johnson's Where Good Ideas Come From:
0lukeprog
From Gertner's The Idea Factory:
0lukeprog
More (#2) from The Idea Factory: And: And:
0lukeprog
More (#1) from The Idea Factory: And:
0somervta
I'm sure that I've seen your answer to this question somewhere before, but I can't recall where: Of the audiobooks that you've listened to, which have been most worthwhile?
0lukeprog
I keep an updated list here.
0lukeprog
I guess I might as well post quotes from (non-audio) books here as well, when I have no better place to put them. First up is Revolution in Science. Starting on page 45:
0Shmi
This amazingly high percentage of self-proclaimed revolutionary scientists (30% or more) seems like a result of selection bias, since most scientists with oversized egos are not even remembered. I wonder what fraction of actual scientists (not your garden-variety crackpots) insist on having produced a revolution in science.
0lukeprog
From Sunstein's Worst-Case Scenarios:
2lukeprog
More (#2) from Worst-Case Scenarios:
0lukeprog
More (#5) from Worst-Case Scenarios:
0lukeprog
More (#4) from Worst-Case Scenarios:
0lukeprog
More (#3) from Worst-Case Scenarios: And: Similar issues are raised by the continuing debate over whether certain antidepressants impose a (small) risk of breast cancer. A precautionary approach might seem to argue against the use of these drugs because of their carcinogenic potential. But the failure to use those antidepressants might well impose risks of its own, certainly psychological and possibly even physical (because psychological ailments are sometimes associated with physical ones as well). Or consider the decision by the Soviet Union to evacuate and relocate more than 270,000 people in response to the risk of adverse effects from the Chernobyl fallout. It is hardly clear that on balance this massive relocation project was justified on health grounds: "A comparison ought to have been made between the psychological and medical burdens of this measure (anxiety, psychosomatic diseases, depression and suicides) and the harm that may have been prevented." More generally, a sensible government might want to ignore the small risks associated with low levels of radiation, on the ground that precautionary responses are likely to cause fear that outweighs any health benefits from those responses - and fear is not good for your health. And:
0lukeprog
More (#1) from Worst-Case Scenarios: But at least so far in the book, Sunstein doesn't mention the obvious rejoinder about investing now to prevent existential catastrophe. Anyway, another quote:
0lukeprog
From Gleick's Chaos:
0lukeprog
More (#3) from Chaos: And:
0lukeprog
More (#2) from Chaos: And: And:
0lukeprog
More (#1) from Chaos:
0lukeprog
From Lewis' The Big Short:
0lukeprog
More (#4) from The Big Short: And: And: And:
0lukeprog
More (#3) from The Big Short: And: And: And:
0lukeprog
More (#2) from The Big Short: And: And:
0lukeprog
More (#1) from The Big Short: And:
0lukeprog
From Gleick's The Information:
2lukeprog
More (#1) from The Information: And: And: And, an amusing quote:
0lukeprog
From Acemoglu & Robinson's Why Nations Fail:
0lukeprog
More (#2) from Why Nations Fail: And:
0lukeprog
More (#1) from Why Nations Fail: And: And: And:
0lukeprog
From Greenblatt's The Swerve: How the World Became Modern:
2lukeprog
More (#1) from The Swerve:
0lukeprog
From Aid's The Secret Sentry:
0lukeprog
More (#6) from The Secret Sentry: And: And: And:
0lukeprog
More (#5) from The Secret Sentry: And:
0lukeprog
More (#4) from The Secret Sentry: And:
0lukeprog
More (#3) from The Secret Sentry: And: And: Even when enemy troops and tanks overran the major South Vietnamese military base at Bien Hoa, outside Saigon, on April 26, Martin still refused to accept that Saigon was doomed. On April 28, Glenn met with the ambassador carrying a message from Allen ordering Glenn to pack up his equipment and evacuate his remaining staff immediately. Martin refused to allow this. The following morning, the military airfield at Tan Son Nhut fell, cutting off the last air link to the outside.
0lukeprog
More (#2) from The Secret Sentry: And: And:
0lukeprog
More (#1) from The Secret Sentry:
0lukeprog
From Mazzetti's The Way of the Knife:
0lukeprog
More (#5) from The Way of the Knife: And: And:
0lukeprog
More (#4) from The Way of the Knife: And: And: And: And: And: And:
0lukeprog
More (#3) from The Way of the Knife:
0lukeprog
More (#2) from The Way of the Knife: And:
0lukeprog
More (#1) from The Way of the Knife: And: And:
0lukeprog
From Freese's Coal: A Human History:
0lukeprog
More (#2) from Coal: A Human History:
0lukeprog
More (#1) from Coal: A Human History:
0lukeprog
Passages from The Many Worlds of Hugh Everett III: And: (It wasn't until decades later that David Deutsch and others showed that Everettian quantum mechanics does make novel experimental predictions.)
0lukeprog
A passage from Tim Weiner's Legacy of Ashes: The History of the CIA:
0lukeprog
More (#1) from Legacy of Ashes: And: And: And:
0lukeprog
I shared one quote here. More from Life at the Speed of Light:
0lukeprog
Also from Life at the Speed of Light:

Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change.

This seems obviously false. Local expenditures - of money, pride, possibility of not being the first to publish, etc. - are still local; global penalties are still global. Incentives are misaligned in exactly the same way as for climate change.

RSI capabilities could be charted, and are likely to be AI-complete.

This is to be taken as an arguendo, not as the author's opinion, right? See IEM on the minimal conditions for takeoff. Albeit if ...

6Benya
Climate change doesn't have the aspect that "if this ends up being a problem at all, then chances are that I (or my family/...) will die of it". (Agree with the rest of the comment.)
2Eliezer Yudkowsky
Many people believe that about climate change (due to global political disruption, economic collapse, etcetera; praising the size of the disaster seems virtuous). Many others do not believe it about AI. Many put sizable climate-change disaster into the far future. Many people will go on believing this about AI independently of any evidence which accrues. Actors with something to gain by minimizing their belief in climate change so minimize. This has also been true in AI risk so far.
0Benya
Hm! I cannot recall a single instance of this. (Hm, well; I can recall one instance of a TV interview with a politician from a non-first-world island nation taking projections seriously which would put his nation under water, so it would not be much of a stretch to think that he's taking seriously the possibility that people close to him may die from this.) If you have, probably this is because I haven't read that much about what people say about climate change. Could you give me an indication of the extent of your evidence, to help me decide how much to update? Ok, agreed, and this still seems likely even if you imagine sensible AI risk analyses being similarly well-known as climate change analyses are today. I can see how it could lead to an outcome similar to today's situation with climate change if that happened... Still, if the analysis says "you will die of this", and the brain of the person considering the analysis is willing to assign it some credence, that seems to align personal selfishness with global interests more than (climate change as it has looked to me so far).
7Eliezer Yudkowsky
Will keep an eye out for the next citation. This has not happened with AI risk so far among most AIfolk, or anyone the slightest bit motivated to reject the advice. We had a similar conversation at MIRI once, in which I was arguing that, no, people don't automatically change their behavior as soon as they are told that something bad might happen to them personally; and when we were breaking it up, Anna, on her way out, asked Louie downstairs how he had reasoned about choosing to ride motorcycles. People only avoid certain sorts of death risks under certain circumstances.
4Benya
Thanks! Point. Need to think.
3Eugine_Nier
Being told something is dangerous =/= believing it is =/= alieving it is.
2lukeprog
Right. I'll clarify in the OP.
1[anonymous]
This seems implied by X-complete. X-complete generally means "given a solution to an X-complete problem, we have a solution for X". E.g. NP-complete: given a polynomial solution to any NP-complete problem, any problem in NP can be solved in polynomial time. (Of course the technical nuance of the strength of the statement X-complete is such that I expect most people to imagine the wrong thing, like you say.)

(I don't have answers to your specific questions, but here are some thoughts about the general problem.)

I agree with most of what you said. I also assign significant probability mass to most parts of the argument for hope (but haven't thought about this enough to put numbers on this), though I too am not comforted by these parts, because I also assign a non-small chance to them going wrong. E.g., I have hope for "if AI is visible [and, I add, AI risk is understood] then authorities/elites will be taking safety measures".

That said, there are some steps in...

I personally am optimistic about the world's elites navigating AI risk as well as possible subject to inherent human limitations that I would expect everybody to have, and the inherent risk. Some points:

  1. I've been surprised by people's ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.

  2. AI risk is a Global Catastrophic Risk i...

The people with the most power tend to be the most rational people

What?

7JonahS
Rationality is systematized winning. Chance plays a role, but over time it's playing less and less of a role, because of more efficient markets.
9Decius
There is lots of evidence that people in power are the most rational, but there is a huger prior to overcome. Among people for whom power has an unsatiated major instrumental or intrinsic value, the most rational tend to have more power- but I don't think that very rational people are common and I think that they are less likely to want more power than they have. Particularly since the previous generation of power-holders used different factors when they selected their successors.
2JonahS
I agree with all of this. I think that "people in power are the most rational" was much less true in 1950 than it is today, and that it will be much more true in 2050.
6elharo
Actually that's a badly titled article. At best "Rationality is systematized winning" applies to instrumental, not epistemic, rationality. And even for that you can't make rationality into systematized winning by defining it so. Either that's a tautology (whatever systematized winning is, we define that as "rationality") or it's an empirical question. I.e. does rationality lead to winning? Looking around the world at "winners", that seems like a very open question. And now that I think about it, it's also an empirical question whether there even is a system for winning. I suspect there is--that is, I suspect that there are certain instrumental practices one can adopt that are generically useful for achieving a broad variety of life goals--but this too is an empirical question we should not simply assume the answer to.
1JonahS
I agree that my claim isn't obvious. I'll try to get back to you with detailed evidence and arguments.
5ChrisHallquist
The problem is that politicians have a lot to gain from really believing the stupid things they have to say to gain and hold power. To quote an old thread: Cf. Steven Pinker: historians who've studied Hitler tend to come away convinced he really believed he was a good guy. To get the fancy explanation of why this is the case, see "Trivers' Theory of Self-Deception."
6lukeprog
It's not much evidence, but the two earliest scientific investigations of existential risk I know of, LA-602 and the RHIC Review, seem to show movement in the opposite direction: "LA-602 was written by people curiously investigating whether a hydrogen bomb could ignite the atmosphere, and the RHIC Review is a work of public relations." Perhaps the trend you describe is accurate, but I also wouldn't be surprised to find out (after further investigation) that scientists are now increasingly likely to avoid serious analysis of real risks posed by their research, since they're more worried than ever before about funding for their field (or, for some other reason). The AAAI Presidential Panel on Long-Term AI Futures was pretty disappointing, and like the RHIC Review seems like pure public relations, with a pre-determined conclusion and no serious risk analysis.
3ryjm
Why would a good AI policy be one which takes as a model a universe where world destroying weapons in the hands of incredibly unstable governments controlled by glorified tribal chieftains is not that bad of a situation? Almost but not quite destroying ourselves does not reflect well on our abilities. The Cold War as a good example of averting bad outcomes? Eh. This is assuming that people understand what makes an AI so dangerous - calling an AI a global catastrophic risk isn't going to motivate anyone who thinks you can just unplug the thing (and even worse if it does motivate them, since then you have someone who is running around thinking the AI problem is trivial). I think you're just blurring "rationality" here. The fact that someone is powerful is evidence that they are good at gaining a reputation in their specific field, but I don't see how this is evidence for rationality as such (and if we are redefining it to include dictators and crony politicians, I don't know what to say), and especially of the kind needed to properly handle AI - and claiming evidence for future good decisions related to AI risk because of domain expertise in entirely different fields is quite a stretch. Believe it or not, most people are not mathematicians or computer scientists. Most powerful people are not mathematicians or computer scientists. And most mathematicians and computer scientists don't give two shits about AI risk - if they don't think it worthy of attention, why would someone who has no experience with these kind of issues suddenly grab it out of the space of all possible ideas he could possibly be thinking about? Obviously they aren't thinking about it now - why are you confident this won't be the case in the future? Thinking about AI requires a rather large conceptual leap - "rationality" is necessary but not sufficient, so even if all powerful people were "rational" it doesn't follow that they can deal with these issues properly or even single them out as something
1JonahS
Thanks for engaging. The point is that I would have expected things to be worse, and that I imagine that a lot of others would have as well. I think that people will understand what makes AI dangerous. The arguments aren't difficult to understand. Broadly, the most powerful countries are the ones with the most rational leadership (where here I mean "rational with respect to being able to run a country," which is relevant), and I expect this trend to continue. Also, wealth is skewing toward more rational people over time, and wealthy people have political bargaining power. Political leaders have policy advisors, and policy advisors listen to scientists. I expect that AI safety issues will percolate through the scientific community before long. I agree that AI safety requires a substantial shift in perspective — what I'm claiming is that this change in perspective will occur organically substantially before the creation of AI is imminent. You don't need "most people" to work on AI safety. It might suffice for 10% or fewer of the people who are working on AI to work on safety. There are lots of people who like to be big fish in a small pond, and this will motivate some AI researchers to work on safety even if safety isn't the most prestigious field. If political leaders are sufficiently rational (as I expect them to be), they'll give research grants and prestige to people who work on AI safety.
3wubbles
Things were a lot worse than everyone knew: Russia almost invaded Yugoslavia in the 1950s, which would have triggered a war according to newly declassified NSA journals. The Cuban Missile Crisis could easily have gone hot, and several times early warning systems were triggered by accident. Of course, estimating what could have happened is quite hard.
0JonahS
I agree that there were close calls. Nevertheless, things turned out better than I would have guessed, and indeed, probably better than a large fraction of military and civilian people would have guessed.
3Baughn
World War Three seems certain to significantly decrease the human population. From my point of view, I can't eliminate anthropic reasoning for why there wasn't such a war before I was born.
2Desrtopa
We still get people occasionally who argue the point while reading through the Sequences, and that's a heavily filtered audience to begin with.
3JonahS
There's a difference between "sufficiently difficult so that a few readers of one person's exposition can't follow it" and "sufficiently difficult so that after being in the public domain for 30 years, the arguments won't have been distilled so as to be accessible to policy makers." I don't think that the arguments are any more difficult than the arguments for anthropogenic global warming. One could argue that the difficulty of these arguments has been a limiting factor in climate change policy, but I believe that by far the dominant issue has been misaligned incentives, though I'd concede that this is not immediately obvious.
1hairyfigment
And I have the impression that relatively low-ranking people helped produce this outcome by keeping information from their superiors. Petrov chose not to report a malfunction of the early warning system until he could prove it was a malfunction. People during the Korean war and possibly Vietnam seem not to have passed on the fact that pilots from Russia or America were cursing in their native languages over the radio (and the other side was hearing them). This in fact is part of why I don't think we 'survived' through the anthropic principle. Someone born after the end of the Cold War could look back at the apparent causes of our survival. And rather than seeing random events, or no causes at all, they would see a pattern that someone might have predicted beforehand, given more information. This pattern seems vanishingly unlikely to save us from unFriendly AI. It would take, at the very least, a much more effective education/propaganda campaign.
0JonahS
As I remark elsewhere in this thread, the point is that I would have expected substantially more nuclear exchange by now than actually happened, and in view of this, I updated in the direction of things being more likely to go well than I would have thought. I'm not saying "the fact that there haven't been nuclear exchanges means that destructive things can't happen." I was using the nuclear war thing as one of many outside views, not as direct analogy. The AI situation needs to be analyzed separately — this is only one input.
-3FeepingCreature
It may be challenging to estimate the "actual, at the time" probability of a past event that would quite possibly have resulted in you not existing. Survivor bias may play a role here.
1JonahS
Nuclear war would have to be really, really big to kill a majority of the population, and probably even if all weapons were used the fatality rate would be under 50% (with the uncertainty coming from nuclear winter). Note that most residents of Hiroshima and Nagasaki survived the 1945 bombings, and that fewer than 60% of people live in cities.
0elharo
It depends on the nuclear war. An exchange of bombs between India and Pakistan probably wouldn't end human life on the planet. However, an all-out war between the U.S. and the U.S.S.R. in the 1980s most certainly could have. Fortunately that doesn't seem to be a big risk right now. 30 years ago it was. I don't feel confident in any predictions one way or the other about whether this might be a threat again 30 years from now.
7JonahS
Why do you think this?
0elharo
Because all the evidence I've read or heard (most of it back in the 1980s) agreed on this. Specifically, in a likely exchange between the U.S. and the USSR, the northern hemisphere would have been rendered completely uninhabitable within days. Humanity in the southern hemisphere would probably have lasted somewhat longer, but still would have been destroyed by nuclear winter and radiation. Details depend on the exact distribution of targets. Remember Hiroshima and Nagasaki were 2 relatively small fission weapons. By the 1980s the USSR and the US each had enough much bigger fusion bombs to individually destroy the planet. The only question was how many each would use in an exchange and where they would target them.
4JonahS
This is mostly out of line with what I've read. Do you have references?
0FeepingCreature
I'm not sure what the correct way to approach this would be. I think it may be something like comparing the number of people in your immediate reference class - depending on preference, this could be "yourself precisely" or "everybody who would make or have made the same observation as you" - and then asking "how would nuclear war affect the distribution of such people in that alternate outcome". But that's only if you give each person uniform weighting of course, which has problems of its own.
1JonahS
Sure, these things are subtle — my point was that the number of people who would have perished isn't very large in this case, so that under a broad class of assumptions, one shouldn't take the observed absence of nuclear conflict to be a result of survivorship bias.
[anonymous]

The argument from hope or towards hope or anything but despair and grit is misplaced when dealing with risks of this magnitude.

Don't trust God (or semi-competent world leaders) to make everything magically turn out all right. The temptation to do so is either a rationalization of wanting to do nothing, or based on a profoundly miscalibrated optimism for how the world works.

/doom

-2Eugine_Nier
I agree. Of course the article you linked to ultimately attempts to argue for trusting semi-competent world leaders.
0[anonymous]
It alludes to such an argument and sympathizes with it. Note I also "made the argument" that civilization should be dismantled. Personally I favor the FAI solution, but I tried to make the post solution-agnostic and mostly demonstrate where those arguments are coming from, rather than argue any particular one. I could have made that clearer, I guess. Thanks for the feedback.

I think there's a >15% chance AI will not be preceded by visible signals.

Aren't we seeing "visible signals" already? Machines are better than humans at lots of intelligence-related tasks today.

2jsalvatier
I interpreted that as 'visible signals of danger', but I could be wrong.

Which historical events are analogous to AI risk in some important ways? Possibilities include: nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars.

Cryptography and cryptanalysis are obvious precursors of supposedly-dangerous tech within IT.

Looking at their story, we can plausibly expect governments to attempt to delay the development of "weaponizable" technology by others.

These days, cryptography facilitates international trade. It seems like a mostly-positive force overall.

One question is whether AI is like CFCs, or like CO2, or like hacking.

With CFCs, the solution was simple: ban CFCs. The cost was relatively low, and the benefit relatively high.

With CO2, the solution is equally simple: cap and trade. It's just not politically palatable, because the problem is slower-moving, and the cost would be much, much greater (perhaps great enough to really mess up the world economy). So, we're left with the second-best solution: do nothing. People will die, but the economy will keep growing, which might balance that out, because ...

Here are my reasons for pessimism:

  1. There are likely to be effective methods of controlling AIs that are of subhuman or even roughly human-level intelligence which do not scale up to superhuman intelligence. These include for example reinforcement by reward/punishment, mutually beneficial trading, legal institutions. Controlling superhuman intelligence will likely require qualitatively different methods, such as having the superintelligence share our values. Unfortunately the existence of effective but unscalable methods of AI control will probably lull el...

Congress' non-responsiveness to risks to critical infrastructure from geomagnetic storms, despite scientific consensus on the issue, is also worrying.

0wedrifid
Perhaps someone could convince Congress that "Terrorists" had developed "geomagnetic weaponry" and new "geomagnetic defence systems" need to be implemented urgently. (Being seen to be) taking action to defend against the hated enemy tends to be more motivating than worrying about actual significant risks.

Even if one organization navigates the creation of friendly AI successfully, won't we still have to worry about preventing anyone from ever creating an unsafe AI?

Unlike nuclear weapons, a single AI might have world ending consequences, and an AI requires no special resources. Theoretically a seed AI could be uploaded to Pirate Bay, from where anyone could download and compile it.

8Manfred
If the friendly AI comes first, the goal is for it to always have enough resources to be able to stop unsafe AIs from being a big risk.
2Benya
Upvoted, but "always" is a big word. I think the hope is more for "as long as it takes until humanity starts being capable of handling its shit itself"...
3Benya
Why the downvotes? Do people feel that "the FAI should at some point fold up and vanish out of existence" is so obvious that it's not worth pointing out? Or disagree that the FAI should in fact do that? Or feel that it's wrong to point this out in the context of Manfred's comment? (I didn't mean to suggest that Manfred disagrees with this, but felt that his comment was giving the wrong impression.)
5Pentashagon
Will sentient, self-interested agents ever be free from the existential risks of UFAI/intelligence amplification without some form of oversight? It's nice to think that humanity will grow up and learn how to get along, but even if that's true for 99.9999999% of humans that leaves 7 people from today's population who would probably have the power to trigger their own UFAI hard takeoff after a FAI fixes the world and then disappears. Even if such a disaster could be stopped it is a risk probably worth the cost of keeping some form of FAI around indefinitely. What FAI becomes is anyone's guess but the need for what FAI does will probably not go away. If we can't trust humans to do FAI's job now, I don't think we can trust humanity's descendants to do FAI's job either, just from Löb's theorem. I think it is unlikely that humans will become enough like FAI to properly do FAI's job. They would essentially give up their humanity in the process.
3Eliezer Yudkowsky
A secure operating system for governed matter doesn't need to take the form of a powerful optimization process, nor does verification of transparent agents trusted to run at root level. Benja's hope seems reasonable to me.
6Wei Dai
This seems non-obvious. (So I'm surprised to see you state it as if it was obvious. Unless you already wrote about the idea somewhere else and are expecting people to pick up the reference?) If we want the "secure OS" to stop posthumans from running private hell simulations, it has to determine what constitutes a hell simulation and successfully detect all such attempts despite superintelligent efforts at obscuration. How does it do that without being superintelligent itself? This sounds interesting but I'm not sure what it means. Can you elaborate?
4Eliezer Yudkowsky
Hm, that's true. Okay, you do need enough intelligence in the OS to detect certain types of simulations, and/or the intention to build such simulations, however obscured. If you can verify an agent's goals (and competence at self-modification), you might be able to trust zillions of different such agents to all run at root level, depending on what the tiny failure probability worked out to quantitatively.
0Pentashagon
That means each non-trivial agent would become the FAI for its own resources. To see the necessity of this, imagine what initial verification would be required to allow an agent to simulate its own agents. Restricted agents may not need a full FAI if they are proven to avoid simulating non-restricted agents, but any agent approaching the complexity of humans would need the full FAI "conscience" running to evaluate its actions and interfere if necessary. EDIT: "interfere" is probably the wrong word. From the inside the agent would want to satisfy the FAI goals in addition to its own. I'm confused about how to talk about the difference between what an agent would want and what an FAI would want for all agents, and how it would feel from the inside to have both sets of goals.
1Benya
I'd hope so, since I think I got the idea from you :-) This is tangential to what this thread is about, but I'd add that I think it's reasonable to have hope that humanity will grow up enough that we can collectively make reasonable decisions about things affecting our then-still-far-distant future. To put it bluntly, if we had an FAI right now I don't think it should be putting a question like "how high is the priority of sending out seed ships to other galaxies ASAP" to a popular vote, but I do think there's reasonable hope that humanity will be able to make that sort of decision for itself eventually. I suppose this is down to definitions, but I tend to visualize FAI as something that is trying to steer the future of humanity; if humanity eventually takes on the responsibility for this itself, then even if for whatever reason it decides to use a powerful optimization process for the special purpose of preventing people from building uFAI, it seems unhelpful to me to gloss this without more qualification as "the friendly AI [... will always ...] stop unsafe AIs from being a big risk", because the latter just sounds to me like we're keeping around the part where it steers the fate of humanity as well.
0Benya
Thanks for explaining the reasoning! I do agree that it seems quite likely that even in the long run, we may not want to modify ourselves so that we are perfectly dependable, because it seems like that would mean getting rid of traits we want to keep around. That said, I agree with Eliezer's reply about why this doesn't mean we need to keep an FAI around forever; see also my comment here. I don't think Löb's theorem enters into it. For example, though I agree that it's unlikely that we'd want to do so, I don't believe Löb's theorem would be an obstacle to modifying humans in a way making them super-dependable.

The use of early AIs to solve AI safety problems creates an attractor for "safe, powerful AI."

What kind of "AI safety problems" are we talking about here? If they are like the "FAI Open Problems" that Eliezer has been posting, they would require philosophers of the highest (perhaps even super-human) caliber to solve. How could "early AIs" be of much help?

If "AI safety problems" here do not refer to FAI problems, then how do those problems get solved, according to this argument?

0timtyler
We see pretty big boosts already, IMO - largely by facilitating networking effects. Idea recombination and testing happen faster on the internet.
[anonymous]

@Lukeprog, can you

(1) update us on your working answers to the posed questions, in brief? (2) give your current confidence (and, if you would like to, by proxy, MIRI's confidence as an organisation) in each of the 3:

Elites often fail to take effective action despite plenty of warning.

I think there's a >10% chance AI will not be preceded by visible signals.

I think the elites' safety measures will likely be insufficient.

Thank you for your diligence.

There's another reason for hope here that global warming lacked: the idea of a dangerous AI is already common in the public eye as one of the "things we need to be careful about." A big problem the global warming movement had, and is still having, is convincing the public that it's a threat in the first place.

Who do you mean by "elites"? Keep in mind that major disruptive technical progress of the type likely to precede the creation of a full AGI tends to cause the kind of social change that shakes up the social hierarchy.

[anonymous]

Combining the beginning and the end of your questions reveals an answer.

Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of [nuclear weapons, climate change, recombinant DNA, nanotechnology, chloroflourocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars] just fine?

Answer how "just fine" any of these are, and you have analogous answers.

You might also clarify whether you are interested in what is just fine for everyone, or just fine for the elites, or just fine for the AI in question. The answer will change accordingly.