MIRI will soon publish a short book by Stuart Armstrong on the topic of AI risk. The book is currently titled “AI-Risk Primer” by default, but we’re looking for something a little more catchy (just as we did for the upcoming Sequences ebook).

The book is meant to be accessible and avoids technical jargon. Here is the table of contents and a few snippets from the book, to give you an idea of the content and style:

  1. Terminator versus the AI
  2. Strength versus Intelligence
  3. What Is Intelligence? Can We Achieve It Artificially?
  4. How Powerful Could AIs Become?
  5. Talking to an Alien Mind
  6. Our Values Are Complex and Fragile
  7. What, Precisely, Do We Really (Really) Want?
  8. We Need to Get It All Exactly Right
  9. Listen to the Sound of Absent Experts
  10. A Summary
  11. That’s Where You Come In …

The Terminator is a creature from our primordial nightmares: tall, strong, aggressive, and nearly indestructible. We’re strongly primed to fear such a being—it resembles the lions, tigers, and bears that our ancestors so feared when they wandered alone on the savanna and tundra.

As a species, we humans haven’t achieved success through our natural armor plating, our claws, our razor-sharp teeth, or our poison-filled stingers. Though we have reasonably efficient bodies, it’s our brains that have made the difference. It’s through our social, cultural, and technological intelligence that we have raised ourselves to our current position.

Consider what would happen if an AI ever achieved the ability to function socially—to hold conversations with a reasonable facsimile of human fluency. For humans to increase their social skills, they need to go through painful trial-and-error processes, scrounge hints from more articulate individuals or from television, or try to hone their instincts by having dozens of conversations. An AI could go through a similar process, undeterred by social embarrassment, and with perfect memory. But it could also sift through vast databases of previous human conversations, analyze thousands of publications on human psychology, anticipate where conversations are leading many steps in advance, and always pick the right tone and pace to respond with. Imagine a human who, every time they opened their mouth, had spent a solid year pondering and researching whether their response would be maximally effective. That is what a social AI would be like.

So, title suggestions?

I like "Smarter than Us: an overview of AI Risk". The first three words should knock the reader out of their comfort zone.

ChrisHallquist
I concur on the main title, but, in accordance with cousin_it's comment below, we might go with AI as a Danger to Mankind as a subtitle or something like that. Maybe AI's Promise and Peril for Humanity to avoid (a) giving people the impression we think AI should never be built (b) the charge of sexism. Note that "promise and peril" is Kurzweil's turn of phrase; it sounds much better in my head than "promise and danger" which I also thought of.
MichaelAnissimov
Sexism..?
NancyLebovitz
Yes, sexism. "Mankind" is male-tilted in a way that "humanity" isn't.
So8res

These suggestions lean towards sensationalism:

  • Losing the Future: The Potential of AI
  • The Power of Intelligence: an overview of AI Risk
  • Smarter than Us: an overview of AI Risk
  • The Fragile Future: an overview of AI Risk
  • An introduction to superhuman intelligence and the risks it poses
lukeprog
The Power of Intelligence: A.I. as a Danger to Mankind might be good, too...
lincolnquirk
Along the lines of "Fragile Future" - I like alliteration:

  • The Common Cause: how artificial intelligence will save the world -- or destroy it. (neat double meaning, maybe a bit too abstracted)
  • The Digital Demon (uhm... a bit too personified)
  • The Silicon Satan (okay, this is getting ridiculous)

Honestly I really like Fragile Future though.

My model of people who are unaware of AI risk says that they will understand a title like "Artificial intelligence as a danger to mankind".

lukeprog
Artificial Intelligence as a Danger to Mankind seems pretty good, if we think it's good to emphasize the risk angle in the title. Though unlike many publishers, I'll also be getting the author's approval before choosing a title.
jmmcd
"X as a Y" is an academic idiom. Sounds wrong for the target audience.
Stuart_Armstrong
Don't have "robot" in the title, or anything that pattern matches to the Terminator (unless it's specifically to draw a contrast).
Randaly
Possibly emphasize 'risk' as opposed to 'danger'? "The Risks of Artificial Intelligence Development"? "Risks from the Development of Superhuman AI"?
loup-vaillant
Or, "Artificial intelligence as a risk to mankind". (Without the emphasis.)

I don't have anything good but I think the sweet spot is something that kinda draws in people who'd be excited about mainstream worries about AI, but implies there's a twist.

  • Blue Screen of Death... Forever: A Guide to AI Risk
  • Life or Death Programming: The Future of AI Risk
  • Life or Death Philosophy: The Future of AI Risk
  • Decision Theory Xor Death
  • Cogito Ergo Doom: The Unexpected Risks of AI
  • Worse than Laser Eyes: The Real Risks of AI

Cogito Ergo Doom

Nice.

Stuart_Armstrong
Sigh... this makes me realise how untalented I am at finding titles!
palladias
Practice practice practice! I've had to find titles for daily blog posts for three years.
John_Maxwell
I like this one as "Blue Screen of Death: A Primer on AI Risk". "Have you read Blue Screen of Death?" There's something appealing about a book that doesn't take itself too seriously, IMO.
gjm

I don't like all the clever-clever titles being proposed because (1) they probably restrict the audience and (2) one of the difficulties MIRI faces is persuading people to take the risk seriously in the first place -- which will not be helped by a title that's flippant, or science-fiction-y, or overblown, or just plain confusing.

You don't need "primer" or anything like it in the title; if the book has a fairly general title, and is short, and has a preface that begins "This book is an introduction to the risks posed by artificial intelligence" or something, you're done. (No harm in having something like "primer" or "introduction" in the title, if that turns out to make a good title.)

Spell out "artificial intelligence". (Or use some other broadly equivalent term.)

I would suggest simply "Risks of artificial intelligence" or maybe "Risks of machine intelligence" (matching MIRI's name).

Paul Crowley
I take your point, but it looks like the book they've decided to write is one that's at least a little flippant and science-fiction-y, and that being so the title should reflect that.
Stuart_Armstrong
The Terminator section is there to counter that issue immediately, rather than being sci-fi-ish.
palladias
I think titles also follow the "the only goal of the first sentence is to make the reader want to read the second sentence" rule. If MIRI is pitching this book at bright laypeople, I think it's good to be a bit jazzy and then dismantle the Skynet assumptions early on (as it looks like this does). If the goal is for it to be a technical manual for people in math and CS, I'd agree that anything that sounds like pop sci or Gladwell is probably a turn-off. Of course, you could always have two editions, with two titles (and differing amounts of LaTeX)
ChrisHallquist
These are reasonable concerns, but a boring title will restrict the audience in its own way. Michael's "Smarter than Us" suggestion avoids both risks, though, I think. Edit: Wait, that wasn't Michael's idea originally, he was just endorsing it, but I agree with his endorsement and reasoning why. Definitely sends shivers down my spine.

To Serve Man: an overview of AI Risk

Maybe:

I'm sorry Dave, I'm doing exactly what you asked me

(followed by a dull but informative "risks of artificial intelligence"-style subtitle)

"The Last Machine: Why Artificial Intelligence Might Just Wipe Us All Out"

It could include a few cartoons of robots destroying us all while saying things like:

"I do not hate you, but you are made of atoms I can use for something else."
"I am built to maximise human happiness, so unhappy people must die."
"Must...make...paperclips!"
"Muahahahaha! I will grant ALL your wishes!!!"

I strongly advocate eliminating the word 'risk' from the title. I have never spoken of 'AI risk'.

It is a defensive word and in a future-of-technology context it communicates to people that you are about to talk about possible threats that no amount of argument will talk you out of. Only people who like the 'risk' dogwhistle will read, and they probably won't like the content.

  • What We Can Know About Powerful Artificial Intelligence
  • Powerful Artificial Intelligence: Why Its Friendliness or Hostility is Knowably Design-Dependent
  • Foreseeable Difficulties of Having AI Be A Good Thing
  • Friendly AI: Possible But Difficult
Stuart_Armstrong
None of these titles seem likely to grip people...
lukeprog
I like Friendly AI: Possible But Difficult best, but given your text, it might need to be Good Artificial Intelligence: Possible But Difficult. But I agree these are unlikely to grip people. Maybe just The Rise of Superintelligence?
Paul Crowley
Apt to be confused with Bostrom's forthcoming book?
ESRogs
I notice that I am confused.
Eliezer Yudkowsky
"AI as a positive and negative factor in global risk", in a book called "Global Catastrophic Risks". The phrase 'AI risk' does not appear in the text. If I'd known then what I know now, I would have left the word 'risk' out of the title entirely.
ESRogs
Confusion cleared :)
John_Maxwell
I'd assume that anyone who hears about the book is going to learn that it's about risks from AI. Do you really think it comes down to the word "risk"? Borrowing Mike Anissimov's title, how about "Smarter than Us: On the Safety of Artificial Intelligence Research"?
Eliezer Yudkowsky
'Safety' has much of the same problem, though not as much as 'risk'.
John_Maxwell
Makes sense. Here are a few more ideas, tending towards a pop-sci feel:

  • Ethics for Robots: AI, Morality, and the Future of Humankind
  • Big Servant, Little Master: Anticipating Superhuman Artificial Intelligence
  • Friendly AI and Unfriendly AI
  • AI Morality: Why We Need It and Why It's Tough
  • AI Morality: A Hard Problem
  • The Mindspace of Artificial Intelligences
  • Strong AI: Danger and Opportunity
  • Software Minds: Perils and Possibilities of Human-Level AI
  • Like Bugs to Them: The Coming Rise of Super-Intelligent AI
  • From Cavemen to Google and Beyond: The Future of Intelligence on Earth
  • Super-Intelligent AI: Opportunities, Dangers, and Why It Could Come Sooner Than You Think
daniel-1
I think Ethics for Robots catches your attention (or at least it caught mine), but I think some of the other subtitles you suggested go better with it:

  • Ethics for Robots: Perils and Possibilities of Super-Intelligent AI
  • Ethics for Robots: A Hard Problem

Although maybe you wouldn't want to associate AI and robots.
John_Maxwell
Yep, absolutely feel free to mix/match/modify my suggested titles.
TheOtherDave
"Artificial Ethics"?
somervta
Of these the first is the best, by a long shot.

An Alien Mind: The Risks of AI

Shmi

Needle in the AIstack :)

[anonymous]
  • Preventing the Redundancy of the Human Race
  • What will you do when your smartphone doesn't need you anymore?
  • Humans vs Machines - a Battle we could not win

Important question - is this going to be a broad overview of AI risk in that it will cover different viewpoints (other than just MIRI's), a little like Responses to Catastrophic AGI Risk was, or is it to be more focused on the MIRI-esque view of things?

lukeprog
Focused on the MIRI-esque view.

Risks of Artificial Intelligence

Or, adding a wee bit of flair:

Parricide: Risks of Artificial Intelligence

Conceding the point to Eliezer:

Parricide and the Quest for Machine Intelligence

Can market research be done?

gwern
Sure; you could compile a list from the comments and throw them into Google AdWords to see what maximizes clicks (the landing page would be something on intelligence.org). Anyone could do this - heck, I have $40 of AdWords credit I didn't realize I had, and could do it. But would this really be worthwhile, especially if people keep suggesting titles?
John_Maxwell
Stuart could wait until activity in the thread dies out. If there's going to be a decent-sized push behind this book, I'd advocate doing market research.
gwern
That resolves the second question, but not the big original one: if someone were to do an AdWords campaign as I've suggested, would Luke or the person in charge actually change the name of the book based on the results? What's the VoI here?
John_Maxwell
I'd be surprised if they didn't read the results if you sent them, and I'd also be surprised if they didn't do Bayesian updates about the optimal book title based on the results. But you could always contact them.
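
To make the "Bayesian updates" concrete, here is a minimal sketch of how click data from such an AdWords test could be compared. The titles below come from this thread, but the impression and click counts are entirely made-up placeholders, and the Beta-Binomial model is just one simple way to do it:

```python
import random

# Hypothetical AdWords-style results for a few candidate titles.
# The titles are taken from this thread, but the impression/click counts
# below are invented purely for illustration -- they are not real data.
results = {
    "Smarter than Us": (1000, 41),
    "Risks of Machine Intelligence": (1000, 28),
    "Friendly AI: Possible But Difficult": (1000, 33),
}

def prob_best(results, samples=20000, seed=0):
    """Estimate, for each title, the probability that its true click-through
    rate is the highest, using a Beta(1 + clicks, 1 + non-clicks) posterior
    per title and simple Monte Carlo sampling."""
    rng = random.Random(seed)
    wins = dict.fromkeys(results, 0)
    for _ in range(samples):
        draws = {
            title: rng.betavariate(1 + clicks, 1 + impressions - clicks)
            for title, (impressions, clicks) in results.items()
        }
        # Count a "win" for whichever title drew the highest rate this round.
        wins[max(draws, key=draws.get)] += 1
    return {title: count / samples for title, count in wins.items()}

for title, p in sorted(prob_best(results).items(), key=lambda kv: -kv[1]):
    print(f"{p:.0%}  {title}")
```

With a few thousand impressions per title, the posterior probabilities usually separate cleanly enough to answer gwern's VoI question one way or the other.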

Risky Machines: Artificial Intelligence as a Danger to Mankind

What is the target audience we are aiming to attract here?

  • AI: The Most Dangerous Game
  • What if God Were Imperfect?
  • Unlimited Power: The Threat of Superintelligence
  • The Threat of Our Future Selves
  • A True Identity/Existential Crisis
  • Refining Identity: The Dangers of AI
  • Maximum Possible Risk: Intelligence Beyond Recognition

All I have for now.

  • Finding perfect future through AI
  • Getting everything you want with AI
  • Good Future
  • The Perfect Servant
  • Programming a God

I think I'd like "machine intelligence" instead of "artificial intelligence" in the title; the latter pattern-matches to too many non-serious things.

So, after cousin_it or gjm: "Machine Intelligence as a Danger to Mankind" or, for a less doomsayer-ish vibe, "Risks of Machine Intelligence".

Safe at any Speed: Fundamental Challenges in the Development of Self-Improving Artificial Intelligence

TheOtherDave
Be aware that "Safe at any Speed," while a marvelous summary of the correct attitude towards risk management to take here, is also the title of a moderately well-known Larry Niven short story.

"Primer" feels wrong. "A short introduction" would be more inviting, though there might be copyright issues with that. "AI-risk" is probably too much of an insider term.

I like cousin_it's direction http://lesswrong.com/r/discussion/lw/io3/help_us_name_a_short_primer_on_ai_risk/9rl6 - though would avoid anything that sounds like fear mongering.

somervta
Something like "Risks from Artificial Intelligence" or "Risks from Advanced Artificial Intelligence" might help with this.

Deus ex Machina: The dangers of AI

linkhyrule5
Deus est Machina? ... nah, too many religious overtones.
Desrtopa
I'd been thinking it's been done, but apparently it's not already used by anything published. The top result is for a trope page.
linkhyrule5
Right, that's where I got it from.

"Preventing Skynet"

(First thing that popped into my mind after I saw "Terminator versus the AI," before reading thread. May or may not be a good idea.)

Making the future: A Guide to AI Development

Where is this book supposed to fit in with Facing the Intelligence Explosion? I have a friend who I was thinking of sending Facing the Intelligence Explosion to; should I wait for this new book to come out?

AI: More than just a Creepy Spielberg Movie

  • The Indifferent God: The Promise and Peril of Machine Intelligence
  • The Arbitrary Mind: ^
  • The Parable of the Paperclip Maximizer

Flash Crash of the Universe: The Perils of designed general intelligence

The flash crash is a computer-triggered event. The knowledgeable amongst us know about it. It indicates the kind of risks to expect. Just my 2 cents.

My second thought is way more LW specific. Maybe it could be a chapter title.

You are made of atoms: The risks of not seeing the world from the viewpoint of an AI

It just occurred to me that we may be able to avoid the word "intelligence" entirely in the title. I was thinking of Cory Doctorow on the coming war on general computation, where he explains that unwanted behaviour on general-purpose computers is basically impossible to stop. So:

Current computers are fully general hardware. An AI would be fully general software. We could also talk about general purpose computers vs general purpose programs.

The idea is that many people already understand some of the risks associated with general-purpose computers (if only for the…

[anonymous]
  • Artificial Intelligence or Sincere Stupidity: Tomorrow's Choice.

  • You Can't Spell Fail Without AI.

  • AI-Yi-Yi! Peligro!

  • Better. Stronger. Faster.

  • Deus ex machina.

"How do we outsmart something designed to outsmart us?"

How Not To Be Killed By A Robot: Why superhuman intelligence poses a danger to humanity, and what to do about it.

Stuart_Armstrong
Anything with "robot" brings up the Terminator and suggests entirely the wrong idea.
Paul Crowley
Your reply to my other comment clarifies. OK scratch that :)