I like "Smarter than Us: an overview of AI Risk". The first three words should knock the reader out of their comfort zone.
These suggestions lean towards sensationalism:
My model of people who are unaware of AI risk says that they will understand a title like "Artificial intelligence as a danger to mankind".
I don't have anything good, but I think the sweet spot is something that kinda draws in people who'd be excited about mainstream worries about AI, but implies there's a twist.
I don't like all the clever-clever titles being proposed because (1) they probably restrict the audience and (2) one of the difficulties MIRI faces is persuading people to take the risk seriously in the first place -- which will not be helped by a title that's flippant, or science-fiction-y, or overblown, or just plain confusing.
You don't need "primer" or anything like it in the title; if the book has a fairly general title, and is short, and has a preface that begins "This book is an introduction to the risks posed by artificial intelligence" or something, you're done. (No harm in having something like "primer" or "introduction" in the title, if that turns out to make a good title.)
Spell out "artificial intelligence". (Or use some other broadly equivalent term.)
I would suggest simply "Risks of artificial intelligence" or maybe "Risks of machine intelligence" (matching MIRI's name).
Maybe:
I'm sorry, Dave, I'm doing exactly what you asked me
(followed by a dull but informative "risks of artificial intelligence"-style subtitle)
"The Last Machine: Why Artificial Intelligence Might Just Wipe Us All Out"
It could include a few cartoons of robots destroying us all while saying things like:
"I do not hate you, but you are made of atoms I can use for something else."
"I am built to maximise human happiness, so unhappy people must die."
"Must...make...paperclips!"
"Muahahahaha! I will grant ALL your wishes!!!"
I strongly advocate eliminating the word 'risk' from the title. I have never spoken of 'AI risk'.
It is a defensive word, and in a future-of-technology context it communicates to people that you are about to talk about possible threats that no amount of argument will talk you out of. Only people who like the 'risk' dogwhistle will read it, and they probably won't like the content.
Important question: is this going to be a broad overview of AI risk, in that it will cover different viewpoints (other than just MIRI's), a little like Responses to Catastrophic AGI Risk was, or is it to be more focused on the MIRI-esque view of things?
Risks of Artificial Intelligence
Or, adding a wee bit of flair:
Parricide: Risks of Artificial Intelligence
Conceding the point to Eliezer:
Parricide and the Quest for Machine Intelligence
What is the target audience we are aiming to attract here?
All I have for now.
Finding the perfect future through AI
Getting everything you want with AI
Good Future
The Perfect Servant
Programming a God
I think I'd like "machine intelligence" instead of "artificial intelligence" in the title, the latter pattern-matches to too many non-serious things.
So, after cousin_it or gjm: "Machine Intelligence as a Danger to Mankind" or, for a less doomsayer-ish vibe, "Risks of Machine Intelligence".
Safe at Any Speed: Fundamental Challenges in the Development of Self-Improving Artificial Intelligence
"Primer" feels wrong. "A short introduction" would be more inviting, though there might be copyright issues with that. "AI-risk" is probably too much of an insider term.
I like cousin_it's direction (http://lesswrong.com/r/discussion/lw/io3/help_us_name_a_short_primer_on_ai_risk/9rl6), though I would avoid anything that sounds like fear-mongering.
"Preventing Skynet"
(First thing that popped into my mind after I saw "Terminator versus the AI," before reading the thread. May or may not be a good idea.)
Where is this book supposed to fit in with Facing the Intelligence Explosion? I have a friend who I was thinking of sending Facing the Intelligence Explosion to; should I wait for this new book to come out?
Flash Crash of the Universe: The Perils of Designed General Intelligence
The flash crash was a computer-triggered event; the knowledgeable amongst us know about it, and it indicates the kind of risks to expect. Just my 2 cents.
My second thought is way more LW-specific. Maybe it could be a chapter title.
You are made of atoms: The risks of not seeing the world from the viewpoint of an AI
It just occurred to me that we may be able to avoid the word "intelligence" entirely in the title. I was thinking of Cory Doctorow on the coming war on general computation, where he explains that unwanted behaviour on general-purpose computers is basically impossible to stop. So:
Current computers are fully general hardware. An AI would be fully general software. We could also talk about general-purpose computers vs. general-purpose programs.
The idea is that many people already understand some risks associated with general-purpose computers (if only for the...
Artificial Intelligence or Sincere Stupidity: Tomorrow's Choice.
You Can't Spell Fail Without AI.
AI-Yi-Yi! Peligro!
Deus ex machina.
How Not To Be Killed By A Robot: Why superhuman intelligence poses a danger to humanity, and what to do about it.
MIRI will soon publish a short book by Stuart Armstrong on the topic of AI risk. The book is currently titled “AI-Risk Primer” by default, but we’re looking for something a little more catchy (just as we did for the upcoming Sequences ebook).
The book is meant to be accessible and avoids technical jargon. Here is the table of contents and a few snippets from the book, to give you an idea of the content and style:
So, title suggestions?