MIRI will soon publish a short book by Stuart Armstrong on the topic of AI risk. The book is currently titled “AI-Risk Primer” by default, but we’re looking for something a little more catchy (just as we did for the upcoming Sequences ebook).
The book is meant to be accessible and avoids technical jargon. Here is the table of contents and a few snippets from the book, to give you an idea of the content and style:
- Terminator versus the AI
- Strength versus Intelligence
- What Is Intelligence? Can We Achieve It Artificially?
- How Powerful Could AIs Become?
- Talking to an Alien Mind
- Our Values Are Complex and Fragile
- What, Precisely, Do We Really (Really) Want?
- We Need to Get It All Exactly Right
- Listen to the Sound of Absent Experts
- A Summary
- That’s Where You Come In …
The Terminator is a creature from our primordial nightmares: tall, strong, aggressive, and nearly indestructible. We’re strongly primed to fear such a being—it resembles the lions, tigers, and bears that our ancestors so feared when they wandered alone on the savanna and tundra.
…
As a species, we humans haven’t achieved success through our natural armor plating, our claws, our razor-sharp teeth, or our poison-filled stingers. Though we have reasonably efficient bodies, it’s our brains that have made the difference. It’s through our social, cultural, and technological intelligence that we have raised ourselves to our current position.
…
Consider what would happen if an AI ever achieved the ability to function socially—to hold conversations with a reasonable facsimile of human fluency. For humans to increase their social skills, they need to go through painful trial-and-error processes, scrounge hints from more articulate individuals or from television, or try to hone their instincts by having dozens of conversations. An AI could go through a similar process, undeterred by social embarrassment, and with perfect memory. But it could also sift through vast databases of previous human conversations, analyze thousands of publications on human psychology, anticipate where conversations are leading many steps in advance, and always pick the right tone and pace to respond with. Imagine a human who, every time they opened their mouth, had spent a solid year pondering and researching whether their response was going to be maximally effective. That is what a social AI would be like.
So, title suggestions?
Makes sense. Here are a few more ideas, tending towards a pop-sci feel.
Ethics for Robots: AI, Morality, and the Future of Humankind
Big Servant, Little Master: Anticipating Superhuman Artificial Intelligence
Friendly AI and Unfriendly AI
AI Morality: Why We Need It and Why It's Tough
AI Morality: A Hard Problem
The Mindspace of Artificial Intelligences
Strong AI: Danger and Opportunity
Software Minds: Perils and Possibilities of Human-Level AI
Like Bugs to Them: The Coming Rise of Super-Intelligent AI
From Cavemen to Google and Beyond: The Future of Intelligence on Earth
Super-Intelligent AI: Opportunities, Dangers, and Why It Could Come Sooner Than You Think
I think "Ethics for Robots" catches your attention (or at least it caught mine), but I think some of the other subtitles you suggested would go better with it:
Ethics for Robots: Perils and Possibilities of Super-Intelligent AI
Ethics for Robots: A Hard Problem
Although maybe you wouldn't want to associate AI with robots.