We're pleased to announce the release of "Smarter Than Us: The Rise of Machine Intelligence", commissioned by MIRI and written by Oxford University's Stuart Armstrong. It is available in EPUB, MOBI, and PDF formats, as well as from the Amazon and Apple ebook stores.
What happens when machines become smarter than humans? Forget lumbering Terminators. The power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? This new book navigates these questions with clarity and wit.
Can we instruct AIs to steer the future as we desire? What goals should we program into them? It turns out this question is difficult to answer! Philosophers have tried for thousands of years to define an ideal world, but there remains no consensus. The prospect of goal-driven, smarter-than-human AI gives moral philosophy a new urgency. The future could be filled with joy, art, compassion, and beings living worthwhile and wonderful lives—but only if we’re able to precisely define what a “good” world is, and skilled enough to describe it perfectly to a computer program.
Like all computers, AIs will do what we say—which is not necessarily what we mean. Getting what we mean requires encoding the entire system of human values for an AI: explaining them to a mind that is alien to us, defining every ambiguous term, clarifying every edge case. Moreover, our values are fragile: in some cases, if we mis-define a single piece of the puzzle—say, consciousness—we end up with roughly 0% of the value we intended to reap, instead of 99%.
Though an understanding of the problem is only beginning to spread, researchers from fields ranging from philosophy to computer science to economics are working together to conceive and test solutions. Are we up to the challenge?
Special thanks to all those at the FHI, MIRI and Less Wrong who helped with this work, and those who voted on the name!
Sure, but I'm not even sure at this stage that publishing a paperback version with CreateSpace is a better use of 2 hours of Alex's time than the other stuff he's doing. Are there hidden gotchas that make publishing worse than not-publishing even if it were totally free? (I've encountered many examples of this while running MIRI.) Will it actually take 5 hours rather than 2? I don't know the answers to these questions, and this isn't a priority. Deciding whether to publish a paperback copy of Smarter Than Us is, like, the 20th most important decision I'll make this week. I'm not even sure that explaining all the different considerations I'm weighing for such a minor decision is worth the time I've spent typing these sentences. Anyway, I don't mean to be rude, and I understand why you and Alicorn are engaging me about this; it's just that the decision is more complicated and less important (relative to all the invisible-to-LWers things we're doing) than you might realize, and I don't have time to explain it all. Again: if somebody can save us time on the initial research to figure out whether it's a good idea, it might become competitive with the other things Alex is doing with his MIRI time.
I'm not clear on what Alex in particular has to do with this. Aren't there people with lower opportunity cost you could say "hey, investigate self-publishing options" to? These services are marketed to publishing non-experts, and while they don't require zero skill, perhaps this doesn't call for your scarcest and most thinly spread people. Are you sure you don't want to ask me any questions about my experience self-publishing with CreateSpace...?