We're pleased to announce the release of "Smarter Than Us: The Rise of Machine Intelligence", commissioned by MIRI and written by Oxford University's Stuart Armstrong. It is available in EPUB, MOBI, and PDF formats, and from the Amazon and Apple ebook stores.
What happens when machines become smarter than humans? Forget lumbering Terminators. The power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? This new book navigates these questions with clarity and wit.
Can we instruct AIs to steer the future as we desire? What goals should we program into them? It turns out this question is difficult to answer! Philosophers have tried for thousands of years to define an ideal world, but there remains no consensus. The prospect of goal-driven, smarter-than-human AI gives moral philosophy a new urgency. The future could be filled with joy, art, compassion, and beings living worthwhile and wonderful lives—but only if we’re able to precisely define what a “good” world is, and skilled enough to describe it perfectly to a computer program.
AIs, like all computers, will do what we say—which is not necessarily what we mean. Getting this right requires encoding the entire system of human values for an AI: explaining them to a mind that is alien to us, defining every ambiguous term, clarifying every edge case. Moreover, our values are fragile: in some cases, if we mis-define a single piece of the puzzle—say, consciousness—we end up with roughly 0% of the value we intended to reap, rather than 99% of it.
Though an understanding of the problem is only beginning to spread, researchers from fields ranging from philosophy to computer science to economics are working together to conceive and test solutions. Are we up to the challenge?
Special thanks to all those at the FHI, MIRI, and Less Wrong who helped with this work, and to those who voted on the name!
This sounds like bad instrumental rationality. Suppose your current option is "don't publish it in paperback at all," and you are presented with a publishing option of a certain quality that you would be willing to take if that quality were the best available. Then the fact that there may be better options you haven't explored should never return your "best choice to make" to "don't publish it in paperback at all." Your only viable candidates should be "publish using a suboptimal option" and "do verified research about what the best option is, then do that."
As they say, "The perfect is the enemy of the good."
Sure, but I'm not even sure at this stage that publishing a paperback version with CreateSpace is a better use of 2 hours of Alex's time than the other stuff he's doing. Are there hidden gotchas that make publishing worse than not publishing even if it were totally free? (I've encountered many examples of this while running MIRI.) Will it actually take 5 hours rather than 2? I don't know the answers to these questions, and this isn't a priority. Deciding whether to publish a paperback copy of Smarter Than Us is, like, the 20th most important decisi...