Perhaps. But considering that we are talking about chapter 26 of a 27 chapter textbook, and that the authors spent 5 pages explaining the concept of "mechanism design" back in section 17.6, and also considering that every American student learns about the political concept of "checks and balances" back in high school, I'm going to stick with the theory that they either misunderstood Yudkowsky, or decided to disagree with him without calling attention to the fact.
ETA: Incidentally, if the authors are inserting their own opinion and disagreeing with Yudkowsky, I tend to agree with them. In my (not yet informed) opinion, Eliezer dismisses the possibility of a multi-agent solution too quickly.
> Eliezer dismisses the possibility of a multi-agent solution too quickly.
A multi-machine solution? Is that so very different from one machine with a different internal architecture?
AI: A Modern Approach is by far the dominant textbook in the field. It is used in 1200 universities, and is the 25th most-cited publication in computer science. If you're going to learn AI, this is how you learn it.
Luckily, the concepts of AGI and Friendly AI get pretty good treatment in the 3rd edition, released in 2009.
The Singularity is mentioned in the first chapter on page 12. Both AGI and Friendly AI are also mentioned in the first chapter, on page 27:
Chapter 26 is about the philosophy of AI, and section 26.3 is "The Ethics and Risks of Developing Artificial Intelligence." The risks it discusses are:
Each of those sections is one or two paragraphs long. The final risk takes up 3.5 pages: (6) The success of AI might mean the end of the human race. Here's a snippet:
Then they mention Moravec, Kurzweil, and transhumanism, before returning to a more concerned tone about AI. They cover Asimov's three laws of robotics, and then:
It's good this work is getting such mainstream coverage!