Eliezer Yudkowsky's original Sequences have been edited, reordered, and converted into an ebook!
Rationality: From AI to Zombies is now available in PDF, EPUB, and MOBI versions on intelligence.org (link). You can choose your own price to pay for it (minimum $0.00), or buy it for $4.99 from Amazon (link). The contents are:
- 333 essays from Eliezer's 2006-2009 writings on Overcoming Bias and Less Wrong, including 58 posts that were not originally included in a named sequence.
- 5 supplemental essays from yudkowsky.net, written between 2003 and 2008.
- 6 new introductions by me, spaced throughout the book, plus a short preface by Eliezer.
The ebook's release has been timed to coincide with the end of Eliezer's other well-known introduction to rationality, Harry Potter and the Methods of Rationality. The two explore many of the same themes, and although Rationality: From AI to Zombies is (mostly) nonfiction, it is decidedly unconventional nonfiction, freely drifting in style from cryptic allegory to personal vignette to impassioned manifesto.
The 333 posts have been reorganized into twenty-six sequences, lettered A through Z. In order, these are titled:
- A — Predictably Wrong
- B — Fake Beliefs
- C — Noticing Confusion
- D — Mysterious Answers
- E — Overly Convenient Excuses
- F — Politics and Rationality
- G — Against Rationalization
- H — Against Doublethink
- I — Seeing with Fresh Eyes
- J — Death Spirals
- K — Letting Go
- L — The Simple Math of Evolution
- M — Fragile Purposes
- N — A Human's Guide to Words
- O — Lawful Truth
- P — Reductionism 101
- Q — Joy in the Merely Real
- R — Physicalism 201
- S — Quantum Physics and Many Worlds
- T — Science and Rationality
- U — Fake Preferences
- V — Value Theory
- W — Quantified Humanism
- X — Yudkowsky's Coming of Age
- Y — Challenging the Difficult
- Z — The Craft and the Community
Several sequences and posts have been renamed, so you'll need to consult the ebook's table of contents to spot all the correspondences. Four of the sequences are almost completely new: their posts were written at the same time as Eliezer's other Overcoming Bias posts, but were never ordered or grouped together. Some of the others (A, C, L, S, V, Y, Z) have been substantially expanded, shrunk, or rearranged, but are still based largely on old content from the Sequences.
One of the most common complaints about the old Sequences was that there was no canonical default order, especially for people who didn't want to read the entire blog archive chronologically. Despite being called "sequences," their structure looked more like a complicated, looping web than like a line. With Rationality: From AI to Zombies, it will still be possible to hop back and forth between different parts of the book, but this will no longer be required for basic comprehension. The contents have been reviewed for consistency and in-context continuity, so that they can genuinely be read in sequence. You can simply read the book as a book.
I have also created a community-edited Glossary for Rationality: From AI to Zombies. You're invited to improve on the definitions and explanations there, and add new ones if you think of any while reading. When we release print versions of the ebook (as a six-volume set), a future version of the Glossary will probably be included.
I liked Robby's introduction to the book overall, but I find it somewhat ironic that, right after the prologue in which Eliezer says one of his biggest mistakes in writing the Sequences was focusing on abstract philosophical problems removed from people's daily problems, the introduction begins with
The first example that comes to mind (though not necessarily the best) of how to rewrite this in a less abstract form would be something like: "Imagine that you're standing by the entrance of a university whose students are seven-tenths female and three-tenths male, and observing ten students go in..." The biased version would then be: "On the other hand, suppose that you happen to be standing by the entrance of the physics department, which is mostly male even though the university as a whole is mostly female."
Some unnecessary technical jargon also caught my eye in the first actual post. For example, "Rational agents make decisions that maximize the probabilistic expectation of a coherent utility function" could have been rewritten to be more broadly understandable, e.g. "rational agents make decisions that are the most likely to produce the kinds of outcomes they'd like to see".
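For readers who do want the jargon unpacked rather than removed, here is a minimal sketch of what "maximizing the probabilistic expectation of a utility function" cashes out to. The actions, outcome probabilities, and utility numbers below are invented purely for illustration; they are not from the book.

```python
def expected_utility(outcomes):
    """Probability-weighted average utility over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Two hypothetical actions, each with made-up outcome distributions:
# (probability of that outcome, utility if it happens).
actions = {
    "take_umbrella":  [(0.3, 5.0), (0.7, 8.0)],    # rain vs. no rain
    "leave_umbrella": [(0.3, -10.0), (0.7, 10.0)],
}

# "Maximizing expected utility" just means picking the action whose
# probability-weighted average utility is highest.
best = max(actions, key=lambda a: expected_utility(actions[a]))
```

Here the umbrella wins (0.3·5 + 0.7·8 = 7.1 versus 0.3·(−10) + 0.7·10 = 4.0), which is exactly the "most likely to produce the kinds of outcomes they'd like to see" idea stated formally.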
I could spend some time taking notes on these kinds of things and offering suggested rewrites to make the printed book more broadly accessible - would MIRI be interested in that, or would they prefer to keep the content as is?
Part of the idea behind the introduction is to replace an early series of posts: "Statistical Bias", "Inductive Bias", and "Priors as Mathematical Objects". These get alluded to several times later in the sequences, and the posts "An Especially Elegant Evolutionary Psychology Project", "Where Recursive Justification Hits Bottom", and "No Universally Compelling Arguments" all call back to the urn example. That said, I do think a more interesting example (whether or not it's more "ordinary" and everyday) would be a better note to start the book on.