This has been discussed in passing several times, but I thought it might be worthwhile to collect a list of recommended reading for new members and/or aspiring rationalists. There's probably going to be plenty of overlap with the SingInst reading list, but I think the purposes of the two are sufficiently distinct that a separate list is appropriate.
Some requests:
- A list of blog posts can be collected at another point in spacetime; for now, please stick to books, book sections, or essays¹.
- Please post a single suggestion per comment, so upvoting can determine the final list for the eternal fame of wikihood.
- Please limit yourself to 3-5 suggestions. We could probably all think of dozens; try to think about what would actually be best for the purposes of this site.
- Please only suggest an entry if you've read it. Judgement Under Uncertainty, while certain to make the list, should be put there by someone who has invested the time and waded through it (i.e., someone other than me).
- Please say why you're suggesting it. What did you learn from it? What is its specific relevance to rationality? (ETA)
Happy posting!
PS - Is there a "New Readers Start Here" page, or something similar (aside from "About")? I seem to remember someone talking about one, but I can't find it.
1"Everything Eliezer has ever written (since 2001)... twice!" while likely a highly beneficial suggestion for every single human being in existence, is not an acceptable entry. A Technical Explanation of Technical Explanation is fine. If you're not sure whether to classify something as "an essay" or "a blog post", there is a little-known trick to distinguish the two: essays contain small nuggets of vanadium ore, and blog posts contain shreds of palladium. Alternatively, just use your best judgement.
"Intelligence without Reason", "Intelligence without Representation", and "Elephants Don't Play Chess" by Rodney Brooks.
In my view, Brooks made the most serious attempt to define a paradigm for AI research. Brooks decried the AI research of the 1980s as plagued by "puzzlitis": researchers would cook up their own puzzles, then invent AI systems to solve them (often not very well). But why are those problems (e.g. chess) important? Do they really advance our understanding of intelligence? What criterion can be used to decide whether a theorem or algorithm is a contribution to AI? Is a string search algorithm a contribution to AI? What about a proof of the four-color theorem?
Brooks made the following bold suggestion: define the problems of relevance to AI to be those problems that real agents encounter in the real world. Thus, to do AI, one builds robots, puts them in the world, and observes the problems they encounter. Then one attempts to solve those real-world problems.
Now, I consider this paradigm-proposal to be flawed in many ways. But at least it's something: it provides a clean definition, and a path by which normal science can proceed.
(A line from "The Big Lebowski" comes to mind: "Say what you will about the tenets of national socialism, Dude, at least it's an ethos!")