
Daniel_Burfoot comments on Recommended reading for new rationalists - Less Wrong

Post author: XFrequentist, 09 July 2009 07:47PM




Comment author: Daniel_Burfoot, 11 July 2009 07:38:59PM, 2 points

"The Structure of Scientific Revolutions" by TS. Kuhn.

Enormously influential book-length essay about how science progresses. Kuhn describes the idea of "normal science" - the everyday activities by which scientists take incremental steps forward. For normal science to be fruitful, it must be carried out within the context of a paradigm - a theoretical framework and set of shared commitments held by a scientific community. If no paradigm exists, or if the current paradigm is flawed, the incremental steps add up to nothing and no progress is made.

This idea is important for anyone interested in doing AI research. AI is a field without a paradigm: hundreds of papers are published every year, but they do little to advance our understanding. Every serious AI researcher must confront the deep conceptual problems of the field, and must begin by articulating his own paradigm. There is little point in continuing in the same style of research carried out by our predecessors: it leads only to esoteric branches of applied mathematics and engineering projects of questionable utility.

Comment author: Daniel_Burfoot, 11 July 2009 07:57:30PM, 1 point

"Intelligence without Reason", "Intelligence without Representation", and "Elephants Don't Play Chess" by Rodney Brooks.

In my view, Brooks made the most serious attempt to define a paradigm for AI research. Brooks decried the AI research of the 80s as being plagued by "puzzlitis" - researchers would cook up their own puzzles, then invent AI systems to solve them (often not very well). But why are those puzzles (e.g. chess) important? Do they really advance our understanding of intelligence? What criterion can be used to decide whether a theorem or algorithm is a contribution to AI? Is a string search algorithm a contribution to AI? What about a proof of the four-color theorem?

Brooks made the following bold suggestion: define the problems of relevance to AI to be those problems that real agents encounter in the real world. Thus, to do AI, one builds robots, puts them in the world, and observes the problems they encounter. Then one attempts to solve those real world problems.

Now, I consider this paradigm-proposal to be flawed in many ways. But at least it's something: it provides a clean definition, and a path by which normal science can proceed.

(A line from "The Big Lebowski" comes to mind: "Say what you will about the tenets of national socialism, Dude, at least it's an ethos!")