Comments on Rationality and Mainstream Philosophy - Less Wrong

106 points · Post author: lukeprog · 20 March 2011 08:28PM


Comment author: lukeprog · 21 March 2011 06:10:42PM · 53 points

Eliezer,

When I wrote the post I didn't know that what you meant by "reductionist-grade naturalistic cognitive philosophy" was only the very narrow thing of dissolving philosophical problems to cognitive algorithms. After all, most of the useful philosophy you've done on Less Wrong is not specifically related to that very particular thing... which again supports my point that mainstream philosophy has more to offer than dissolution-to-algorithm. (Unless you think most of your philosophical writing on Less Wrong is useless.)

Also, I don't disagree with your decision not to cover means and ends in CEV.

Anyway. Here are some useful contributions of mainstream philosophy:

  • Quine's naturalized epistemology. Epistemology is a branch of cognitive science: that's where recursive justification hits bottom, in the lens that sees its flaws.
  • Tarski on language and truth. One of Tarski's papers on truth was recently ranked the 4th most important philosophy paper of the 20th century in a survey of philosophers. Philosophers have developed Tarski's account considerably since then, of course.
  • Chalmers' formalization of Good's intelligence explosion argument. Good's 1965 paper was important, but it presented no systematic argument; only hand-waving. Chalmers breaks down Good's argument into parts and examines the plausibility of each part in turn, considers the plausibility of various defeaters and possible paths, and makes a more organized and compelling case for Good's intelligence explosion than anybody at SIAI has.
  • Dennett on belief in belief. Used regularly on Less Wrong.
  • Bratman on intention. Bratman's 1987 book on intention has been a major inspiration to AI researchers working on belief-desire-intention models of intelligent behavior. See, for example, pages 60-61 and 1041 of AIMA (3rd ed.).
  • Functionalism and multiple realizability. The philosophy of mind most natural to AI was introduced and developed by Putnam and Lewis in the 1960s, and more recently by Dennett.
  • Explaining the cognitive processes that generate our intuitions. Both Shafir (1998) and Talbot (2009) summarize and discuss what cognitive scientists know about the cognitive mechanisms that produce our intuitions, and use that data to explore which intuitions might be trusted and which cannot - a conclusion that of course dissolves many philosophical problems generated from conflicts between intuitions. (This is the post I'm drafting, BTW.) Talbot describes the project of his philosophy dissertation for USC this way: "...where psychological research indicates that certain intuitions are likely to be inaccurate, or that whole categories of intuitions are not good evidence, this will overall benefit philosophy. This has the potential to resolve some problems due to conflicting intuitions, since some of the conflicting intuitions may be shown to be unreliable and not to be taken seriously; it also has the potential to free some domains of philosophy from the burden of having to conform to our intuitions, a burden that has been too heavy to bear in many cases..." Sound familiar?
  • Pearl on causality. You acknowledge the breakthrough. While you're right that this is mostly a case of an AI researcher coming in from the outside to solve philosophical problems, Pearl did indeed make use of the existing research in mainstream philosophy (and AI, and statistics) in his book on causality.
  • Drescher's Good and Real. You've praised this book as well, which is the result of Drescher's studies under Dan Dennett at Tufts. And the final chapter is a formal defense of something like Kant's categorical imperative.
  • Dennett's "intentional stance." A useful concept in many contexts, for example here.
  • Bostrom on anthropic reasoning. And global catastrophic risks. And Pascal's mugging. And the doomsday argument. And the simulation argument.
  • Ord on risks with low probabilities and high stakes. Here.
  • Deontic logic. The logic of actions that are permissible, forbidden, obligatory, etc. Not your approach to FAI, but it will be useful in constraining the behavior of partially autonomous machines prior to superintelligence, for example in the world's first battlefield robots.
  • Reflective equilibrium. Reflective equilibrium is used in CEV. It was first articulated by Goodman (1965), then by Rawls (1971), and in more detail by Daniels (1996). See also the more computational discussion in Thagard (1988), ch. 7.
  • Experimental philosophy on the biases that infect our moral judgments. Experimental philosophers are now doing Kahneman & Tversky -ish work specific to biases that infect our moral judgments. Knobe, Nichols, Haidt, etc. See an overview in Experiments in Ethics.
  • Greene's work on moral judgment. Joshua Greene is a philosopher and neuroscientist at Harvard whose work using brain scanners and trolley problems (since 2001) is quite literally decoding the algorithms we use to arrive at moral judgments, and helping to dissolve the debate between deontologists and utilitarians (in his view, in favor of utilitarianism).
  • Dennett's Freedom Evolves. The entire book is devoted to explaining the evolutionary processes that produced the cognitive algorithms that produce the experience of free will and the actual kind of free will we do have.
  • Quinean naturalists showing intuitionist philosophers that they are full of shit. See, for example, Schwitzgebel and Cushman demonstrating experimentally that moral philosophers have no special expertise in avoiding known biases. This is the kind of thing that brings people around to accepting those very basic starting points of Quinean naturalism as a first step toward doing useful work in philosophy.
  • Bishop & Trout on ameliorative psychology. Much of Less Wrong's writing is about how to use our awareness of cognitive biases to make better decisions and have a higher proportion of beliefs that are true. That is the exact subject of Bishop & Trout (2004), which they call "ameliorative psychology." The book reads like a long sequence of Less Wrong posts, and was the main source of my post on statistical prediction rules, which many people found valuable. And it came about two years before the first Eliezer post on Overcoming Bias. If you think that isn't useful stuff coming from mainstream philosophy, then you're saying a huge chunk of Less Wrong isn't useful.
  • Talbot on intuitionism about consciousness. Talbot (here) argues that intuitionist arguments about consciousness are illegitimate because of the cognitive process that produces them: "Recently, a number of philosophers have turned to folk intuitions about mental states for data about whether or not humans have qualia or phenomenal consciousness. [But] this is inappropriate. Folk judgments studied by these researchers are most likely generated by a certain cognitive system - System One - that will ignore qualia when making these judgments, even if qualia exist."
  • "The mechanism behind Gettier intuitions." This upcoming project of the Boulder philosophy department aims to unravel a central (misguided) topic of 20th century epistemology by examining the cognitive mechanisms that produce the debate. Dissolution to algorithm yet again. They have other similar projects ongoing, too.
  • Computational meta-ethics. I don't know if Lokhorst's paper in particular is useful to you, but I suspect that kind of thing will be, and Lokhorst's paper is only the beginning. Lokhorst is trying to implement a meta-ethical system computationally and then test what results it actually produces.
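As an aside on the deontic-logic item above: what makes these notions implementable is that, in Standard Deontic Logic, permission and prohibition are both definable from obligation alone (P(p) ≡ ¬O(¬p), F(p) ≡ O(¬p)). A minimal sketch in Python, where the toy norm set and all names are illustrative, not taken from Lokhorst's or anyone else's system:

```python
# Standard Deontic Logic operators defined from a primitive obligation
# predicate. Propositions are strings; negation is wrapped as ("not", p).

def negate(p):
    # Double negation cancels; otherwise wrap the proposition.
    return p[1] if isinstance(p, tuple) and p[0] == "not" else ("not", p)

def permissible(obligatory, p):
    # P(p) := not O(not p)
    return not obligatory(negate(p))

def forbidden(obligatory, p):
    # F(p) := O(not p)
    return obligatory(negate(p))

# Toy normative code: "do not harm a human" is obligatory.
norms = {("not", "harm_human")}
obligatory = lambda q: q in norms

print(forbidden(obligatory, "harm_human"))    # True
print(permissible(obligatory, "harm_human"))  # False
print(permissible(obligatory, "help_human"))  # True (not forbidden)
```

Real deontic logics for machine ethics add axioms for conditional obligation and conflict resolution; this only shows the interdefinability that makes the operators mechanizable at all.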

Of course that's far from all there is, but it's a start.

...also, you occasionally stumble across some neato quotes, like Dennett saying "AI makes philosophy honest." :)

Note that useful insights come from unexpected places. Rawls was not a Quinean naturalist, but his concept of reflective equilibrium plays a central role in your plan for Friendly AI to save the world.

P.S. Predicate logic was removed from the original list for these reasons.

Comment author: [deleted] · 21 March 2011 07:11:00PM · 7 points

It seems a shame to leave this list with several useful cites as a comment, where it is likely to be missed. Not sure what to suggest - maybe append it to the main article?

Comment author: lukeprog · 21 March 2011 07:15:34PM · 4 points

I added a link to this list to the end of the original post.