Part of the sequence: Rationality and Philosophy
Philosophy is notable for the extent to which disagreements with respect to even the most basic questions persist among its most able practitioners, despite the fact that the arguments thought relevant to the disputed questions are typically well-known to all parties to the dispute.
The goal of philosophy is to uncover certain truths... [But] philosophy continually leads experts with the highest degree of epistemic virtue, doing the very best they can, to accept a wide array of incompatible doctrines. Therefore, philosophy is an unreliable instrument for finding truth. A person who enters the field is highly unlikely to arrive at true answers to philosophical questions.
After millennia of debate, philosophers remain heavily divided on many core issues. According to the largest-ever survey of philosophers, they're split roughly 25%-24%-18% among deontology, consequentialism, and virtue ethics, 35%-27% between empiricism and rationalism, and 57%-27% between physicalism and non-physicalism.
Sometimes they are even divided on psychological questions that psychologists have already answered. Philosophers are split evenly on whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know this is possible for some people with damage to the brain's reward system, for example many Parkinson's patients and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1
Why are physicists, biologists, and psychologists more prone to reach consensus than philosophers?2 One standard story is that "the method of science is to amass such an enormous mountain of evidence that... scientists cannot ignore it." Hence, religionists might still argue that Earth is flat or that evolutionary theory and the Big Bang theory are "lies from the pit of hell," and philosophers might still be divided about whether somebody can make a moral judgment they aren't themselves motivated by, but scientists have reached consensus about such things.
In its dependence on masses of evidence and definitive experiments, science doesn't trust your rationality:
Science is built around the assumption that you're too stupid and self-deceiving to just use [probability theory]. After all, if it was that simple, we wouldn't need a social process of science... [Standard scientific method] doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.
Sometimes, you can answer philosophical questions with mountains of evidence, as with the example of moral motivation given above. But for many philosophical problems, overwhelming evidence simply isn't available. Or maybe you can't afford to wait a decade for definitive experiments to be done. Thus, "if you would rather not waste ten years trying to prove the wrong theory," or if you'd like to get the right answer without overwhelming evidence, "you'll need to [tackle] the vastly more difficult problem: listening to evidence that doesn't shout in your ear."
This is why philosophers need rationality training even more desperately than scientists do. Philosophy asks you to get the right answer without evidence that shouts in your ear. The less evidence you have, or the harder it is to interpret, the more rationality you need to get the right answer. (As likelihood ratios get smaller, your priors need to be better and your updates more accurate.)
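The parenthetical point about likelihood ratios can be made concrete with a one-line Bayesian update. The sketch below uses invented numbers purely for illustration: a "loud" piece of evidence (likelihood ratio 1000) rescues even a sloppy prior, while a "quiet" one (likelihood ratio 2) barely moves it, so with quiet evidence only a well-calibrated prior lands you near the truth.

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability of a hypothesis after one Bayesian update.

    likelihood_ratio = P(evidence | hypothesis) / P(evidence | not-hypothesis).
    Works in odds form: posterior odds = prior odds * likelihood ratio.
    """
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Evidence that shouts (LR = 1000) overwhelms a badly miscalibrated prior:
print(posterior(0.05, 1000))  # ~0.98

# Evidence that whispers (LR = 2) barely moves that same bad prior:
print(posterior(0.05, 2))     # ~0.095

# With the same whisper, a well-calibrated prior still gets close to the truth:
print(posterior(0.5, 2))      # ~0.67
```

This is the sense in which, as the evidence gets weaker, your priors have to carry more of the load.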
Because it tackles so many questions that can't be answered by masses of evidence or definitive experiments, philosophy needs to trust your rationality even though it shouldn't: we generally are as "stupid and self-deceiving" as science assumes we are. We're "predictably irrational" and all that.
But hey! Maybe philosophers are prepared for this. Since philosophy is so much more demanding of one's rationality, perhaps the field has built top-notch rationality training into the standard philosophy curriculum?
Alas, it doesn't seem so. I don't see much Kahneman & Tversky in philosophy syllabi — just lightweight "critical thinking" classes and lists of informal fallacies. But even classes in human bias might not improve things much, due to the sophistication effect: someone with a sophisticated knowledge of fallacies and biases might just have more ammunition with which to attack views they don't like. So what's really needed is regular training in the habits of genuine curiosity, mitigation of motivated cognition, and so on.
(Imagine a world in which Frank Jackson's famous reversal on the knowledge argument wasn't news — because established philosophers changed their minds all the time. Imagine a world in which philosophers were fine-tuned enough to reach consensus on 10 bits of evidence rather than 1,000.)
We might also ask: How well do philosophers perform on standard tests of rationality, for example Frederick (2005)'s CRT? Livengood et al. (2010) found, via an internet survey, that subjects with graduate-level philosophy training had a mean CRT score of 1.32. (The best possible score is 3.)
A score of 1.32 isn't radically different from the mean CRT scores found for psychology undergraduates (1.5), financial planners (1.76), Florida Circuit Court judges (1.23), Princeton undergraduates (1.63), and people who happened to be sitting along the Charles River during a July 4th fireworks display (1.53). It is also noticeably lower than the mean CRT scores found for MIT students (2.18) and for attendees of a LessWrong.com meetup group (2.69).
Moreover, several studies show that philosophers are just as prone to particular biases as laypeople (Schulz et al. 2011; Tobia et al. 2012), for example order effects in moral judgment (Schwitzgebel & Cushman 2012).
People are typically excited about the Center for Applied Rationality because it teaches thinking skills that can improve one's happiness and effectiveness. That excites me, too. But I hope that in the long run CFAR will also help produce better philosophers, because it looks to me like we need top-notch philosophical work to secure a desirable future for humanity.3
Next post: Train Philosophers with Pearl and Kahneman, not Plato and Kant
Previous post: Intuitions Aren't Shared That Way
Notes
1 Clearly, many philosophers have advanced versions of motivational internalism that are directly contradicted by these results from psychology. However, we don't know exactly which version of motivational internalism is defended by each survey participant who said they "accept" or "lean toward" motivational internalism. Perhaps many of them defend weakened versions of motivational internalism, such as those discussed in section 3.1 of May (forthcoming).
2 Mathematicians reach even stronger consensus than physicists, but they don't appeal to what is usually thought of as "mountains of evidence." What's going on there? Mathematicians and philosophers almost always agree about whether a proof or an argument is valid, given a particular formal system. The difference is that a mathematician's premises consist in axioms and in theorems already strongly proven, whereas a philosopher's premises consist in substantive claims about the world for which the evidence given is often very weak (e.g., that philosopher's own intuitions).
Well, this is very rapidly getting us into complex territory that future decision-theory posts will hopefully explore, but a very brief answer would be that I am unwilling to define anything fundamental in terms of do() operations because our universe does not contain any do() operations, and counterfactuals are not allowed to be part of our fundamental ontology because nothing counterfactual actually exists and no counterfactual universes are ever observed. There are quarks and electrons, or rather amplitude distributions over joint quark and lepton fields; but there is no do() in physics.
Causality seems to exist, in the sense that the universe seems completely causally structured: there is causality in physics. On a microscopic level where no "experiments" ever take place and there are no uncertainties, the microfuture is still related to the micropast with a neighborhood-structure whose laws would yield a continuous analogue of D-separation if we became uncertain of any variables.
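The D-separation criterion mentioned here can be illustrated by simulation on the simplest possible graph. In the invented toy model below, a chain A → B → C makes A and C dependent marginally, but conditioning on B blocks the path, so A becomes uninformative about C (this is just a numerical sketch of the textbook criterion, not the continuous analogue the comment gestures at):

```python
import random

random.seed(0)

# Simulate the causal chain A -> B -> C with noisy binary variables.
def sample():
    a = random.random() < 0.5
    b = a if random.random() < 0.9 else (not a)  # B copies A 90% of the time
    c = b if random.random() < 0.9 else (not b)  # C copies B 90% of the time
    return a, b, c

data = [sample() for _ in range(200_000)]

def p_c(rows):
    """Estimate P(C = True) over the given subset of samples."""
    rows = list(rows)
    return sum(c for _, _, c in rows) / len(rows)

# Marginally, A and C are dependent: knowing A shifts P(C).
print(p_c(r for r in data if r[0]))        # ~0.82
print(p_c(r for r in data if not r[0]))    # ~0.18

# Conditioning on B d-separates A from C: given B = True,
# the value of A no longer changes P(C).
print(p_c(r for r in data if r[1] and r[0]))       # ~0.90
print(p_c(r for r in data if r[1] and not r[0]))   # ~0.90
```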
Counterfactuals are human hypothetical constructs built on top of high-level models of this actually-existing causality. Experiments do not perform actual interventions and access alternate counterfactual universes hanging alongside our own; they just connect hopefully-Markov random numbers into a particular causal arrow.
Another way of saying this is that a high-level causal model is more powerful than a high-level statistical model because it can induct and describe switches, as causal processes, which behave as though switching arrows around, and yields predictions for this new case even when the settings of the switches haven't been observed before. This is a fancypants way of saying that a causal model lets you throw a bunch of rocks at trees, and then predict what happens when you throw rocks at a window for the first time.
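The rocks-and-windows point can be sketched in code. In the invented toy model below, a structural causal model is just a set of mechanisms, and an intervention (Pearl's do()) is surgery that overrides one mechanism while reusing the rest; the same fitted machinery then yields a prediction for the intervened regime:

```python
import random

random.seed(1)

# A toy structural causal model: Season -> Sprinkler -> WetGrass,
# with Season also affecting WetGrass directly.
def season():
    return random.choice(["dry", "rainy"])

def sprinkler(s):
    # The sprinkler usually runs in the dry season.
    return random.random() < (0.8 if s == "dry" else 0.1)

def wet_grass(s, spr):
    p = 0.9 if (spr or s == "rainy") else 0.05
    return random.random() < p

def simulate(n, do_sprinkler=None):
    """Estimate P(wet grass); do_sprinkler overrides the sprinkler
    mechanism, i.e. performs the surgery do(sprinkler = value)."""
    wet = 0
    for _ in range(n):
        s = season()
        spr = sprinkler(s) if do_sprinkler is None else do_sprinkler
        wet += wet_grass(s, spr)
    return wet / n

print(simulate(100_000))                     # observational P(wet), ~0.815
print(simulate(100_000, do_sprinkler=True))  # P(wet | do(sprinkler=on)), ~0.90
```

A purely statistical model of the same samples could only report correlations in the observed regime; the causal decomposition into mechanisms is what licenses the prediction after the switch is flipped.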
Reading this whole thread, I'm interested to know what your thoughts on causality are. Do you have existing posts on the subject that I should re-read? I was under the impression you pretty much agreed with Pearl, but now that seems not to be the case.
By the way, Pearl certainly wasn't arguing from a "free will" perspective - rather, I think he'd agree with "there is no do() in physics" but disagree that "there is causality in physics".