There is a reason dolphins aren't fish.
It may not be a very good reason. To quote Wikipedia:
Because the term "fish" is defined negatively, and excludes the tetrapods (i.e., the amphibians, reptiles, birds and mammals) which descend from within the same ancestry, it is paraphyletic, and is not considered a proper grouping in systematic biology. The traditional term pisces (also ichthyes) is considered a typological, but not a phylogenetic classification.
In other words, there are probably fish that are more distantly related to each other than one of them is to a dolphin (or you).
Philosophy in the Flesh, by George Lakoff and Mark Johnson, opens with a bang:
So what would happen if we dropped all philosophical methods that were developed when we had a Cartesian view of the mind and of reason, and instead invented philosophy anew given what we now know about the physical processes that produce human reasoning?
Philosophy is a diseased discipline, but good philosophy can (and must) be done. I'd like to explore how one can do good philosophy, in part by taking cognitive science seriously.
Conceptual Analysis
Let me begin with a quick, easy example of how cognitive science can inform our philosophical methodology. The example below shouldn’t surprise anyone who has read A Human’s Guide to Words, but it does illustrate how misguided thousands of philosophical works can be due to an ignorance of cognitive science.
Consider what may be the central method of 20th century analytic philosophy: conceptual analysis. In its standard form, conceptual analysis assumes (Ramsey 1992) the “classical view” of concepts, that a “concept C has definitional structure in that it is composed of simpler concepts that express necessary and sufficient conditions for falling under C.” For example, the concept bachelor has the constituents unmarried and man. Something falls under the concept bachelor if and only if it is an unmarried man.
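The classical view can be sketched as code: a concept is just a boolean predicate, a conjunction of its necessary and sufficient conditions. (This is a toy illustration of the view, not anything from the cited literature; the feature names are my own.)

```python
# Toy illustration of the "classical view" of concepts: a concept is a
# boolean predicate built from necessary and sufficient conditions.
# 'bachelor' decomposes into the constituents 'unmarried' and 'man'.

def is_bachelor(person):
    """Something falls under 'bachelor' iff it is an unmarried man."""
    return person["sex"] == "male" and not person["married"]

alice = {"sex": "female", "married": False}
bob = {"sex": "male", "married": False}

print(is_bachelor(alice))  # False
print(is_bachelor(bob))    # True
```

Note the iff: on the classical view, membership is all-or-nothing, and every item either satisfies the full conjunction or falls outside the concept entirely.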
Conceptual analysis, then, is the attempt to examine our intuitive concepts and arrive at definitions (in terms of necessary and sufficient conditions) that capture the meaning of those concepts. De Paul & Ramsey (1999) explain:
The practice continues even today. Consider the conceptual analysis of knowledge. For centuries, knowledge was considered by most to be justified true belief (JTB). If Susan believed X but X wasn’t true, then Susan couldn’t be said to have knowledge of X. Likewise, if X was true but Susan didn’t believe X, then she didn’t have knowledge of X. And if Susan believed X and X was true but Susan had no justification for believing X, then she didn’t really have “knowledge,” she just had an accidentally true belief. But if Susan had justified true belief of X, then she did have knowledge of X.
And then Gettier (1963) offered some famous counterexamples to this analysis of knowledge. Here is a later counterexample, summarized by Zagzebski (1994):
As in most counterexamples to the JTB analysis of knowledge, the problem arises from “accidents” in the scenario:
A cottage industry sprang up around these “Gettier problems,” with philosophers proposing new sets of necessary and sufficient conditions for knowledge, and other philosophers raising counterexamples to them. Weatherson (2003) described this circus as “the analysis of knowledge merry-go-round.”
My purpose here is not to examine Gettier problems in particular, but merely to show that the construction of conceptual analyses in terms of necessary and sufficient conditions is mainstream philosophical practice, and has been for a long time.
Now, let me explain how cognitive science undermines this mainstream philosophical practice.
Concepts in the Brain
The problem is that the brain doesn’t store concepts in terms of necessary and sufficient conditions, so philosophers have been using their intuitions to search for something that isn’t there. No wonder philosophers have, for over a century, failed to produce a single, successful, non-trivial conceptual analysis (Fodor 1981; Mills 2008).
How do psychologists know the brain doesn’t work this way? Murphy (2002, p. 16) writes:
But before we get to Rosch, let’s look at a different experiment:
Category membership for concepts in the human brain is not a yes/no affair, as the “necessary and sufficient conditions” approach of the classical view assumes. Instead, category membership is fuzzy.
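The contrast with the classical view can be sketched as follows: instead of a yes/no predicate, model membership as a graded score, here crudely computed as similarity to a prototype. (This is my own toy sketch, not a model from the experiments cited; the feature sets are invented for illustration.)

```python
# Toy contrast to the classical view: category membership as a graded
# score in [0, 1] (overlap with a prototype) rather than a yes/no test.

def membership(item_features, prototype_features):
    """Fraction of the prototype's features the item shares."""
    shared = len(item_features & prototype_features)
    return shared / len(prototype_features)

bird_prototype = {"flies", "feathers", "lays_eggs", "sings"}

robin = {"flies", "feathers", "lays_eggs", "sings"}
penguin = {"feathers", "lays_eggs"}

print(membership(robin, bird_prototype))    # 1.0
print(membership(penguin, bird_prototype))  # 0.5
```

On a model like this, a robin is a "better" bird than a penguin, and borderline cases get intermediate scores, which is exactly the fuzziness the classical view cannot represent.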
Another problem for the classical view is raised by typicality effects:
So people agree that some items are more typical category members than others, but do these typicality effects manifest in normal cognition and behavior?
Yes, they do.
(If you want further evidence of typicality effects on cognition, see Murphy [2002] and Hampton [2008].)
The classical view of concepts, with its binary category membership, cannot explain typicality effects.
So the classical view of concepts must be rejected, along with any version of conceptual analysis that depends upon it. (If you doubt that many philosophers have done work dependent on the classical view of concepts, see here).
To be fair, quite a few philosophers have now given up on the classical view of concepts and the “necessary and sufficient conditions” approach to conceptual analysis. And of course there are other reasons why stipulating definitions in terms of necessary and sufficient conditions can be useful. But I wanted to begin with a clear and “settled” case of how cognitive science can undermine a particular philosophical practice and require that we ask and answer philosophical questions differently.
Philosophy by humans must respect the cognitive science of how humans reason.
Next post: Living Metaphorically
Previous post: When Intuitions Are Useful
References
Battig & Montague (1969). Category norms for verbal items in 56 categories: A replication and extension of the Connecticut category norms. Journal of Experimental Psychology Monograph, 80 (3, part 2).
De Paul & Ramsey (1999). Preface. In De Paul & Ramsey (eds.), Rethinking Intuition. Rowman & Littlefield.
Gettier (1963). Is justified true belief knowledge? Analysis, 23: 121-123.
Fodor (1981). The present status of the innateness controversy. In Fodor, Representations: Philosophical Essays on the Foundations of Cognitive Science. MIT Press.
Hampton (2008). Concepts in human adults. In Mareschal, Quinn, & Lea (eds.), The Making of Human Concepts (pp. 295-313). Oxford University Press.
McCloskey and Glucksberg (1978). Natural categories: Well defined or fuzzy sets? Memory & Cognition, 6: 462–472.
Mervis & Rosch (1981). Categorization of natural objects. Annual Review of Psychology, 32: 89–115.
Mervis & Pani (1980). Acquisition of basic object categories. Cognitive Psychology, 12: 496–522.
Mills (2008). Are analytic philosophers shallow and stupid? The Journal of Philosophy, 105: 301-319.
Murphy (2002). The Big Book of Concepts. MIT Press.
Murphy & Brownell (1985). Category differentiation in object recognition: Typicality constraints on the basic category advantage. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11: 70–84.
Posner & Keele (1968). On the genesis of abstract ideas. Journal of Experimental Psychology, 77: 353–363.
Ramsey (1992). Prototypes and conceptual analysis. Topoi, 11: 59-70.
Rips (1975). Inductive judgments about natural categories. Journal of Verbal Learning and Verbal Behavior, 14: 665–681.
Rips, Shoben, & Smith (1973). Semantic distance and the verification of semantic relations. Journal of Verbal Learning and Verbal Behavior, 12: 1–20.
Rosch (1975). Cognitive representations of semantic categories. Journal of Experimental Psychology: General, 104: 192–233.
Rosch, Simpson, & Miller (1976). Structural bases of typicality effects. Journal of Experimental Psychology: Human Perception and Performance, 2: 491–502.
Smith, Balzano, & Walker (1978). Nominal, perceptual, and semantic codes in picture categorization. In Cotton & Klatzky (eds.), Semantic Factors in Cognition (pp. 137–168). Erlbaum.
Weatherson (2003). What good are counterexamples? Philosophical Studies, 115: 1-31.