Yvain comments on Concepts Don't Work That Way - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I always find it a red flag when it seems like an entire group of highly-educated people is doing something ridiculously stupid. If assuming the brain thinks in terms of necessary-and-sufficient would be really stupid, maybe that's not what conceptual analysts are doing.
The idea that our brain's fuzzy Type 1 thinking can be translated into precise Type 2 thinking is one of the foundations of science and mathematics, not to mention philosophy. I'd been drawing and seeing circles for years as a child before I learned that a circle is the set of points in a plane equidistant from a center point, but this latter definition accurately captures a necessary and sufficient condition for circles. Anyone who says "your brain doesn't really process circles based on that definition, it's just pattern-matching other circles you've seen" would be missing the point.
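Written out, that definition is a single membership condition: a point lies on the circle if and only if the equation holds, which is exactly what "necessary and sufficient" means here.

```latex
\text{Circle}(c, r) \;=\; \{\, p \in \mathbb{R}^2 \;:\; \lVert p - c \rVert = r \,\}
```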
And this process sometimes works even with natural categories. Wikipedia defines "birds" as "feathered, winged, bipedal, endothermic (warm-blooded), egg-laying, vertebrate animals", and as far as I can tell, this is necessary and sufficient for birds (some sources say kiwis are wingless, but others say they have small, vestigial wings). Although birds are the classic example of a fuzzy mental category, like the "circle" category it turns out to have a pretty good necessary-and-sufficient definition after all. Likewise, "fish" are "all gill-bearing aquatic vertebrate animals that lack limbs with digits".
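A necessary-and-sufficient definition like Wikipedia's is just a bare conjunction of predicates. As a minimal sketch (the feature names below are hypothetical illustrations, not any real taxonomy API), something counts as a bird exactly when every condition holds:

```python
# Sketch: a necessary-and-sufficient definition as a conjunction of
# predicates. Feature names are hypothetical, chosen to mirror the
# Wikipedia definition quoted above.
BIRD_CONDITIONS = ("feathered", "winged", "bipedal",
                   "endothermic", "egg_laying", "vertebrate")

def is_bird(animal: dict) -> bool:
    """True iff every condition holds: each condition is necessary
    (lacking any one disqualifies), and jointly they are sufficient."""
    return all(animal.get(cond, False) for cond in BIRD_CONDITIONS)

# A kiwi satisfies all six conditions (its vestigial wings still count).
kiwi = {cond: True for cond in BIRD_CONDITIONS}

# A whale fails 'feathered' (among others), so it is excluded.
whale = {"vertebrate": True, "endothermic": True}
```

The point of the sketch is that classification by such a definition is all-or-nothing: unlike the brain's graded pattern-matching, there are no "more bird-like" or "less bird-like" members.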
Even if the above turn out to be incorrect (we find an animal we intuitively classify as a fish that does not meet that definition), it's still interesting and potentially useful to have a prediction rule that works 99+% of the time. And someone who started out believing whales to be fish could, armed with such a 99%-rule, correct her error.
So even if our brains don't naturally think in terms of necessary-and-sufficient, it's not immediately obvious that it's stupid and impossible to try to come up with necessary-and-sufficient conditions for our categories.
There may be some sets of borders in thingspace which are better than others, in the same way that there are some borders for an independent Palestinian state that are better than others (even though we're not sure exactly where the border should be, sticking Tel Aviv in Palestine, or Ramallah in Israel, would be a mistake).
Fixing borders in thingspace can determine the status of edge cases. Sometimes this can even be useful; for example, if I am allergic to fish, then having a correct boundary for "fish" will let me know I can safely eat whale meat. I may not know exactly what chemical in fish causes my allergic reaction, or even know that allergy is an immune reaction to specific chemicals - but being able to draw the category boundaries accurately will "miraculously" predict that whale meat will not trigger my allergy.
Or more realistically, coming up with a set of criteria for "good" will help me determine that stoning homosexuals is bad, even if I previously didn't realize this. And philosophers have come up with some pretty good definitions for "good" that can do this - not universally accepted, by any means, but useful to those who know them.
I think you have a strong case against conceptual analysis, and some very intelligent commentary on where conceptual analysis does work - but it's all in Conceptual Analysis and Moral Theory. This post seems to be attacking a straw man by accusing conceptual analysts of necessarily trying to model the human brain and then doing a bad job of it.
I haven't claimed this, and in fact have specifically denied it. But it is apparently a common reading of my post, so I've added a sentence toward the end to make this clear. Sorry about that.
I think it is, in many cases. Maybe the clearest argument for this is from Ramsey (1992). I'll quote an extended passage below, though you may want to skip to the part that reads: "At first blush, it might seem a little odd to suppose that conceptual analysis involves any presuppositions about the way our minds work..."
BTW, Sandin (2006) makes the (correct) reply to Ramsey that seeking (stipulated) necessary-and-sufficient-conditions definitions for concepts can be useful even if Ramsey is right that the classical view of concepts is wrong:
Also, I admit there are philosophers who disagree with me about what philosophers have been doing all along. See, for example, Nimtz (2009):
However, even this statement admits that conceptual analysis grounded in the Socratic analysanda is doomed. There's been an awful lot of that since Socrates.
Moreover, while I agree that conceptual analysis seeking application conditions for our terms can succeed, this is not the most common notion of what a 'concept' is in 20th-century analytic philosophy. The standard notion of a concept - the thing being analyzed - is that it is a kind of mental representation. The problem, then, is that mental representations do not come in neat bundles of necessary and sufficient conditions.
McBain (2008) recognizes that both sorts of conceptual analysis go on. He calls the 'seeking concepts out there' approach "robust conceptual analysis" and the 'seeking concepts in our heads' approach "modest conceptual analysis."
He notes that a third form of conceptual analysis may be the dominant one today: "reflective equilibrium." That will be the topic of another post of mine.