Part of the sequence: Rationality and Philosophy

In my last post, I showed that the brain does not encode concepts in terms of necessary and sufficient conditions. So, any philosophical practice which assumes this — as much of 20th century conceptual analysis seems to do — is misguided.

Next, I want to show that human abstract thought is pervaded by metaphor, and that this has implications for how we think about the nature of philosophical questions and philosophical answers. As Lakoff & Johnson (1999) write:

If we are going to ask philosophical questions, we have to remember that we are human... The fact that abstract thought is mostly metaphorical means that answers to philosophical questions have always been, and always will be, mostly metaphorical. In itself, that is neither good nor bad. It is simply a fact about the capacities of the human mind. But it has major consequences for every aspect of philosophy. Metaphorical thought is the principal tool that makes philosophical insight possible, and that constrains the forms that philosophy can take.

To understand how fundamental metaphor is to our thinking, we must remember that human cognition is embodied:

We have inherited from the Western philosophical tradition a theory of faculty psychology, in which we have a "faculty" of reason that is separate from and independent of what we do with our bodies. In particular, reason is seen as independent of perception and bodily movement...

The evidence from cognitive science shows that classical faculty psychology is wrong. There is no such fully autonomous faculty of reason separate from and independent of bodily capacities such as perception and movement. The evidence supports, instead, an evolutionary view, in which reason uses and grows out of such bodily capacities.

Consider, for example, the fact that as neural beings we must categorize things:

We are neural beings. Our brains each have 100 billion neurons and 100 trillion synaptic connections. It is common in the brain for information to be passed from one dense ensemble of neurons to another via a relatively sparse set of connections. Whenever this happens, the pattern of activation distributed over the first set of neurons is too great to be represented in a one-to-one manner in the sparse set of connections. Therefore, the sparse set of connections necessarily groups together certain input patterns in mapping them across to the output ensemble. Whenever a neural ensemble provides the same output with different inputs, there is neural categorization.

To take a concrete example, each human eye has 100 million light-sensing cells, but only about 1 million fibers leading to the brain. Each incoming image must therefore be reduced in complexity by a factor of 100. That is, information in each fiber constitutes a "categorization" of the information from about 100 cells.
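To make the many-to-one idea concrete, here is a minimal toy sketch of my own (not from Lakoff & Johnson; the sizes and names are arbitrary stand-ins). Projecting many distinct activation patterns through a much sparser set of connections necessarily maps different inputs onto the same output, which is what the quoted passage calls neural categorization:

```python
# Toy illustration (not from the source text): a dense "ensemble" feeding a
# sparse output channel groups distinct input patterns together.
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 100   # stand-in for a dense input ensemble (e.g. ~100 photoreceptors)
n_outputs = 1    # stand-in for a sparse output channel (e.g. 1 optic-nerve fiber)

# Fixed random "synaptic weights" from the dense ensemble to the sparse one.
weights = rng.normal(size=(n_outputs, n_inputs))

def output_pattern(input_pattern):
    """Project a dense activation pattern through the sparse connections
    and threshold it, yielding a coarse (binary) output pattern."""
    return tuple((weights @ input_pattern > 0).astype(int))

# Many distinct input patterns...
patterns = rng.integers(0, 2, size=(1000, n_inputs))
outputs = {output_pattern(p) for p in patterns}

# ...collapse onto very few distinct outputs: the same output for different
# inputs is "neural categorization" in the sense quoted above.
print(f"{len(patterns)} distinct inputs -> {len(outputs)} distinct outputs")
```

Run as written, a thousand distinct input patterns collapse onto at most two distinct outputs; the particular numbers don't matter, only the fact that a 100-to-1 bottleneck cannot preserve every distinction present on the input side.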

Moreover, almost all our categorizations are determined by the unconscious associative mind — outside our control and even our awareness — as we interact with the world. As Lakoff & Johnson note, "Even when we think we are deliberately forming new categories, our unconscious categories enter into our choice of possible conscious categories."

And because our categories are shaped not by a transcendent, universal faculty of reason but by the components of our sensorimotor system that process our interaction with the world, our concepts and categories end up being largely sensorimotor concepts and categories.

Here are some examples of metaphorical thought shaped by the sensorimotor system:

Important Is Big
Example: "Tomorrow is a big day."
Mapping: From importance to size.
Experience: As a child, finding that big things (e.g. parents) are important and can exert major forces on you and dominate your visual experience.

Intimacy Is Closeness
Example: "We've been close for years, but we're beginning to drift apart."
Mapping: From intimacy to physical proximity.
Experience: Being physically close to people you are intimate with.

Difficulties Are Burdens
Example: "She's weighed down by her responsibilities."
Mapping: From difficulty to muscular exertion.
Experience: The discomfort or disabling effect of lifting or carrying heavy objects.

More Is Up
Example: "Prices are high."
Mapping: From quantity to vertical orientation.
Experience: Observing the rise and fall of levels of piles and fluids as more is added or subtracted.

Categories Are Containers
Example: "Are tomatoes in the fruit or vegetable category?"
Mapping: From kinds to spatial location.
Experience: Observing that things that go together tend to be in the same bounded region.

Linear Scales Are Paths
Example: "John's intelligence goes way beyond Bill's."
Mapping: From degree to motion in space.
Experience: Observing the amount of progress made by an object.

Organization Is Physical Structure
Example: "How do the pieces of this theory fit together?"
Mapping: From abstract relationships to experience with physical objects.
Experience: Interacting with complex objects and attending to their structure.

States Are Locations
Example: "I'm close to being in a depression and the next thing that goes wrong will send me over the edge.
Mapping: From a subjective state to being in a bounded region of space.
Experience: Experiencing a certain state as correlated with a certain location (e.g. being cool under a tree, feeling secure in a bed).

Purposes Are Destinations
Example: "He'll ultimately be successful, but he isn't there yet."
Mapping: From achieving a purpose to reaching a destination in space.
Experience: Reaching destinations throughout everyday life and thereby achieving purposes (e.g. if you want food, you have to go to the fridge).

Actions Are Motions
Example: "I'm moving right along on the project."
Mapping: From action to moving your body through space.
Experience: The common action of moving yourself through space, especially in the early years of life when that is to some degree the only kind of action you can take.

Understanding Is Grasping
Example: "I've never been able to grasp transfinite numbers."
Mapping: From comprehension to object manipulation.
Experience: Getting information about an object by grasping and manipulating it.

As a neural being interacting with the world, you can't help but build up such "primary" metaphors:

If you are a normal human being, you inevitably acquire an enormous range of primary metaphors just by going about the world constantly moving and perceiving. Whenever a domain of subjective experience or judgment is coactivated regularly with a sensorimotor domain, permanent neural connections are established via synaptic weight changes. Those connections, which you have unconsciously formed by the thousands, provide inferential structure and qualitative experience activated in the sensorimotor system to the subjective domains they are associated with. Our enormous metaphoric conceptual system is thus built up by a process of neural selection. Certain neural connections between the activated source- and target-domain networks are randomly established at first and then have their synaptic weights increased through their recurrent firing. The more times those connections are activated, the more the weights are increased, until permanent connections are forged.
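The "recurrent coactivation strengthens connections" story can be caricatured in a few lines of code. This is a toy Hebbian-style sketch of my own, not a model from the book; the learning rule and constants are invented purely for illustration:

```python
# Toy Hebbian-style strengthening (illustrative only): a connection between a
# sensorimotor unit and a subjective-judgment unit grows each time the two
# fire together, as in the "Intimacy Is Closeness" pairing described above.
learning_rate = 0.05
weight = 0.0  # initially weak connection

def hebbian_update(weight, source_active, target_active):
    """Increase the weight only when source and target are active together;
    the weight saturates toward 1.0 with repeated co-activation."""
    if source_active and target_active:
        weight += learning_rate * (1.0 - weight)
    return weight

# Repeated co-activation, e.g. physical closeness co-occurring with intimacy.
for episode in range(200):
    weight = hebbian_update(weight, source_active=True, target_active=True)

print(f"connection strength after repeated co-activation: {weight:.3f}")
```

After a couple of hundred co-activations the weight saturates near 1.0, a crude stand-in for the "permanent connections" the passage describes; episodes in which the two domains are not co-active leave the weight unchanged.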

Primary metaphors are combined to build complex metaphors. For example, Actions Are Motions and Purposes Are Destinations are often combined to form a new metaphor:

A Purposeful Life Is a Journey
Example: "She seems lost, without direction. She's fallen off track. She needs to find her purpose and get moving again."

Can we think without metaphor, then? Yes. Our concepts of so-called "basic level" objects (that we interact with in everyday experience) are often literal, as are sensorimotor concepts. Our concepts of "tree" (the thing that grows in dirt), "grasp" (holding an object), and "in" (in the spatial sense) are all literal. But when it comes to abstract reasoning or subjective judgment, we tend to think in metaphor. We can't help it.

Implications for philosophical method

What happens when we fail to realize that our thinking is metaphorical? Let's consider a famous example: Zeno's paradox of the arrow.


Suppose, Zeno argues, that time really is a sequence of points constituting a time line. Consider the flight of an arrow. At any point in time, the arrow is at some fixed location. At a later point, it is at another fixed location. The flight of the arrow would be like the sequence of still frames that make up a movie. Since the arrow is located at a single fixed place at every time, where, asks Zeno, is the motion?

The puzzle arises when we take the metaphor of time as a sequence of discrete points along a spatial timeline to be literal:

Zeno's brilliance was to concoct an example that forced a contradiction upon us: [a contradiction between] literal motion and motion metaphorically conceptualized as a sequence of fixed locations at fixed points in time.
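For completeness, the standard calculus-based resolution (not available to Zeno, and only implicit in the passage above) defines motion at an instant as a limit over neighboring instants rather than as a property of a single still frame. In LaTeX, for an arrow with position function x(t):

```latex
% Instantaneous velocity: motion "at" t is defined by what happens
% arbitrarily close to t, not by the single frame at t alone.
v(t) = \lim_{h \to 0} \frac{x(t+h) - x(t)}{h}
```

On this definition a nonzero v(t) is perfectly compatible with the arrow occupying exactly one position at each instant, which is one way of spelling out why the paradox only bites if the still-frame metaphor is taken literally.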

Moral concepts as metaphors

For a more detailed illustration of the philosophical implications of metaphorical thought, let's examine the metaphors that ground our moral concepts:

Morality is fundamentally seen as the enhancing of well-being, especially of others. For this reason, ...basic folk theories of what constitutes fundamental well-being form the grounding for systems of moral metaphors around the world. For example, since most people find it better to have enough wealth to live comfortably than to be impoverished, we are not surprised to find that well-being is conceptualized as wealth...

We all conceptualize well-being as wealth. We understand an increase in well-being as a gain and a decrease of well-being as a loss or a cost. We speak of profiting from an experience, of having a rich life, of investing in happiness, and of wasting our lives... If you do something good for me, then I owe you something, I am in your debt. If I do something equally good for you, then I have repaid you and we are even. The books are balanced.

Well-Being Is Wealth is not the only metaphor behind our moral thinking. Here are a few others:

Being Moral Is Being Upright; Being Immoral Is Being Low; Evil Is a Force
Example: "He's an upstanding citizen. She's on the up and up. She's as upright as they come. That was a low thing to do. He's underhanded. I would never stoop to such a thing. She fell from grace. She succumbed to the floods of emotion and the fires of passion. She didn't have enough moral backbone to stand up to evil."

How does the metaphorical nature of our moral concepts constrain moral philosophy? Let us contrast a traditional view of moral concepts with the view of moral concepts emerging from cognitive science:

The traditional view of moral concepts and reasoning says the following: Human reasoning is compartmentalized, depending on what aspects of experience it is directed to. There are scientific judgments, technical judgments, prudential judgments, aesthetic judgments, and ethical judgments. For each type of judgment, there is a corresponding distinct type of literal concept. Therefore, there exists a unique set of concepts that pertain only to ethical issues. These ethical concepts are literal and must be understood only "in themselves" or by virtue of their relations to other purely ethical concepts. Moral rules and principles are made up from purely ethical concepts like these, concepts such as good, right, duty, justice, and freedom. We use our reason to apply these ethical concepts and rules to concrete, actual situations in order to decide how we ought to act in a given case.

… [But] there is no set of pure moral concepts that could be understood "in themselves" or "on their own terms." Instead, we understand morality via mappings of structures from other aspects and domains of our experience: wealth, balance, order, boundaries, light/dark, beauty, strength, and so on. If our moral concepts are metaphorical, then their structure and logic come primarily from the source domains that ground the metaphors. We are thus understanding morality by means of structures drawn from a broad range of dimensions of human experience, including domains that are never considered by the traditional view to be "ethical" domains. In other words, the constraints on our moral reasoning are mostly imported from other conceptual domains and aspects of experience...

An explosion of research in moral psychology since Lakoff & Johnson's book was published has confirmed these claims: converging evidence suggests that multiple competing systems contribute to our moral reasoning, and that they engage many processes not unique to the moral domain.

Once again, knowledge of cognitive science constrains philosophy:

This view of moral concepts as metaphoric profoundly calls into question the idea of a "pure" moral reason... [Moreover,] we do not have a monolithic, homogeneous, consistent set of moral concepts. For example, we have different, inconsistent, metaphorical structurings of our notion of well-being, and these are employed in moral reasoning.

 

Next post: Intuitions Aren't Shared That Way

Previous post: Concepts Don't Work That Way

 

 

Comments

In my last post, I showed that the brain does not encode concepts in terms of necessary and sufficient conditions. So, any philosophical practice which assumes this — as much of 20th century conceptual analysis seems to do — is misguided.

This argument must be missing something crucial, because it fails to account for why the necessary-and-sufficient approach is so fantastically useful in mathematics. Mathematics deals with human concepts. Many of these concepts are very likely not stored in the brain as necessary and sufficient conditions. (Concepts learned in a formal setting might be stored that way, but there's little reason to think that a common concept like "triangle" is for most people.) And yet it proved incredibly fruitful to recast these concepts in terms of necessary and sufficient conditions.

In the case of mathematics, it turns out to be worthwhile to think about concepts in the decidedly unnatural mode of necessary and sufficient conditions. One might reasonably have hoped that the same admittedly unnatural mode would prove similarly worthwhile for concepts like "democracy". After all, unnatural doesn't necessarily mean worse. Now, for concepts like "democracy", the unnatural approach does prove to be worse. But it can't be simply because the approach was unnatural.

TheOtherDave:
The OP isn't saying that analysis based on necessary-and-sufficient conditions (hereafter nasc) is valueless, he's saying that philosophy that assumes people ordinarily categorize using nasc is misguided philosophy.

he's saying that philosophy that assumes people ordinarily categorize using nasc is misguided philosophy.

Yes, that looks like a more accurate reading than the one I'd made. But, in that case, I think that Luke is incorrect to say that "much of 20th century conceptual analysis seems to" make that assumption. Philosophers who do conceptual analysis aren't making that assumption any more than were the mathematicians who NASC-ified pre-formal concepts about quantity or geometry.

It is one thing to suppose that there exists, in some ideal sense, a list of predicates satisfied by all and only the things that you would call "knowledge". Conceptual analysts do hope to find such a list. They assume that such a list is "out there" to be found. But they don't necessarily assume that your brain, when it encounters a possible case of knowledge, actually runs down an explicit list of predicates and checks each against the given case before deciding to call it "knowledge". They might think that we would be better off if we had such a list and used it in that way, but they don't necessarily assume that that is how the brain does things now.

The distin...

TheOtherDave:
Agreed that modeling human knowledge as a NASC-list is different from asserting that human brains represent beliefs via NASC-lists. I am insufficiently versed in academic philosophy to be entitled to an opinion as to whether modern philosophers who do conceptual analysis are doing the former or the latter. Agreed that if modern philosophers as a rule don't actually do the latter, then arguing that they're misguided for doing so is at best a waste of time, at worst actively deceptive.
lukeprog:
Absolutely. But as you say, "there's little reason to think that a common concept like 'triangle' is [stored in the brain in necessary and sufficient conditions] for most people." The error happens when the philosopher thinks that defining "goodness" or "knowledge" as a set of necessary and sufficient conditions actually captures his pre-theoretic intuitive concept of "goodness" or "knowledge." Mathematicians, I hope, are not making that mistake. They are working with a cleanly defined formal system, and have no illusions that their pre-theoretic intuitive concept of "infinity" exactly matches the term's definition in their formal system.

The error happens when the philosopher thinks that defining "goodness" or "knowledge" as a set of necessary and sufficient conditions actually captures his pre-theoretic intuitive concept of "goodness" or "knowledge." Mathematicians, I hope, are not making that mistake. They are working with a cleanly defined formal system, and have no illusions that their pre-theoretic intuitive concept of "infinity" exactly matches the term's definition in their formal system.

(Emphasis added.)

No to "exactly matches", but yes to "actually captures", in the sense of "actually captures enough of". A typical mathematical definition of "infinite" is "A set S is infinite if and only if there exists a bijection between S and a proper subset of S." It's not a coincidence that the pre-formal and formal concepts of "infinity" are both called "infinity". The formal concept captures enough of the pre-formal concept to deserve the same name. One can use the formal concept in lots of the places where people were accustomed to using the pre-formal concept, with the bonus that the forma... (read more)

lukeprog:
I agree it's a matter of degree. I suspect both philosophers and mathematicians succeed and fail on this issue in a wide range of degrees.
Tyrrell_McAllister:
I'm not sure what you mean by "fail on this issue". Are you saying that mathematicians who wonder about the physical realizability of a Hilbert-Hotel type scenario using the NASC definition of infinity are committing the error of which you, Lakoff, and Johnson accuse conceptual-analysis style philosophy? (You probably don't mean that, but I'm giving you my best guess so that you can bounce off of it while clarifying your intended meaning.)
lukeprog:
Sorry, I'm not familiar enough with the studies of infinity to say.
thomblake:
Even if that's the case, it seems you should explain what you mean by "fail on this issue" without reference to "studies of infinity".
Tyrrell_McAllister:
As thomblake said, you should still be able to clarify what you meant by "fail on this issue". In particular, what is "this issue", and in what sense do you suspect that mathematicians "fail" on it to some degree? At any rate, my original example of a common mathematical concept was "triangle" (you brought up "infinity"). So perhaps you can make your point in terms of triangles instead of infinity.
lukeprog:
I'll try to clarify, but I'm very pressed for time. Let me try this first: Does this comment help clarify what I was trying to claim in my previous post?
Tyrrell_McAllister:
Your linked comment doesn't clarify for me what you meant by "mathematicians succeed and fail on this issue in a wide range of degrees." That said, the comment does give a meaning to the sentence "Conceptual analysis makes assumptions about how the mind works" that I can agree with, though it seems weird to me to express that meaning with that sentence. The "assumption" is that our categories coincide with "tidy" lists of properties. First, I would call this a "hope" rather than an "assumption". Seeking a tidy definition is reasonable if there is a high-enough probability that such a definition exists, even if that probability is well below 50%. Second, it seems strange to me to characterize this as an issue of "how the mind works". That's kind of like saying that I can't give a short-yet-complete description of the human stomach because of "how epigenesis works".
Vladimir_Nesov:
This whole topic looks like a good candidate for saying "oops" about, though settling the details would take more work. (Specifically, does someone on LW understand your point and can re-state it?)
TheOtherDave:
Well, since you asked. FWIW, I understood lukeprog's comment to mean that both philosophers and mathematicians sometimes mistake the formal theoretical constructs they work with professionally for the related informal cognitive structures that existed prior to the development of those constructs (e.g., the formal definition of a triangle, of infinity, of knowledge, etc.), but that the degree of error involved in such a mistake depends on how closely the informal cognitive structure resembles the formal theoretical construct, and that he's not familiar enough with the formal theoretical constructs of infinity to express an opinion about to what degree mathematicians who wonder about the physical realizability of a Hilbert-Hotel type scenario are making the same error. That said, I'm not especially confident of that interpretation. And if I'm right, it hardly merits all this meta-discussion.
lukeprog:
Right. Let me take the way you said this and run with it. Here's what I'm trying to say, which does indeed strike me as not worth all that much meta-discussion, and not requiring me to say "oops," since it sounds uncontroversial to my ears: Both philosophers and mathematicians sometimes mistake the formal theoretical constructs they work with professionally for the related informal cognitive structures that existed prior to the development of those constructs (e.g., the formal definition of a triangle, of infinity, of knowledge, etc.), but the degree of error involved in such a mistake depends on how closely the informal cognitive structure resembles the formal theoretical construct and what is specifically being claimed by the practitioner, and these factors vary from work to work, practitioner to practitioner.
Vladimir_Nesov:
(See also this comment.) Both (pre-theoretic) "informal cognitive structures" and "formal theoretical constructs" (clearly things of different kinds) are ways of working with (in many cases) the same ideas, with formal tools being an improvement over informal understanding in accessing the same thing. The tools are clearly different, and the answers they can get are clearly different, but their purpose could well be identical. To evaluate their relation to their purpose, we need to look "from the outside" at how the tools relate to the assumed purpose's properties, and we might judge suitability of various tools for reflecting a given idea even where they clearly can't precisely define it or even in principle compute some of its properties. This actually seems to be an important point to get right in thinking about FAI/metaethics. Human decision problem (friendliness content), the thing FAI needs to capture more formally, is (in my current understanding) something human minds can't explicitly represent, can't use definition of in their operation. Instead, they perform something only superficially similar.
lukeprog:
For the record, I agree with what this comment means to me when I read it. :)
wedrifid:
If that is what Luke is talking about then he's making a good point.
Vladimir_Nesov:
How do these things relate to each other in the context of Luke's statement (and which are relevant): (A) mathematician's pre-theoretic idea of a triangle; (B) formal definition of a triangle; (C) mathematician's understanding of the formal definition; (D) the triangle itself; (E) mathematician's understanding of their understanding of the formal definition; (F) mathematician's understanding of their pre-theoretic idea of a triangle; (G) mathematician's understanding of a triangle? It seems to me that a very useful way of looking at what's going on is that both pre-theoretic understanding and understanding strengthened by having a formal definition are ways of understanding the idea itself, capturing it to different degrees (having it control mathematician's thought), with a formal definition being a tool for training the same kind of pre-theoretic understanding, just as a calculator serves to get to reliable answers faster. There is no clear-cut distinction, at least to the extent we are interested in actual answers to a question and not in actual answers that a given faulty calculator provides when asked that question (in which case we are focusing on a different question entirely, most likely a wrong one).
TheOtherDave:
In that context, my understanding of Luke's statement becomes "Mathematicians sometimes mistake B for A, but the degree of error involved in such a mistake depends on how closely B resembles A." C-G have nothing to do directly with the statement made, but personally I suspect that the relationship between F and E has a lot to do with the likelihood of mistaking B and A. I also think D is either overly vague or outright meaningless, and consequently G (assuming it exists) is either vague, complex or confused insofar as "triangle" in G means D. To say that a little differently: there are things in the world that humans would categorize as triangles in common speech, which might not conform to the necessary and sufficient conditions of a triangle used by mathematicians. For example, shapes made out of curved lines and shapes whose vertices are curves might not qualify as triangles in the mathematician's sense while still being triangles in the layman's sense. I do not agree that focusing attention on the question of what exactly is going on when laymen make that categorization is a wrong question, but I agree that it's not a question about geometry. I both agree and disagree that the layman's and the mathematician's understanding of "triangle" are both tools for understanding a single thing, which you've dubbed "the idea itself", depending on just what you mean by that phrase. In other words, I don't think the phrase "the idea itself" has a precise enough meaning to be useful in this sort of conversation. (This is also why I say that D and G are overly vague.)
Vladimir_Nesov:
By (A) I mean mathematician's thoughts, and by (B) a syntactic definition. Since these things can't possibly be confused, you must have read something different in my words... On the other hand, what these things talk about, that is whatever mathematician is thinking of, and whatever a formal definition defines, could be confused (given one possible reading, though not one I intended). Do you mean the latter? You can use a formal definition to access an idea, but separately from how the mathematician in question uses it to access the same idea. Using whatever means you have, you look both at how someone understands an idea (i.e. what thoughts occur, that have cognitive as well as mathematical explanations), and at the idea, and see how accurately it's captured.
TheOtherDave:
Ah, sorry: I understood by B what it seems you actually meant by C. I should have said: "Mathematicians sometimes mistake C for A, but the degree of error involved in such a mistake depends on how closely B resembles A." (The relationship between C and B is also a potential source of mathematicians' error, of course, but irrelevant to the mistake being described here.) === Re: your last paragraph... I certainly agree that for any X that exists outside of our minds, if I and a mathematician both construct separate understandings in our minds of X, it's in principle possible to compare those understandings to X to determine which of us is more accurate. So, for example, I agree that if that mathematician and I both look at this image: http://a3.mzstatic.com/us/r1000/042/Purple/25/f7/a3/mzl.xofowuov.480x480-75.jpg and she concludes it isn't a triangle (perhaps because the vertices are curved, perhaps because the bottom right vertex is missing, perhaps because the lines have nonzero thickness, perhaps for other reasons) while I conclude that it is, it follows that if a triangle (or an idea of a triangle, or a formal definition of a triangle) is a thing that exists outside of our minds then we can compare our different understandings to the actual triangle/idea/definition to determine which of us is more accurate. And I agree that it further follows that, as you say, to the extent that we are interested in actual answers to a question and not in actual answers that a given faulty calculator provides when asked that question (in which case we are focusing on a different question entirely, most likely a wrong one), to that extent we care about the more accurate of the two understandings more than the less accurate of them. That said, I am not certain how I would go about actually performing that comparison between my understanding and the actual formal definition of a triangle that exists outside of both my mind and the mathematician's, and I'm not convinced it ma
bogus:
Actually, this is rather dubious. Lakoff and Núñez's work Where Mathematics Comes From includes an extensive case study of what they call the Basic Metaphor of Infinity, and they argue that transfinite numbers do not account for all uses of infinity. (And this is not even addressing the issue of potential vs. actual infinity, which is quite central to their analysis.)
Tyrrell_McAllister:
I think that nearly everyone would agree with that.
bogus:
Well, you didn't. You stated that the definition "A set S is infinite if and only if there exists a bijection between S and a proper subset of S." i.e. sets with transfinite cardinality accounts for the pre-formal concept of "infinity", which it doesn't. Lakoff and Núñez provide a cognitive, metaphor-based analysis which is much more comprehensive.
Tyrrell_McAllister:
That would indeed be a strange claim, so it is fortunate that I did not make it.
antigonus:
I haven't read their book, but an analysis of the pre-theoretic concept of the infinitude of a set needn't be taken as an analysis of the pre-theoretic concept of infinitude in general. "Unmarried man" doesn't define "bachelor" in "bachelor of the arts," but that doesn't mean it doesn't define it in ordinary contexts.
bogus:
Except that Lakoff and Núñez's pre-theoretic analysis does account for transfinite sets. There is a single pre-theoretic concept of infinity which accounts for a variety of formal definitions. This is unlike the word "bachelor" which is an ordinary word with multiple meanings.
antigonus:
I'm having trouble seeing your point in the context of the rest of the discussion. Tyrrell claimed that the pre-theoretic notion of an infinite set - more charitably, perhaps, the notion of an infinite cardinality - is captured by Dedekind's formal definition. Here, "capture" presumably means something like "behaves sufficiently similarly so as to preserve the most basic intuitive properties of." Your response appears to be that there is a good metaphorical analysis of infinitude that accounts for this pre-theoretic usage as well as some others simultaneously. And by "accounts for X," I take it you mean something like "acts as a cognitive equivalent, i.e., is the actual subject of mental computation when we think about X." What is this supposed to show? Does anyone really maintain that human brains are actually processing terms like "bijection" when they think intuitively about infinity?
yttrium:
Why?
Tyrrell_McAllister:
Are you asking "What causes it to be worse?" or "Why do you think that it is worse?"?
dlthomas:
Both seem to be interesting questions. Answer both?
michaelsullivan:
I'll take a stab at an explanation for the first, which will also shed some light on why I lean toward suspecting the second, but I'm not familiar enough with current academic philosophy to make such a conclusion in general. The main thing that math has going for it is a language that is very different from ordinary natural languages. Yes, terms from various natural languages are borrowed, and often given very specific mathematical definitions that don't (can't if they are to be precise) correspond exactly to ordinary senses of the terms. But the general language contains many obvious markers that say "this is not an ordinary English (or whatever) sentence" even when a mathematical proof contains English sentences. On the other hand, a philosophical treatise reads like a book. A regular book in an ordinary natural language, a language which we are accustomed to understanding in ways that include letting ambiguity and metaphor give it extra depths of meaning. Natural language just doesn't map to formalism well at all. Trying to discuss anything purely formal without using a very specific language which contains big bold markers of rigor and formalism (as math does) is very likely to lead to a bunch of category errors and other subtle reasoning problems.
yttrium:
Seconding, though I originally meant the second question. I am also not sure whether you are referring to "conceptual analysis" (then the second question would be clear to me) or "nailing down a proper (more proper) definition before arguing about something" (then it would not).
atucker:
I think that the necessary and sufficient conditions approach is great when you're making things which fit a definition or working with a set of already known necessary and sufficient conditions, but really terrible when dealing with concepts that your brain formed implicitly without your supervision. Basically, if I can build something, I might as well enforce a legible and easy to deal with set of sufficient and necessary conditions on it. Things are only in there because I put them there, and if I know the rules well enough then it should be easy to manipulate. Looking at things that my brain already thinks, I would be surprised if they were anywhere near that legible. Math will catch up eventually to describing it, but it's harder and less natural, and the mathematical descriptions of things that I think will look weird compared to what naive hierarchical data structures look like. (i.e. lots of things will be conditioned on things that I think should be irrelevant, but the halo effect happens.)
Eugine_Nier:
That's because "democracy" involves dealing with interacting human minds and said minds run on metaphor.
Tyrrell_McAllister:
Conceptual analysis also seems to fail in some cases that don't involve dealing with human minds. For example, a conceptual-analysis approach to the concept of "living thing" versus "non-living thing" would probably be a mistake.

I'm going to be pretty critical here.

Problem 1: Just-so stories. There are stories told about many of the metaphors based only on guesses, which result in occasional inaccuracy. For example, "stand up to something" comes from fistfights, not a general tendency to see up as good.

Problem 2: Commonplace facts, or facts leading nowhere. For example, in English we often use the metaphor of life as a journey. We know this already. What should we do with it?

If the parts exhibiting these problems were eliminated from the post as it is now, you'd basically be left with bits of the introduction, one example of a common metaphor, the bit on Zeno's arrow, and the last ten lines. Not very much. Something I'd like to see an example of is change in non-moral ideas resulting in a change in moral behavior that uses the non-moral ideas as a metaphor, and implications for the project of trying to isolate "human morality."

Prismattic:
I'm trying and failing to find the study through Google, but I recall reading research where this was actually quite class-dependent. The middle class and affluent tend to conceive of life as a journey; the working poor tend to conceive of life as a struggle.
fortyeridania:
From the context (especially the heading immediately preceding this metaphor), I thought "standing up to something" was an example of evil as a force, not of evil is low, good is high/up. I voted up this comment because of this well-specified call for evidence.
torekp:
Good request. I'd like to see some reasoning on the other end of the problem: assuming that it's commonplace to construct morality upon metaphors, does this show that moral reasoning must be metaphor-guided? If so, how?
[anonymous]:

What happens when we fail to realize that our thinking is metaphorical?

We also get Deep Wisdom platitudes, like Pico Iyer's statement in How to Live Forever that went something like 'a life is like a book, and a book needs an ending, so lives need to end too.'

It seems to be a prediction of this idea that the metaphors you listed should be found even in extremely disconnected cultural settings: find a jungle tribe uncontaminated by western civilization, and you should expect their word for "destination" to also mean "goal", and so forth. Is this the case?

Which brings me to my next point... where's my mountain of footnotes/citations???

lukeprog:
Not quite. Different cultures can make slightly different metaphors. For example, there is at least one tribe that uses the metaphor of time as being a space in front of and behind the speaker, but while we think of the past as behind us and the future being in front of us, they think of the past in front of them (because they can "see" it) and the future behind them (because they can't see it). I'm experimenting with a new style. I cite only three 'review' sources from the literature: or rather, I link directly to them in the text instead of writing references for them. Hundreds of studies are available if one checks those sources. This kind of post takes much less time to write, but may be less useful or impressive or something.

Less impressive, but about as useful.

Bugmaster:
Sorry if this is a n00b question, but are there any quantitative studies that catalogue such metaphors, and their prevalence among multiple cultures ? The reason I ask is because (as far as I can tell, which admittedly isn't very far) claims such as "all people think X", or "all people think of Y when they consider X" have a poor track record. As soon as the claim comes out, a bunch of people contribute counterexamples, and the claim is downgraded to "most people in a very specific demographic think X".
Eugine_Nier:
I believe this is true for nearly all pre-industrial societies, including pre-industrial (or at least pre-enlightenment) western culture. The two meanings of the word "before", which can mean either in front of (spatially) or behind (temporally), are a remnant of the older metaphor.
k4ntico:
FYI I think like them - does it mean I am not part of us? :) I regularly have disputes over these classical sequences of apish ancestors transforming into men because I place the more recent behind and following the less recent, while the dominant view is to have the modern man lead his ancestors ranked behind him most-recent-first.
falenas108:
I believe all quotes were from the book at the beginning, but it still doesn't feel like a lukeprog post without at least a page of citations at the end.
Matt_Simpson:
I half expect that the article is unfinished or that only the first part of it was posted. It did end somewhat abruptly.
TheOtherDave:
It doesn't seem to me that the OP predicts identical metaphorical categorization across all cultures/languages, but in either case you don't in fact find it. Actually, see Lakoff's Women, Fire, and Dangerous Things for a detailed exploration of metaphorical categorization in a relatively "uncontaminated" linguistic environment.
Spurlock:
Thanks for the recommendation. I didn't mean "identical" so much as "very similar". The vast majority of human cultures have experiences like "parents are big and important", "heavy lifting is burdensome", "bed is comfy, tree is shady" and the like. Since the underlying machinery doing the "necessary" categorizing is shared, it seems that these metaphors not being largely similar across cultures is indicative of culture itself playing a strong role in how we choose/use metaphors. I suppose it's a minor win for the theory so long as all cultures use some metaphors for abstract concepts (as opposed to specialized terms/jargon), but the post seems to argue for them stemming from universal sensorimotor experiences, so if these experiences are truly at the heart of the phenomenon, I would expect to see a lot of cross-cultural similarity.
TheOtherDave:
Culture itself most assuredly plays a strong role in how we choose/use metaphors. Looking for universal metaphors might be interesting. I'm reasonably confident that "warm = nurturing" across a wide range of cultures, for example, or "path = plan"; I am less (but still significantly) confident about "big" = "important", even less confident about "more = up", etc. If we broaden the thesis to include non-identical metaphors, my confidence increases wildly. For example, I'm extremely confident that every human culture has some metaphor for "plan" that involves a process for getting from an initial to a final state.

Steven Pinker's The Stuff of Thought goes into detail about how we think metaphorically.

So, any philosophical practice which assumes this — as much of 20th century conceptual analysis seems to do — is misguided.

It seems worth noting that while it's indeed true of much of 20th century conceptual analysis, I don't think it's even true of most, and it's been pretty widely accepted for some time that necessary and sufficient conditions don't get you all the way to a definition.

Suppose, Zeno argues, that time really is a sequence of points constituting a time line. Consider the flight of an arrow. At any point in time, the arrow is at some fixed location. At a later point, it is at another fixed location. The flight of the arrow would be like the sequence of still frames that make up a movie. Since the arrow is located at a single fixed place at every time, where, asks Zeno, is the motion?

I never understood the paradox here. Isn't the answer just the change from one frame to the next?

Spurlock:
I think of Zeno's paradoxes as trying to appeal to the essence of dissolved questions. Sort of like, having decomposed "does the tree make a sound?" into "does it produce vibrations" versus "does it cause auditory experiences", somebody comes along and says "but does it make a SOUND???", emphasizing the word "sound" to appeal to your intuitions and make you feel (incorrectly) that something in reality has yet to be resolved. Here, "motion" plays the part of "sound", after a faulty reduction of "motion" into "series of still-frames". But if I were to be that guy who comes along and says "sound" until you feel uncomfortable again, I would say "What's to say your arrow doesn't just teleport from one frame to the next, rather than 'move'? Can't your 'change from one frame to the next' also be broken down into a series of frames?" I wouldn't lose any sleep over it, since you're obviously right and anyone who denies motion is obviously wrong, but that's at least where certain hapless philosophers are coming from.
billswift:
I don't remember how to do it well enough to explain it in detail, but the root of the problem was that people didn't yet understand summing convergent series. For example, 1/2 + 1/4 + 1/8 + 1/16 + ... = 1. It is discussed in some books on philosophy of math, I remember coming across it several times; unfortunately, a quick check of the books I have available right now can't find a source.
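For reference, the series mentioned here is a geometric series with ratio 1/2; its partial sums make the convergence explicit (a standard derivation, added purely for illustration):

```latex
% Partial sums of the geometric series with ratio 1/2 approach 1:
\sum_{k=1}^{n} \frac{1}{2^k} = 1 - \frac{1}{2^n},
\qquad
\lim_{n \to \infty} \sum_{k=1}^{n} \frac{1}{2^k} = 1
```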
complexmeme:
That's the solution to the Achilles and the Turtle Paradox (also Zeno's), but the Arrow Paradox (in the comment you replied to) is different. The Arrow Paradox is simply linguistic confusion, I think. Motion is a relation in space relative to different points of time, Zeno's statement that the (moving) arrow is at rest at any given instant is simply false (considered in relation to instants epsilon before or after that instant) or nonsensical (considered in enforced isolation with no information about any other instant). I never found the Arrow Paradox particularly compelling. For the Achilles and the Turtle Paradox I can at least see why someone might have found that confusing.
dlthomas:
Remember that these people were writing long before we had calculus. With regard to the Arrow Paradox in particular: if you're already comfortable with the notion of instantaneous rate of change, there is no paradox here. If you are not, and it seems sufficiently weird, then it may lead you to think there is.
billswift:
Oops, cached thought - I saw "Zeno's Paradox" and jumped to the most common one without reading the details.
jsbennett86:
The telling of this paradox I most remember says, "Between point A and point B, there are an infinite number of points through which the arrow must pass. So it must take the arrow an infinite amount of time to pass through those points. How can the arrow get from point A to point B?" This is the problem with mapping a mathematical metaphor onto reality: it doesn't always work. If the metaphor disagrees with the observation that the arrow does get from point A to point B, then it's not doing useful work. In fact, modern physics tells us there is a smallest possible length, the Planck length, which means there is not an infinite number of points through which the arrow must pass. Still, you don't need modern physics to defeat this paradox; you only need the ability to observe that the arrow does get from point A to point B.
Bugmaster:
I thought the problem with the paradox was that the math was wrong. Even if we assume that there's an infinite number of points between A and B, the more points we have, the less time the arrow would spend on each point, so if the number of point is infinite, the arrow would spend an infinitesimal amount of time at each point. As it turns out, you need to know about time series and limits (and maybe l'Hopital's rule) in order to correctly calculate the total flight time of the arrow (or, rather, to prove that it does not change even when the number of points is infinite), because infinity is not a number, and neither is 1 / infinity. Zeno did not know about these things, though.
jsbennett86:
Yes, you're right. You can defeat the paradox on mathematical grounds, without having to appeal to physics. But Zeno could have defeated it on his own without using any math, simply by realizing that his metaphor was not paying rent.
Bugmaster:
I think ArisKatsaris (on the sibling comment) is right: Zeno's whole goal was to prove that physics doesn't work (ok, he didn't call it "physics", but still), so using physics to disprove his paradox would be nonsensical.
ArisKatsaris:
Zeno's argument was that movement was an illusion, that all was one -- that was the point of his paradoxes. The fact that things seemed to move, in combination with his paradox, proved (to him) that reality was an illusion.

There is a difference between the brain encoding concepts a certain way and concepts themselves being a certain way (or best studied at a certain level of abstraction, or best characterized in terms of necessary and sufficient conditions, etc.). Analogously, when I think of the number 2, I might associate it with certain typical memories, perceptions, other mathematical ideas, etc. etc. None of this has anything (well, almost anything) to do with the number 2 itself, but rather merely with my way of grasping it.

Concepts, like the number 2, are so-called “...

Дмитрий Зеленский:
Very cool argument; note though that: 1) L&J directly reject analytic philosophy; 2) Frege ended up in a contradiction - namely, Russell's paradox.

... I don't recognize myself in any of these. Probably something weird about my brain, but your choice of examples not being diverse enough might also be a factor.

I can't seem to figure out if I'm just exceptionally poor at noticing my use of metaphor, or I use different metaphors but with the same general principle due to being so excessively visually oriented, or I've trained them away by obsessing about using more accurate mathematical ones. Or maybe something I haven't thought of.

Looking at your last page of comments, I see you mentioning an idea arising, being struck by an idea, your brain throwing rationalizations at an idea, and your inability to handle an idea. If that's typical, it would follow that you do use metaphor in your speech (no surprise; most people do) and your preferred metaphorical mode is kinesthetic, not visual.

Armok_GoB:
Thanks, this narrows it down to the first one I guess! Me sucking at noticing them.

Another important one: Height/Altitude is authority. Your boss is "above" you, the king, president or CEO is "at the top", you "climb the corporate ladder."

An interesting contribution to this topic is this book by Hofstadter and Sanders.

They explain thinking in terms of analogy, which as they use the term encompasses metaphor. This book is a mature, cognitive-sciencey articulation of many of the fun and loose ideas that Hofstadter first explored in G.E.B.

I mean, I always disliked L&J's work. Perhaps because there is a tendency for overstatements overall, perhaps because it often gets dragged to grammatical categories as well, where the basis is much lower (while lexical Time is certainly often metaphorized as both Money and Space, grammatical tense is never Money (and its similarity to spatial relationships, when it persists, is due to the concept of axis relevant for both)).

(Now, as linguists rarely agree on anything, there are certainly linguists (Croft 2001 "Radical Construction Grammar", for one) who claim that lexical/grammatical distinction is non-existent. I believe this is a no-go, but you might believe otherwise.)

If these metaphors are shaped by our sensorimotor systems, shouldn't we expect them to be similar across all cultures (and languages)? Are they, in fact, similar across all cultures? I am not a linguist or an anthropologist, so I can't really answer that question. I can identify a few metaphors that are different in Russian, but maybe those are outliers.

lukeprog:
Good question. See here.

Quantity is size

Example: "That's a large salary"

Mapping: From numbers and quantities to size.

Experience: Observing that multiple things occupy a larger volume than a single instance.

 

I hadn't previously appreciated just how deep this mapping is. Basically all language about numbers goes through size:  is a huge number,  is such a small portion. Fine, when comparing numbers you say "three is greater than two". But in Finnish we also say "three is bigger than two" and "five is smaller than seven", and "two plus two is as lar...
