[I'd put this in an open thread, but those don’t seem to happen these days, and while this is a quote it isn't a Rationality Quote.]

You know, one of the really weird things about us human beings […] is that we have somehow created for ourselves languages that are just a bit too flexible and expressive for our brains to handle. We have managed to build languages in which arbitrarily deep nesting of negation and quantification is possible, when we ourselves have major difficulties handling the semantics of anything beyond about depth 1 or 2. That is so weird. But that's how we are: semantic over-achievers, trying to use languages that are quite a bit beyond our intellectual powers.

Geoffrey K. Pullum, Language Log, “Never fails: semantic over-achievers”, December 1, 2011

This seems like it might lead to something interesting to say about the design of minds and the usefulness of generalization/abstraction, or perhaps just a good sound bite.


There's a related problem; Humans have a tendency to once they have terms for something take for granted that something that at a glance seem to make rough syntactic sense actually has semantics behind it. A lot of theology and the bad ends of philosophy have this problem. Even math has run into this issue. Until limits were defined rigorously in the mid-19th century there was disagreement over what the limit of 1 - 1 + 1 - 1 + 1 - 1 + 1... was. Is it 1, because one can group it as 1 + (-1 + 1) + (-1 + 1)...? Or maybe it is zero, since one can write it as (1 - 1) + (1 - 1) + (1 - 1)...? This did, however, lead to good math and other notions of limits, including the entire area of what would later be called Tauberian theorems.
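To make that concrete: the partial sums of the series never settle down, which is why the ordinary limit is undefined, while Cesàro summation (one of the alternative notions of limit that grew out of this episode) averages the partial sums and assigns the series the value 1/2. A minimal Python sketch:

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ... oscillate
# between 1 and 0, so the ordinary limit does not exist.
terms = [(-1) ** n for n in range(10)]  # 1, -1, 1, -1, ...
partial_sums = []
total = 0
for t in terms:
    total += t
    partial_sums.append(total)
print(partial_sums)  # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

# Cesàro summation averages the partial sums; the averages do
# converge, assigning the series the value 1/2.
cesaro_means = [sum(partial_sums[:k + 1]) / (k + 1)
                for k in range(len(partial_sums))]
print(cesaro_means[-1])  # tends to 0.5
```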

There's a related problem; Humans have a tendency to once they have terms for something take for granted that something that looks at a glance to make rough syntactic sense that it actually has semantics behind it.

This sentence is so convoluted that at first I thought it was some kind of meta joke.

Well, the extra "that" before "that it actually" really doesn't help matters. I've tried to make it slightly better but it still seems to be a bit convoluted.

[anonymous]

This?

There's a related problem: Once they have terms for something, humans have a tendency to take for granted that anything that appears to make superficial syntactic sense actually has semantics behind it.

Or just use a bunch of commas?

There's a related problem; Humans have a tendency, once they have terms for something, to take for granted that something that looks, at a glance, to make rough syntactic sense actually has semantics behind it.

The punctuation, it's beautiful!

I'm a little relieved to find that, when I first read the grandparent comment, I was able to parse it the same way as you have in your clarification.

Yes! So much better.

I have nothing against splitting infinitives, but "to once they have terms for something take for granted" is pretty extreme. It's likely to overflow the reader's stack. After fixing that, running an iteration of the "omit needless words" algorithm, and doing a bit of rephrasing, here's what I came up with:

There's a related problem: If they have terms for something, humans tend to think things that make syntactic sense actually have semantics behind them.

(Ninja edit: Some more needless words omitted, including a nominalization.)

(Edit 2: Here's a better nominalization link because it gives examples of when to use nominalizations, not just when not to use them.)

There's a related problem; Humans have a tendency to once they have terms for something take for granted that something that looks at a glance to make rough syntactic sense that it actually has semantics behind it.

Isn't this the same issue we see with surface analogies and cached thoughts?

I'm not sure. Cached thoughts generally make semantic sense. So I'm not sure this is the same thing. The surface analogy issue does seem closer though.

Or it might lead to formalizing how many layers of negation (I don't know what he means by quantification) it's safe to use, not to mention thinking about why one is using multiple layers of negation.

Why say "never fails to disappoint" if what you mean "is reliably excellent"?

A related question-- sometimes layers of negation are necessary to describe complex systems like biology or government-- something starts to happen, but something else limits it, and then another part of the system steps in to try to keep the limits from being overdone, and so on. I'm apt to lose track-- what's apt to help?

Why say "never fails to disappoint" if what you mean "is reliably excellent"?

You probably meant something more like 'never fails to excite' or some antonym of 'disappoint'. Perhaps a good example of too many layers of negation causing confusion.

Nancy was quoting the review given as an example of shooting yourself in the foot with too many layers in the linked Language Log post. The author of this review meant "is reliably excellent" and wrote "never fails to disappoint".

D'oh! If I'd read the linked content first, I'd have understood the context that was being quoted there.

When I see something like "a referendum to overturn a law repealing a ban on X" and get confused, one thing I do is count the negations. In my example there are three, so people who support the referendum are against X and vice versa. Even if there are nuances that simple negation-counting misses (like "always fails to verb" vs. "doesn't always verb successfully", which both have one), that gives me a basic framework that then lets me add the nuances back in without getting confused.
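As a toy illustration, the parity check might look something like this (the marker list here is hypothetical and nowhere near complete; real negations hide in prefixes, idioms, and context):

```python
# A sketch of the negation-counting heuristic described above.
# The marker list is an illustrative assumption, not a real lexicon.
NEGATION_MARKERS = {"overturn", "repeal", "repealing", "ban",
                    "not", "never", "fails", "against"}

def count_negations(phrase: str) -> int:
    words = phrase.lower().replace(",", "").split()
    return sum(1 for w in words if w in NEGATION_MARKERS)

# "a referendum to overturn a law repealing a ban on X" has three
# negations (odd parity), so supporters of the referendum oppose X.
n = count_negations("a referendum to overturn a law repealing a ban on X")
print(n, "odd" if n % 2 else "even")  # 3 odd
```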

[TimS]

I read the point as saying that language is capable of greater depth than humans. Das Nichts nichtet is a coherent statement despite the objection of the logical positivists, but it really is deep.

As an aside, I'm not persuaded that metaphysicians are saying anything useful. But the charge that their statements were incoherent is a stronger objection.

The interpretation of some of Heidegger's statements as incoherent isn't just something his enemies came up with; it is supported by other statements of Heidegger's (as Carnap notes in his criticism of Heidegger). I really am curious as to what coherent statement you think you can find in "the nothing itself nots."

[TimS]

Ok, so the statement is made as part of a mission to say something intelligent about the noumenon. In other words, Heidegger is trying to say something about what things are, totally independent of our perception of them. As I alluded to above, I think trying to grapple with perception-independent-thingness is . . . not a good use of one's time.

Anyway, Heidegger does lots of deep thinking about this problem, and ultimately says that there is "Nothing" as the basic characteristic of objects. To me, that's a plausible response to "it's turtles all the way down." At this point, Heidegger needs to explain how to get back from this to objects as we experience them. The answer is that the "Nothing" nothings. To me, that's like saying the "Nothing" verbs. There's no other word we could use, because (by hypothesis) all there is . . . is Nothing. If you pull in something else to act on Nothing, then it's the problem of Cain's wife all over again.

That's quite counter-intuitive. But so is the assertion that there is a set that contains only the set that contains no elements. Or worse, the set that contains (the set that contains only the set that contains no elements) AND the set that contains no elements.
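For the concretely minded, those sets are easy to build; here's a sketch using Python's frozenset (ordinary sets aren't hashable, so they can't contain each other). These happen to be the first three von Neumann ordinals.

```python
# The sets described above, as nested frozensets.
empty = frozenset()            # the set with no elements: {}
one = frozenset({empty})       # contains only the empty set: {{}}
two = frozenset({one, empty})  # contains {{}} and {}: {{{}}, {}}

print(len(empty), len(one), len(two))          # 0 1 2
print(empty in one, one in two, empty in two)  # True True True
```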

So, Heidegger may be wasting his time (my view). He said something quite counter-intuitive. It could easily be wrong. But I assert that it is not incoherent. That is, it makes an assertion with some content.

Could this be something similar to the principle that any language capable of doing a short list of things is Turing-Complete and can do anything any other Turing-Complete language can do? That is, might it be that preventing our language from having more potential than we can use requires extra, arbitrary restrictions?

EDIT: I meant to also say, "and to work for the purposes we need it for, human language has to be able to do all those things."

You don't necessarily add restrictions to a language to stop it from being Turing-Complete, you can just not give it the necessary axioms or whatever. I mean, in a regular language, there's no rule saying 'you can use all these regexps and atoms unless you're using them like this, because that would be Turing-complete'.
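To make the contrast concrete: balanced nesting of arbitrary depth is exactly the kind of thing no regular language can express, yet nothing in the formalism 'bans' it; the machinery is simply absent. A Python sketch (the depth-2 pattern is hand-built for illustration, and the inputs are assumed to contain only parentheses):

```python
import re

# A truly regular pattern can only match balanced parentheses up to
# some fixed depth; this one is hand-built to handle depth <= 2.
DEPTH_2 = re.compile(r"^(\((\(\))*\))*$")

def balanced(s: str, depth: int = 0) -> bool:
    """Recursion handles arbitrary nesting -- beyond any regular language."""
    if not s:
        return depth == 0
    if s[0] == "(":
        return balanced(s[1:], depth + 1)
    return depth > 0 and balanced(s[1:], depth - 1)

for s in ["()", "(())", "((()))"]:
    print(s, bool(DEPTH_2.match(s)), balanced(s))
# ()      True  True
# (())    True  True
# ((()))  False True   <- the regex runs out; the recursion does not
```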

For a human example, look at the reports about the Piraha language. It's not that they ban recursion out of superstitious dread of the infinite or something - it's apparently that they simply don't understand or use it.

You should be extremely careful when citing Piraha for anything, because it's highly controversial and the evidence is scant either way. For example, Everett found a phoneme in their sound system on his second or third trip that hadn't been there on the previous ones - apparently the tribe doesn't use that sound when talking to outsiders, because the other local tribes think it sounds silly. This isn't to say that they're hiding recursion from outsiders, only that there hasn't been enough fieldwork done with them to say anything with a high degree of certainty.

True enough. But on the strength of what we currently know about the Piraha, would you agree that they don't have rules which specifically ban or suppress or mention recursion in order not to use it? (That is, if the reports about the Piraha turn out to be true, would Normal_Anomaly’s argument be correct if we extend it to the Piraha language, that removing recursion/Turing-completeness "requires extra, arbitrary restrictions"?)

would you agree that they don't have rules which specifically ban or suppress or mention recursion in order not to use it?

Yes, absolutely. I don't even need to go past English to find structures that we don't use, for no apparent reason (i.e., the structure will get marked as 'odd' but not flat-out wrong by native speakers, and the meaning will be completely intelligible), so it's plausible to me that a culture might just not like recursion in their language for no apparent reason.

That's very interesting. I thought that all human languages were "Turing-Complete" because otherwise they wouldn't be able to do everything they were used for.

Humans don't compute unbounded loops in their spoken languages. :)

Next time, don't be shy about opening an open thread yourself! There is no group norm that only high-karma people may create them. http://lesswrong.com/r/discussion/lw/8nv/open_thread_december_2011/

I did not decide not to do so; it did not occur to me as a possibility. (I now recall that I once did create an Open Thread just as you suggest — though almost simultaneously with someone else, so I deleted mine.)

I meant that (my perception at the time of creating this post was that) the LW community no longer uses the open-thread format. Given the immediate success of yours, I conclude that I was wrong.

Steven Pinker talked about this (at unexcerptable length), from an evolutionary standpoint, at the end of Chapter 11 of The Language Instinct.