In my article on trusting expert consensus, I talked about the value of having hard data on the opinions of experts in a given field. The unspoken subtext was that you should be careful of claims of expert consensus that don't have hard data to back them up. I've joked that when a philosopher says there's a philosophical consensus, what he really means is "I talked to a few of my friends about this and they agreed with me."

What's often really happening, though (at least in philosophy), is that the "consensus" reflects the opinions of a particular academic clique. A sub-group of experts in the field spend a disproportionate amount of time talking to each other, and end up convincing themselves they represent the consensus of the entire profession. A rather conspicuous example of this is what I've called the Plantinga clique on my own blog: theistic philosophers who've convinced themselves that the opinions of Alvin Plantinga represent the consensus of philosophy.

But it isn't just theistic philosophers who do this. When I was in school, it was still possible to hear fans of Quine claim that everyone knew Quine had refuted the analytic-synthetic distinction. Post-PhilPapers survey, hopefully people have stopped claiming this. And one time, I heard a philosophy blogger berating scientists for being ignorant of the findings in philosophy that all philosophers agree on. I asked him for examples of claims that all philosophers agree on, and when he gave some, I responded with examples of philosophers who rejected them. "Ah," he said, "but they don't count. Let me tell you whose opinions matter..." (I'm paraphrasing, but that was what it amounted to.)

I strongly suspect this happens in other disciplines: supposed "consensuses of experts" are really just the opinions of one clique within a discipline. Thus, I tend to approach claims of consensus in any discipline with skepticism when they're not backed up by hard data. But I don't actually know of verifiable examples of this problem outside of philosophy. Have people with backgrounds in other disciplines noticed things like this?

Well, I'm a linguist, and yes, we do have that. Actually, it works a lot like the philosophy of religion thing. Researchers within the subdiscipline that deals with X believe X is really important. But outside that subdiscipline/clique are a lot of people who have concluded that X is not important and/or doesn't really exist. Naturally, the people who believe in X publish a lot more about X than the people who think X is a stinking pile of dwagon crap. This can lead to outsiders getting the impression that the field has a consensus position about X=awesome.

The best example I know is the debate about linguistic universals. Chomskyan universalists think that all human languages are fundamentally alike, that there is a genetically determined "universal grammar" which shapes their structure. The Chomskyans are a very strong and impressive clique, and a lot of non-linguists get the impression that what they say is what every serious linguist believes. But this is not so. A lot of us think the "universal grammar" stuff is vacuous nonsense which we can't be bothered with.

Starting a big fight with the Chomskyans has not been a good career move for the past half-century, but this may be changing. In 2009, a couple of linguists started a shitstorm with the article "The Myth of Language Universals: Language diversity and its importance for cognitive science". The abstract starts like this:

Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of linguistic organization. This fundamentally changes the object of enquiry from a cognitive science perspective.

Suddenly a lot of people are willing to die on this hill, so you can find a very ample supply of recent articles on both sides of this.

Question: my understanding is that the fact that humans manage to learn language so readily in early childhood, when compared with how bad we are at objectively simpler tasks like arithmetic, does suggest we have some kind of innate, specialized "language module", even if the Chomskyan view gets some important details wrong. Would that be generally accepted among linguists, or is it contentious? And in the latter case, why would it be contentious?

(I ask because this understanding of language is one of the main building blocks in what I understand about human intelligence.)

Great questions. I would say that a majority of linguists probably accept the fast-childhood-acquisition argument for the innateness of language but a lot depends on how the question is phrased. I would agree that language is innate to humans in the weak and banal sense that humans in any sort of natural environment will in short order develop a complex system of communication. But I don't think it follows that we have a specialized language module - we may be using some more generic part of our cognitive capacity. I'm not sure if we really have the data to settle this yet.

The whole thing is tricky. How fast is fast? If humans definitely had no language module and had to learn language using a more generic cognitive ability, how fast would we expect them to do it? Five years? Ten years? Fifty years? Never? I don't know of any convincing argument ruling out that the answer would be "pretty much the speed at which they are actually observed to learn it".

And what qualifies as language, anyway? Deaf children can learn complex sign languages. Is that just as innate as spoken language or are they using a more generic cognitive ability? My one-year-old is a whiz on the iPad. Is he using the language module or a more generic cognitive ability? Is it a language module or a symbolic processing module? Or an abstract-thinking module?

I'm personally very skeptical that the brain has any sort of neatly defined language module - is that really Azathoth's style? There is a lot more to say about this, maybe there'd be enough interest for a top-level post.

I'm personally very skeptical that the brain has any sort of neatly defined language module - is that really Azathoth's style? There is a lot more to say about this, maybe there'd be enough interest for a top-level post.

I would look forward to reading that post.

But I don't think it follows that we have a specialized language module - we may be using some more generic part of our cognitive capacity. I'm not sure if we really have the data to settle this yet.

The whole thing is tricky. How fast is fast? If humans definitely had no language module and had to learn language using a more generic cognitive ability, how fast would we expect them to do it? Five years? Ten years? Fifty years? Never? I don't know of any convincing argument ruling out that the answer would be "pretty much the speed at which they are actually observed to learn it".

Honestly, I suspect the answer is "never"... unless the "more general capacity" is only somewhat more general. Languages seem to be among the most complicated things most people ever learn, with the main competition for the title of "the most complicated" coming from things like "how to interact socially with other humans."

And what qualifies as language, anyway? Deaf children can learn complex sign languages. Is that just as innate as spoken language or are they using a more generic cognitive ability?

What I've read on this is that the way deaf children learn sign language is extremely similar to how most children learn spoken language.

There is a lot more to say about this, maybe there'd be enough interest for a top-level post.

I would totally support a top-level post.

Honestly, I suspect the answer is "never".

You are not alone - that is the orthodox Chomskyan position. Chomsky has argued that grammar is unlearnable given the limited data available to children, and therefore there must be an innate linguistic capacity. This is the celebrated "poverty of the stimulus" argument. Like most of Chomsky's ideas, it is armchair theorizing with little empirical support.

I would totally support a top-level post.

Given the number of replies and upvotes, that seems warranted. I'll try to find the time.

Reading the article:

Certainly, humans are endowed with some sort of predisposition toward language learning. The substantive issue is whether a full description of that predisposition incorporates anything that entails specific contingent facts about natural languages.

So this makes it sound like the only thing the authors are rejecting is the idea of a system with certain rigid assumptions built in - as opposed to, say, a more or less Bayesian system that has a prior which favors certain assumptions without making those assumptions indefeasible. Am I reading that right?

Yes, you're reading that right. They address this even more explicitly at the beginning of section 2.2 on page 17, and especially in footnotes 5 and 6.

As for the statement that humans have "some sort of predisposition toward language learning", that is weak enough for even me to agree with it. We are social animals, with innate desires to communicate and the intelligence to do so in complex ways.
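
To put the "defeasible prior" reading in concrete terms, here is a minimal sketch (my own toy numbers, not anything from the paper): a learner starts out with a strong bias toward one hypothesis about its language, yet enough contrary evidence still overturns it.

```python
# Toy illustration of a prior that favors a hypothesis without making it
# indefeasible. Hypothesis A is the innately favored one; each observation
# is assumed to be three times as likely under the rival hypothesis B.
from math import log, exp

prior_A = 0.99                      # strong innate bias toward A
likelihood_ratio_B_over_A = 3.0     # evidential weight of each observation

def posterior_A(n_observations: int) -> float:
    """Posterior probability of A after n observations favoring B."""
    log_odds_A = (log(prior_A / (1 - prior_A))
                  - n_observations * log(likelihood_ratio_B_over_A))
    return 1.0 / (1.0 + exp(-log_odds_A))

for n in range(0, 10, 2):
    print(n, round(posterior_A(n), 3))
# The bias shapes the learner's early guesses, but after a handful of
# contrary observations the posterior swings over to favor B.
```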

But I don't think it follows that we have a specialized language module - we may be using some more generic part of our cognitive capacity. I'm not sure if we really have the data to settle this yet.

There was an autistic savant, Chris, whose skill was in learning languages, and who was unable to learn a fake language put together by researchers that used easy but non-attested types of rules (e.g. reversing the whole sentence to form a question). What do you make of it?

I've always thought it was fairly weak evidence in the sense that autistic people often have all kinds of other things potentially going on with them, that it's a sample size of 1, and so on.

But I don't think it follows that we have a specialized language module - we may be using some more generic part of our cognitive capacity.

As an ignorant layman, I'd expect a large part of our so-called cognitive capacity to be a poorly hacked-and-generalized language module.

humans manage to learn language so readily in early childhood, when compared with how bad we are at objectively simpler tasks like arithmetic

Children hear adults speaking all the time, but they rarely hear adults doing maths.

I've wondered about that. Someone should try writing an iPad app that a toddler can play with to have their brain bombarded by math, and see if that leads to math coming as naturally to them as language. I doubt it would work but it might be worth trying.

It seems that simply bombarding the brain isn't sufficient, even for language, and that social interaction is required (see this study), so that playing math games with the child would be a better idea.

How does the brain decide whether it thinks of something as a social interaction? I would assume that computer/video games with significant social components hack into that, so hacking into it to teach math should be doable.

I believe the way it works for language is that one can learn it from television, but not radio.

Nope. It needs to be something with feedback.

That makes intuitive sense, at least in hindsight, since TV provides ample non-linguistic information that you can learn to associate with the linguistic information.

I think this book may be of some interest to you, Chris. It was the textbook recommended for a CogSci class I did, dealing with how cognitive systems develop in response to their environment.

Also a follow-up question: I remember reading that children who do not learn any language by the age of 7-9 forever lose the capability to acquire a language (examples were children brought up by animals and maybe a couple of cases of child abuse). Is that actually true?

This is a difficult issue. There are very few documented instances of feral children and it is hard to isolate their language deficiency from their other problems.

What we do have a lot of documentation on is children with various types of intellectual disabilities. My four-year-old daughter is autistic and has an IQ of 50. Her language is around the level of a 24-month-old (though possibly with a bigger vocabulary and worse grammar). Does she have a deficient language module? That doesn't really seem like a great explanation for anything. Her mental deficiencies are much broader than that. If there were a lot of children with deficient language but otherwise normal development, that would lend some support to a language module model. But this isn't really the case. If your language is borked, that usually means that other things are borked too.

Another thing about my daughter: She's made me realize how smart humans are. A retarded 4-year-old is still really, really smart compared to other species. My daughter certainly has far more sophisticated language than this guy did. I bet she could beat a chimp in other cognitive tasks as well.

Try looking into Joshua Tenenbaum's cognitive-science research. As I recall, he's a big Bayesian (so LW will love him), and he published a paper about probabilistic learning of causality models in humans. If I had to bet, I would say that evolution came up with a learning system for us that can quickly and dirtily learn many different possible kinds of causality, since the real thing works too quickly for evolution to hardcode a model of it into our brains. Also, the real thing involves assumptions like The Universe Is Lawful that aren't even evolutionarily useful to non-civilized pre-human apes -- it doesn't look lawful to them!

We could then have evolved language out of our ability to learn models of causality, as a way of communicating statements in our learned internal logics. This would certainly explain the way that verbal thinking contains lots more ambiguity, incoherence and plain error than formalized (mathematical) thinking.
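
For readers who want a concrete taste of what "probabilistic learning of causality models" can mean, here is a minimal sketch loosely in the spirit of Tenenbaum-style Bayesian causal induction (the specific model and numbers are my own illustrative assumptions, not taken from his papers): compare two causal structures by how well each predicts some contingency data.

```python
# Toy Bayesian comparison of two causal structures: does candidate cause C
# influence effect E, or not? Each structure is scored by its marginal
# likelihood under uniform Beta(1,1) priors on the unknown rates.
from scipy.special import betaln

def log_marginal(successes: int, failures: int) -> float:
    """Log marginal likelihood of Bernoulli data under a Beta(1,1) prior."""
    return betaln(successes + 1, failures + 1) - betaln(1, 1)

# Hypothetical contingency data: how often E occurs with and without C.
e_with_c, no_e_with_c = 18, 2        # E occurs 18/20 times when C is present
e_without_c, no_e_without_c = 3, 17  # E occurs  3/20 times when C is absent

# H0: a single rate for E regardless of C (no causal link).
log_h0 = log_marginal(e_with_c + e_without_c, no_e_with_c + no_e_without_c)

# H1: separate rates for E with and without C (C makes a difference).
log_h1 = (log_marginal(e_with_c, no_e_with_c) +
          log_marginal(e_without_c, no_e_without_c))

print("log Bayes factor favoring a causal link:", round(log_h1 - log_h0, 2))
```

A learner with machinery like this can entertain many candidate causal structures and let the data pick among them, which is the flavor of learning system the comment above is gesturing at.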

I am unclear on what observations I would differentially expect, here.

That is, if I observe that languages vary along dimension X, presumably a Chomskyan says "the universal grammar includes a parametrizable setting for X with the following range of allowed values" and an anti-Chomskyan simply says "languages can vary with respect to X with the following range of allowed values." The anti-Chomskyan wins on Occamian grounds (in this example at least), but is this really a hill worth dying on?

The question is whether the variety in human languages is constrained by our biology or by general structural issues which any intelligence which developed a communication system would come up against. This should have implications for cognitive science and maybe AI design.

Note that the anti-Chomskyans are not biology-denying blank-slaters. Geoffrey Sampson, who has written a good book about this, is a racist reactionary.

Ah! Yes, OK, that makes sense. Thanks for clarifying.

I don't have a horse in this race, but I studied linguistics as an undergrad in the 80s so am probably an unexamined Chomskyist by default. That said, I certainly agree that if such general structural constraints exist (which is certainly plausible) then we ought to identify and study them, not just assume them away.

Is there a language that doesn't have any kind of discrete words and concepts? 'cause I'm pretty sure there are possible intelligences that could construct a communication system that uses only approximate quantitative representations (configuration spaces or replaying full sensory experience) instead of symbols.

This is probably why, in my experience, innateness issues of any kind also don't play a role in the everyday practice of most linguists.

The people who study the issue of natural languages being somehow interestingly constrained by biology are, incidentally, not normal linguists, but a mixture of computer scientists, mathematical linguists, and psychologists, who look at the formal properties of natural language grammars and their learnability properties. And if there are such constraints, there is of course the further question of whether we're dealing with something that is specific to language or a general cognitive principle.

Being a much more ordinary linguist, I don't even know what the state of that field is. So basically, I don't really get what all the fuss is about.

A more significant divide among linguists seems to me to be between the people who do formally well-defined stuff and those who don't. Ironically, a lot of Chomskyans fall into the latter category.

Also, there's much more impressive developmental evidence for certain kinds of things being innate than language acquisition.

Oh there are many examples of this throughout science.

In my own area (machine learning), a decade ago there used to be a huge clique of researchers whose "consensus" was that ANNs were dead, SVM+kernel methods were superior, and that few other ML techniques mattered. Actually, the problem was simply that they were training ANNs improperly. Later researchers showed how to properly train ANNs, and the work of the Toronto machine intelligence group especially established that ANNs were quite superior to SVMs for many tasks.

In econometrics, subsequence time series (STS) clustering was widely thought to be a good approach for analyzing market movements. After decades of work and hundreds of papers on this technique, Keogh et al. showed in 2005 that the results of STS clustering are actually indistinguishable from noise!
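
As a rough illustration of what goes wrong, here is a toy sketch of STS clustering (my own minimal version; Keogh et al. use many datasets and a formal meaningfulness measure). The punchline of their paper is that the cluster centers you get from a structureless random walk look essentially the same as those from highly structured data.

```python
# Toy sketch of subsequence time-series (STS) clustering: slide a window
# along a series, z-normalize each subsequence, and run k-means on them.
import numpy as np
from sklearn.cluster import KMeans

def sts_cluster_centers(series, window=32, k=3, seed=0):
    """k-means cluster centers of all sliding-window subsequences of `series`."""
    w = np.array([series[i:i + window] for i in range(len(series) - window)])
    w = (w - w.mean(axis=1, keepdims=True)) / (w.std(axis=1, keepdims=True) + 1e-9)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(w).cluster_centers_

rng = np.random.default_rng(0)
random_walk = np.cumsum(rng.normal(size=2000))    # structureless data
periodic = np.sin(np.linspace(0.0, 200.0, 2000))  # highly structured data

# Keogh et al.'s point: the centers come out as smooth, roughly sinusoidal
# shapes in both cases, even though the inputs are nothing alike.
print(sts_cluster_centers(random_walk)[0].round(2))
print(sts_cluster_centers(periodic)[0].round(2))
```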

Another one, in physics, was pointed out by Lee Smolin in his book The Trouble with Physics. In string theory, the consensus opinion was, wrongly, that Mandelstam had proven string theory finite. Actually, he had only eliminated some particular forms of infinities. The work on establishing string theory as finite is still ongoing.

ANNs were dead, SVM+kernel methods were superior, and that few other ML techniques mattered. Actually, the problem was simply that they were training ANNs improperly.

Well... I suppose that characterization is true, but only if you allow the acronym "ANN" to designate a really quite broad class of algorithms.

It was true that multilayer perceptrons trained with backpropagation were inferior to SVMs. It is also true that deep belief networks trained with some kind of Hintonian contrastive divergence algorithm are probably better than SVMs. If you tag both the multilayer perceptrons and the deep belief networks with the "ANN" label, then it is true that the consensus in the field reversed itself. But I think it is more precise just to say that people invented a whole new type of learning machine.

(I'm sure you know all this, I'm commenting for the benefit of readers who are not ML experts).
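
For readers who'd like to see what this kind of head-to-head comparison looks like in code, here is a minimal sketch using scikit-learn's off-the-shelf SVM and multilayer perceptron on a toy problem. It is purely illustrative: it won't reproduce the historical result, and deep belief networks trained with contrastive divergence aren't part of scikit-learn at all.

```python
# Minimal comparison of an RBF-kernel SVM and a small backprop-trained MLP
# on a toy nonlinear classification problem.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_moons(n_samples=2000, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "SVM (RBF kernel)": SVC(kernel="rbf", C=1.0, gamma="scale"),
    "MLP (backprop)": MLPClassifier(hidden_layer_sizes=(32, 32),
                                    max_iter=2000, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```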

This is a different type of problem. OP is talking about people saying there is a consensus, when actually there's a lot of disagreement. You're talking about times where there was (some kind of) a consensus, but that consensus was wrong.

That's not clear to me from reading the comment. passive_fist, can you clarify?

In all the cases I described except the last, it wasn't a consensus at all, but a perceived consensus within a subset of the community.

I apologize then, that wasn't how I read it. When you said "huge clique" and "widely thought," I thought you were saying that the majority of the field falls into those groups.

Unfortunately in my field (programming languages? I guess?) we just outright get ignored by the mainstream of our own field, even while they crow about how important we supposedly are.

Just last Thursday night I attended a talk in which an Esteemed Elderly Researcher complained, when asked for complaints, that computer scientists had not made enough progress in the verified construction of programs and in better programming languages since he was young, and remarked that everyone should have been listening to Alan Kay.

When I attempted to ask, "What about Simon Peyton Jones, Martin Odersky, and the formal PL community?", he basically acknowledged their existence, ignored our entire research field, and went back to saying not enough progress had been made in programming languages.

Still not sure if I asked wrong (raising one's hand and being called on is socially permissible, yes?), or if the mainstream CS research community (certainly including 100% of my own current department, much to my anguish and dismay on signing up as a grad student under the naive impression we had a good three or so PL people here) is just deliberately bent on ignoring the formal study of programming languages and its massive advancements in recent years.

Expert consensus thus represents a concerted effort to ignore expert consensus.

Here is a quick tip for seeing whether an apparent academic consensus for claim C on topic X is really just a clique: find a review article on X that assumes C, with a lot of space dedicated to describing at length (and with many citations) all the work that has been done supporting C and building on it. If the C consensus is a clique and the review is minimally honest, it will probably have a small section describing a different perspective, with only one or two citations, which, when tracked down, will prove to be review articles on X as long and impressive as the one you are reading but describing a non-C position.

If only review articles were more common in philosophy...

If the C consensus is a clique and the review is minimally honest,

Some disciplines are so bad that this is not always a safe assumption.

I've joked that when a philosopher says there's a philosophical consensus, what he really means is "I talked to a few of my friends about this and they agreed with me."

I came across a term to describe this phenomenon in linguistics regarding grammaticality judgements: Hey Sallys. The idea being, you form some theory about what's grammatical based on what sounds good to you, you think that you ought to check to make sure you're not just being idiosyncratic, and so you wander out into the grad room/house/water cooler/etc and say "Hey X, how does this sound to you?"

Having said that, there's a paper somewhere showing that individual linguists' grammaticality judgements are just as good as taking a large survey in the vast majority of cases.

Having said that, there's a paper somewhere showing that individual linguists' grammaticality judgements are just as good as taking a large survey in the vast majority of cases.

That's roughly what I've heard too. Sadly, language seems to be unusual in this regard, and in most fields asking a few of your friends is not a reliable method.

Insofar as I am an optimist about the scientific method, I put my trust in the existence of objective incontrovertible tests for different theories in the field. All humans tend to play politics with beliefs, so there will be consensuses and contrarians and cliques in every field. But in fields that agree on tests, those consensuses will correspond to the actual outcome of the tests; while in fields with no tests, politics will overwhelm any signal produced by armchair reasoning.

Mathematicians have excellent tests for the correctness of a proof, so they rarely disagree for long. Physicists have good empirical predictions, so they only disagree about some things.

Linguists might agree on a test, but most of the time that test is not doable in practice, so they can't really know if there are biological universals of language; and so they keep on disagreeing about theories more (I predict) than they disagree about any actually observable fact.

And philosophers, by definition, mostly work on things that have no empirical tests, at least not actually executable ones. So they are used to disagreeing, and also to sometimes agreeing (coming to consensus), without actual objective proof of the thing they agree on. And that's why I expect that "how many philosophers believe X" is a poor test for "is X true", more so than for mathematicians or physicists or even linguists.

Mathematicians have excellent tests for the correctness of a proof, so they rarely disagree for long.

Seemingly longstanding disagreements in quantitative fields exist, see e.g.:

http://andrewgelman.com/2009/07/05/disputes_about/

edit: this disagreement is about an actual meaningful question with an answer, not things like B vs F (which are arguments about taste, as far as I can tell).

How does that dispute stand today? Is it still running, have the parties reached agreement, or are they not talking to each other?

I will see what I can find out. My guess is there was no resolution.

Mathematicians have excellent tests for the correctness of a proof, so they rarely disagree for long.

I'm not a mathematician, but my impression is that this has gotten less true, as the typical proof published in mathematics journals has gotten more convoluted and harder to check. Mathematicians are now often forced to rely on trusting their colleagues to know whether a proof is correct or not. See here.

Not that this makes mathematics any worse off than other fields. I'm pretty sure all fields these days require people to trust their colleagues.

Interesting. What about machine proof checking? Why don't mathematicians publish all results in a formal notation (in addition to the human-oriented one) that allows all proofs to be checked, and entered into an Internet repository available to automated proof assistants?

For the same reason not all software is written in Coq/Agda/other proof systems: it would be incredibly expensive, slow, and demand very rare skills.

Because it takes a lot of extra time and work to formalize a proof to the level where it can be automatically checked.
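
To give a flavor of what that extra work looks like, here is a tiny Lean sketch (my own example): even a fact a mathematician would wave through in one line has to be unfolded into steps the proof checker can verify, and realistic theorems take vastly more of this.

```lean
-- A machine-checkable proof that 0 + n = n, spelled out as an explicit
-- induction so that Lean's kernel can verify every step.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]
```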

This thing can happen even in mathematics or theoretical CS, where there can be a gradual growth of a group of people researching something which gets ignored by and/or has no relevance to the mainstream community.

A good example is institutional model theory, whose practitioners think it is the ultimate theory of abstract logic, even though its accomplishments remain to be seen.

There are academic cliques in physics, too, in some sub-disciplines, though not as pronounced. They cite mostly each other, have their own conferences and such.

I actually thought of physics as an example of this, for quantum interpretations: you sometimes see claims that MWI is an absurd theory pushed by a few fringe physicists and popularizers and cranks, or alternately, that every good physicist takes MWI seriously. What do the occasional small surveys reveal? Something in between: a minority or perhaps plurality holding to MWI with agnosticism on the part of many - MWI being now a respectable position to hold but far from dominant or having won.

MWI is not really a good example, but Bohmian mechanics is. These guys have their own publications, they cite each other, they have special conferences even.

Most of the cliques are not visible unless you are in the subfield, however.

I am reminded of a series of documents uploaded to the arXiv earlier this year, each one reporting the results of a survey taken at a distinct conference, and supposedly revealing a "snapshot" of the participants' attitudes towards foundational issues (such as interpretations). Although the first document seems to be making some fairly strong claims about academic consensus, the following two are a little more conservative. The final one says something very similar to the original post here; their results suggest that,

'there exist, within the broad field of "quantum foundations", sub-communities with quite different views, and that (relatedly) there is probably even significantly more controversy about several fundamental issues than the already-significant amount revealed in the earlier poll.'

http://arxiv.org/abs/1301.1069

http://arxiv.org/abs/1303.2719

http://arxiv.org/abs/1306.4646

Some surveys reveal that; other surveys reveal one of the two positions you mentioned in the first sentence of your comment.

Hell, it'd be easy to interpret my own little subfield of analysis in this light.

As someone who is not an expert on anything, I'd be curious to hear some experts weigh in on how opinions within any given field compare (and contrast) with politics, the infamous mind-killer.

Are there similar mechanisms and biases at play? Do people act tribally and use arguments as soldiers in philosophy & linguistics, for instance?

There is a particular way in which they (i.e. linguists) do, but it's not comparable to politics, because it's actually productive. Sometimes you have two competing approaches to a phenomenon, and then people try to extend their own approaches as far as possible, show that all the data the other side wants to explain can be explained in their own terms, etc. This, however, seems to work as a heuristic, in that it makes us explore all the strengths and weaknesses of theories, and it's also sensible insofar as uniting everything under one view would be more parsimonious. At some point, we might decide that we're straining the theories too far and that actually both of them are valid for some cases and the phenomena are less unified than we thought at first. (Or it might turn out that the two approaches lead to notational variants of the same theory when worked out fully.)

In philosophy:

Philosophy can definitely be a bit tribal, but it's generally nothing like politics. You might say what typically happens is you see one-half of the usual arguments-as-soldiers behavior: all arguments on the opposing side must be stopped. But since it's generally recognized that good philosophical arguments are hard to come by, many philosophers would be happy to have one good argument for their view.