I asked a neuroscientist about this, who in turn asked a friend who studies language disorders of the brain at Cambridge. Here is what he had to say:
"This fellow sounds like he must be an undergrad, or a non-psychologist. No-one believes that the area traditionally defined as Broca's area is the only frontal area involved in language processing or even in Broca's aphasia, and even less for Wernicke's. Those anatomical labels are only used as heuristics; it's much more accurate (and common) when writing on the subject to use anatomical names like 'left inferior frontal gyrus pars triangularis / pars orbitalis / pars opercularis'. I think the general understanding is that you rarely get straightforward, unambiguous cases of patients who definitely only show symptoms of one specific type of aphasia anyway. More to the point, it's not the case that all researchers have the same point of view on these things, and it's certainly not unusual to see papers that claim to have a new, different explanation for something. They're ten-a-penny. From a systems neuroscience perspective it's much more meaningful to think about things in terms of disconnection syndromes and hodological principles - i.e. the effects of certain types and locations of damage on the properties of the system/network as a whole, rather than be strictly localizationist about the causes of aphasias."
The theory that different areas of the tongue tasted different things - the Tongue Map - has been pretty thoroughly debunked, but it lived for about a century. This seems like something fairly easily testable.
Bullet lead analysis gained scientific acceptance and stuck around for forty years; many still view it as good science, although its probative value may have been overstated.
Bruise aging was accepted for a shorter period of time, but appears almost worthless. This was another testable hypothesis that lasted longer than it should have.
I don't have particularly smart things to say about why these errors lasted while others were destroyed by truth. Perhaps someone else does.
You're just saying that because of their position on grober crimate change.
In fairness, I think the issue is way overplayed. How exactly would it interfere with wire metal forming methods or helping people adapt to Office? I don't get it.
Well, if it gets too warm, all the world's paperclips could melt into undifferentiable masses.
Okay, what the hell is up with the moderators here? I wasn't calling "User:Alicorn" ridiculous for suggesting that paper clips can melt. I mean, come on, give me a little credit here. Not to brag, but I think I know a little about this kind of thing...
Clipper, C. "On the Influence of High-Temperature Environments on Failure Modes in Self-Locking Removable Fasteners", Journal of Non-Destructive Fastening, Vol. 3, Issue 2
Ahem. Anyway, what I was saying is, yes, paperclips can melt, but you need a LOT more than grober crimate change to melt them all into an undifferentiable mass, okay? Like, even if you set every coal vein on fire, AND filtered out the particulate matter to prevent cooling effects, you STILL wouldn't make the planet hot enough to melt all paper clips together for over a hundred years.
That is what is ridiculous.
I am reminded of a paper by Simkin and Roychowdhury in which they argued, on the basis of an analysis of misprints in scientific citations, that most scientists don't actually read the papers they cite, but instead just copy the citations from other papers. From this they conclude that the fact that some papers are widely cited in the literature can be explained by random chance alone.
Their evidence is not without flaws - the scientists might have just copied the citations for convenience, despite having actually read the papers. Still, we can easily imagine...
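The citation-copying mechanism is easy to play with in simulation. Below is a minimal sketch, not Simkin and Roychowdhury's actual model: the function name, parameter values, and details (each paper either copies a citation from a random earlier paper's reference list, or cites a uniformly random earlier paper) are my own illustrative assumptions. Even this toy version produces the rich-get-richer effect where a few papers end up cited far more than average purely by chance.

```python
import random

def simulate_citations(n_papers=2000, refs_per_paper=5, copy_prob=0.9, seed=0):
    """Toy citation-copying model: each new paper fills its reference list
    either by copying a citation from a randomly chosen earlier paper
    (probability copy_prob) or by citing a uniformly random earlier paper
    (i.e. actually "finding" it on its own)."""
    rng = random.Random(seed)
    reference_lists = [[]]   # paper 0 cites nothing
    citation_counts = [0]
    for paper in range(1, n_papers):
        refs = set()
        target = min(refs_per_paper, paper)
        while len(refs) < target:
            source = rng.randrange(paper)
            if reference_lists[source] and rng.random() < copy_prob:
                refs.add(rng.choice(reference_lists[source]))  # copied citation
            else:
                refs.add(rng.randrange(paper))                 # found independently
        for r in refs:
            citation_counts[r] += 1
        reference_lists.append(sorted(refs))
        citation_counts.append(0)
    return citation_counts

counts = simulate_citations()
print("mean citations per paper: %.2f" % (sum(counts) / len(counts)))
print("most-cited paper received: %d citations" % max(counts))
```

Note that no paper in this simulation is "better" than any other; the heavy skew toward a few highly cited papers comes entirely from the copying step.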
Hmm, I was thinking about writing something similar to your article here.
A pet peeve of mine is the Observer Effect in QM: not once, on any skeptical radio show, in any internet animation, or in any high school class (and I am wagering it is no different at university) have I ever heard it specifically pointed out that the "observer" does not mean a conscious observer - not even when it is a PhD talking to a lay audience.
And thus we have movies such as What the * do we know, and the myths keep spinning.
If you really want to see a failure to incorporate new information, look at nutrition science. Things may be changing, but the FDA and so on still seem to be saying fat and dietary cholesterol are bad, despite a rather significant absence of evidence, and a rather strong presence of contrary evidence. They're getting around to addressing added sugars and high-fructose corn syrup, but it took maybe half a century.
This is pretty easy to understand, though. There are a lot of commercial interests that restrict what the government can say without stepping on ...
Is it because this information is considered unimportant? Hardly; it's probably the only functional association you will find in every course and every book on the brain.
This isn't my experience at all, unless I'm misunderstanding your meaning. In my cog sci class we studied the associated anatomy for psychopathology, ADHD, dyslexia and probably a couple others I can't think of, but I don't remember anything about Wernicke's or Broca's areas.
...Possibly not; this 2006 paper on Broca's area by a renowned expert does not mention it. (In its defense, it r
I think it has to do with the fact that we are all cognitive misers. Just think about the comments that this post has generated. Do you really think that every commenter has carefully thought about the subject before articulating an answer? Everyone here is probably forwarding his own pet theory, myself included.
We all probably carry a lot of inaccurate beliefs with us and never bother to check them. Reevaluating our beliefs costs time and effort, keeping the status quo in our mind is effortless. So we only reevaluate when there are strong reasons to do s...
I had a university physics textbook that claimed that those in the Middle Ages believed that the Earth was flat for "religious reasons". This myth has been going strong since the 19th century. What could possibly explain the persistence of these kinds of beliefs (from textbook authors no less!)?
I'm not sure if anyone has posted this yet, but the whole area of forensic "science" is actually on very shaky foundations.
http://www.popularmechanics.com/technology/military_law/4325774.html
Freudian psychoanalysis is (sadly) still the dominant version of applied psychology in many countries (Argentina is one).
Freud and Lacan are still very big influences in postmodern philosophy (and by association, much of the Humanities), in spite of the fact that their theories are either demonstrably false or unfalsifiable.
I was stunned to read Freudian ...
I'm not sure how a reliable updating process would work. You can't keep checking everything you thought you knew in case expert opinion has changed on some of it.
Updates have to use the same channels as new information, and are competing with it.
I'd settle for a reliable method for suspecting that people have just made things up (maximum heart rate as a simple calculation, that people generally should drink a lot more water than they want, people only use 10% of their brains, Eskimos have a huge number of words for snow, Victorian women had ribs removed to make more extreme corsets possible), but I'm not sure that even that exists.
Yeah, it happens from time to time. Sometimes these mistakes cause errors in basic things to be taught. I think schools especially are quite slow on the uptake of new ideas -- they have syllabuses and courses and textbooks, and it's quite a lot of trouble to change them, so they usually only get updated every couple years at best.
For example, schools often teach that light is both a particle and a wave, or that it's sometimes a particle and sometimes a wave, or that it has 'particle-wave duality'. They do this because in order to explain that light is just...
This permeates history as well, specifically high-school textbooks. Lies My Teacher Told Me is an excellent read and myth-pops a few common history goofs.
The last Society for Neuroscience conference had 35,000 people attend. There must be at least 100 research papers coming out per week. Neuroscience is full of "known things" that most neuroscientists don't know about unless it's their particular area.
Then an obvious way to improve the situation would be for somebody knowledgeable to update the Wikipedia entries, wouldn't it? It would actually be interesting to see how good Wikipedia is as a vector for spreading updated knowledge.
While we're talking about Al Gore, the meme that global warming has a serious chance of destroying the world won't end.
Since it's science, I think a discussion of Kuhn's paradigm shifts would be relevant here.
People, maybe especially scientists, don't like accepting new information until so much new data builds up that a new paradigm is required.
Another possible explanation is that there is so much scientific data (good and especially bad) being produced and published that it's almost better to wait a few years before believing anything new you read and entering it into the textbooks.
The whole idea of Wikipedia is that people put new knowledge into it. Unfortunately, Wikipedia has quite a few social problems that discourage academics from participating in it.
Scholarpedia.org and Citizendium.org are projects that try to get it right.
When you ask yourself whether Al Gore invented the internet, the answer isn't straightforward, because you have to ask what you mean by the term. Inventing the internet wasn't only a technical problem; it also had to do with getting the regulation right. Rereading GURPS Cyberpunk always reminds me that it's quite a success that sending a megabyte to another continent costs you no more than sending it to your neighbor, while an intercontinental phone call does cost more. Al Gore claims credit for producing a regulatory environment that allowed the internet as we know it today.
Many magicians have specifically stated that people have reported seeing things in their act that they didn't do. James (The Amazing) Randi tells one such story:
...Many years back, I appeared on the NBC-TV Today Show doing a "survival" stunt in which I was sealed into a metal coffin in a swimming pool in an admitted and since-regretted attempt to out-do Harry Houdini, who had performed the stunt at the same location back in 1926. Yes, I beat his time, but I was much younger than he had been when he performed it. I had listened to an agent who assured me that this was the way to go. I have been ever since trying to forget my disrespect for Houdini. But this digression is not pertinent to the main story here.
A week following that TV show, I was standing out on Fifth Avenue in a pouring rain, supervising through a window the arrangement of a display of handcuffs and other paraphernalia that I'd loaned to a bank for an eye-catching advertisement. My raincoat collar was up about my ears, and I could thus not be easily recognized. I was astonished when an NBC director, Paul Cunningham, who had been in charge of my swimming-pool appearance, happened by. We were adjacent to the NB
We're all familiar with false popular memes that spread faster than they can be stomped out: You only use 10% of your brain. Al Gore said he invented the internet. Perhaps it doesn't surprise you that some memes in popular culture can't be killed. But does the same thing happen in science?
Most of you have probably heard of Broca's aphasia and Wernicke's aphasia. Every textbook and every college course on language and the brain describes the connection between damage to these areas, and the speech deficits named after them.
Also, both are probably wrong. Both areas were mistakenly associated with their aphasias because they are near or surrounded by other areas which, when damaged, cause the aphasias. Yet our schools continue teaching the traditional, erroneous story, including a lecture in 9.14 at MIT given in 2005. Both the Wikipedia entry on Wernicke's aphasia and the Wikipedia entry on Broca's aphasia are still in error; the Wikipedia entry on Wernicke's area has got it straight.
Is it because this information is considered unimportant? Hardly; it's probably the only functional association you will find in every course and every book on the brain.
Is it because the information is too new to have penetrated the field? No; see the dates on the references below.
In spite of this failure in education, are the experts thoroughly familiar with this information? Possibly not; this 2006 paper on Broca's area by a renowned expert does not mention it. (In its defense, it references many other studies in which damage to Broca's area is associated with language deficits.)
So:
References
Bogen, J. E., & Bogen, G. M. (1976). Wernicke's region—where is it? Annals of the New York Academy of Sciences, 280, 834–843.
Dronkers, N. F., Shapiro, J. K., Redfern, B., & Knight, R. T. (1992). The role of Broca's area in Broca's aphasia. Journal of Clinical and Experimental Neuropsychology, 14, 52–53.
Dronkers, N. F., Redfern, B. B., & Knight, R. T. (2000). The neural architecture of language disorders. In E. Bizzi & M. S. Gazzaniga (Eds.), The new cognitive neurosciences (2nd ed., pp. 949–958). Cambridge, MA: MIT Press.
Dronkers, N. F., et al. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition, 92, 145–177.
Mohr, J. P. (1976). Broca's area and Broca's aphasia. In H. Whitaker (Ed.), Studies in neurolinguistics. New York: Academic Press.