How many times have you heard a claim from a somewhat reputable source like "only 28 percent of Americans are able to name one of the constitutional freedoms, yet 52 percent are able to name at least two Simpsons family members"?

Mark Liberman over at Language Log wrote a post showing how, even when such claims are based on actual studies, the methodology is biased to exaggerate ignorance:

The way it works is that the survey designers craft a question like the following (asked at a time when William Rehnquist was the Chief Justice of the United States):

"Now we have a set of questions concerning various public figures. We want to see how much information about them gets out to the public from television, newspapers and the like….
What about William Rehnquist – What job or political office does he NOW hold?"

The answers to such open-ended questions are recorded — as audio recordings and/or as notes taken by the interviewer — and these records are coded, later on, by hired coders.

The survey designers give these coders very specific instructions about what counts as right and wrong in the answers. In the case of the question about William Rehnquist, the criteria for an answer to be judged correct were mentions of both "chief justice" and "Supreme Court". These terms had to be mentioned explicitly, so all of the following (actual answers) were counted as wrong:

Supreme Court justice. The main one.
He’s the senior judge on the Supreme Court.
He is the Supreme Court justice in charge.
He’s the head of the Supreme Court.
He’s top man in the Supreme Court.
Supreme Court justice, head.
Supreme Court justice. The head guy.
Head of Supreme Court.
Supreme Court justice head honcho.

Similarly, the technically correct answer ("Chief Justice of the United States") would also have been scored as wrong, since it doesn't explicitly mention "Supreme Court" (I'm not certain whether that answer actually occurred in the survey responses).
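As a rough illustration of how literal that coding rule is, here's a minimal sketch (my own reconstruction; the real coding was done by human coders following written instructions, not by software):

```python
def coded_correct(answer: str) -> bool:
    """Both phrases must appear explicitly, per the coder instructions."""
    a = answer.lower()
    return "chief justice" in a and "supreme court" in a

answers = [
    "He is the Chief Justice of the Supreme Court",  # scored correct
    "He's the head of the Supreme Court.",           # scored wrong
    "Supreme Court justice head honcho.",            # scored wrong
    "Chief Justice of the United States",            # technically right, scored wrong
]
for a in answers:
    print(coded_correct(a), "-", a)
```

A rule this literal measures phrasing, not knowledge.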

If, every time you heard a claim of the form "Only X% of Americans know Y" you thought "there's something strange about that", then you get 1 rationality point. If you thought "I don't believe that", then you get 2 rationality points.

 


So, basically every survey has to deal with bothersome subjects, especially if it's a survey of teens or children (for different reasons).

I remember a survey given by some friends of mine for a school project; they had taught a lesson to a classroom of children and wanted to measure how much stuck. The survey answers were all on a 1-5 scale, where 1 was "I disagree strongly" and 5 was "I agree strongly."

One of the questions, put on there as a test to ensure the children understood the format, was "I eat breakfast with Martin Luther King Jr every morning." (The lesson mentioned him, among others.) They were expecting 1s, but the average answer was 2.

One of the questions, put on there as a test to ensure the children understood the format, was "I eat breakfast with Martin Luther King Jr every morning." (The lesson mentioned him, among others.) They were expecting 1s, but the average answer was 2.

Perhaps the "strongly" in the "disagree strongly" gloss is being understood to require an emotional reaction? It's not a phrase I'd normally use to describe an understanding that something I don't particularly care about is factually wrong.

I may be misremembering it: 1 might have been "false," 2 "mostly false," 3 "neither true nor false," 4 "mostly true," and 5 "true." I do remember that at the time I thought it was a disastrous showing that mostly invalidated the results of their study (or should have had a far more prominent role in their data analysis).
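Catch questions like that are normally used to screen respondents out before analysis. A minimal sketch of that kind of filter, with all field names and numbers invented:

```python
# Drop respondents who agree with an impossible statement before
# analyzing the substantive items. Field names and cutoff are invented.
responses = [
    {"catch": 1, "lesson_item": 4},
    {"catch": 2, "lesson_item": 5},
    {"catch": 5, "lesson_item": 3},  # likely misread the scale
]

CATCH_CUTOFF = 2  # anything above "disagree" on the catch item fails

valid = [r for r in responses if r["catch"] <= CATCH_CUTOFF]
print(f"kept {len(valid)} of {len(responses)} respondents")
print(sum(r["lesson_item"] for r in valid) / len(valid))  # mean of survivors
```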

One of the questions, put on there as a test to ensure the children understood the format, was "I eat breakfast with Martin Luther King Jr every morning."

Some of the children probably considered the possibility of an acausal breakfast with Martin Luther King Jr. You don't have to be in the same room or in the same moment to have an acausal breakfast with someone.

The sanity waterline is already rising, and some teachers are scared... :D

At least some public-ignorance surveys use multiple-choice questions, which do not suffer from the problem in the quoted text.

"only 28 percent of Americans are able to name one of the constitutional freedoms, yet 52 percent are able to name at least two Simpsons family members"

My first, almost-like-a-sort-of-trained-reflex reaction to reading something like the above:

"So okay, about 55% of whichever target sub-population this study targeted (default: probably students) watch or hear about The Simpsons often enough to remember at least two names. On the other hand, a lot of that sub-population probably named something as a constitutional freedom after being primed on some unrelated subject but that something wasn't a constitutional freedom, and only about 30% saw the trap and managed to remember an actual good answer."

In general, I'm pretty dubious of conclusions based on polls and questionnaires, and assign lower probability to both the author's and my own interpretation until I see the specifics of the methodology. I have trust issues.

Also, what incentives are there for answering truthfully? The alternative explanations provided at Language Log seem better, but I used to take these sorts of results as primarily being evidence for a high natural frequency of trolls in the sample population.

I seem to recall that studies investigating whether research participants perform better when incentivized usually find that they don't. My best guess as to why is that people are already surprisingly strongly motivated to do what the researcher wants them to do (remember the Milgram experiment!). I don't remember seeing any studies specifically of how incentives affect general knowledge tests; of course those could differ from other tasks researchers assign to people, but it would surprise me if that were the case.

Nate Silver blew up a public ignorance survey a few years ago, on the 538 blog.

http://www.fivethirtyeight.com/search/label/strategic%20vision

I would not be surprised to find more of that around.

Neat! I'll put less confidence in such surveys now. HOWEVER! Many of the questions in such surveys are plain-ol' 50/50, and I have no idea how they could be very biased.

As an example, here is a scan from Carpini and Keeter's What Americans Know About Politics and Why It Matters. You'll notice that, in table 2.7, only 42% of Americans knew that the Soviets suffered more deaths than Americans during World War 2. Seems like a coin flip to me, unless they asked, "Who had the most deaths during World War 2?" and ignored all answers besides the US and USSR. I still think Americans are pretty durn ignorant of most political and historical matters. (Myself included, for many of the questions. I have no idea who my state's congressmen are, and I don't really care.)
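If that really was a two-option question, the standard correction-for-guessing formula (my framing, not the book's) makes 42% look even worse, since blind guessing alone would get 50%:

```python
# Estimated share who actually know the answer, after removing lucky
# guesses: (observed - chance) / (1 - chance). The 42% is from the
# table; the four-option case is my own what-if.
def knowledge_rate(observed: float, n_options: int) -> float:
    chance = 1.0 / n_options
    return (observed - chance) / (1.0 - chance)

print(knowledge_rate(0.42, 2))  # -0.16: below chance on a true 50/50
print(knowledge_rate(0.42, 4))  # ~0.23: plausible if four choices were offered
```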

But then, I've never been one to compare this to modern cultural knowledge. I see that as irrelevant. Asking about fresh memory vs. deep memory doesn't tell you about political knowledge per se. Responses should be compared against questions of similar difficulty.

If, every time you heard a claim of the form "Only X% of Americans know Y" you thought "there's something strange about that", then you get 1 rationality point. If you thought "I don't believe that", then you get 2 rationality points.

Well, only if this was indeed a common methodological flaw. I'm not ready to break out the champagne yet.

The Language Log post also emphasizes that mass media reports of such surveys sometimes quote numbers completely different from the actual survey results, presumably to increase the value of the news story. So:

In the passage quoted above, Robin Young states the survey result incorrectly — actually, 73% of respondents, not 28%, were able to name one of the constitutional freedoms – and she spins it in a doubtful direction to boot, because only 65% were able to name one of the Simpsons characters.

In the cited New York Times article, Diane Ravitch is referring to the 2010 NAEP 12th grade U.S. History test, in which 82%, not 2%, of 12th graders correctly identified Brown v. Board of Education.

In addition to discounting "public ignorance" surveys, we should discount surveys and other factual information reported through such media.

This Language Log post gives a much better idea of what's going on. 28% was the number for "more than one" of the constitutional freedoms, which was later commonly misquoted as "one or more". And, of course, there's the matter of picking out the point of the distribution that is most striking.
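To see how much work that wording does, here's a made-up distribution whose endpoints match the quoted figures (only the 73% and 28% totals are real; the breakdown is invented):

```python
# Hypothetical shares of respondents by number of freedoms named.
# Only the two cumulative totals below match the actual survey.
share_by_count = {0: 0.27, 1: 0.45, 2: 0.20, 3: 0.06, 4: 0.015, 5: 0.005}

at_least_one = sum(s for n, s in share_by_count.items() if n >= 1)
more_than_one = sum(s for n, s in share_by_count.items() if n >= 2)
print(f"'one or more':   {at_least_one:.0%}")   # 73%: the flattering number
print(f"'more than one': {more_than_one:.0%}")  # 28%: the scary number
```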

In other words: nobody is actually lying about the survey results. Instead, the falsehood is distributed along the chain: the press release states the results in a deliberately misleading way, and subsequent reports on it simply aren't careful to avoid being misled.

The post you linked to argues that the poll and its original press release were deliberately designed to spin results and encourage misunderstanding, and that the error in subsequent reports was a deliberate goal on the part of the pollsters.

Deliberate spinning of statistics isn't different from lying in method or result; the only difference is that they cover themselves by making sure their words are literally true.

Lying and deliberately misleading aren't quite the same thing, although they have the same effect; I would expect the press to do the latter but not the former. So when you implied that the mass media reports did lie, I was confused and decided to dig further.

One practical difference is that, if lying is considered bad but things-close-to-lying aren't, it takes a tertiary source to completely replace the truth with a lie.

They're the same thing consequentially, but different under deontological and virtue ethics, so there's a signalling convention that one is better than the other.

I wasn't skeptical enough of these. -1 point to self. Thanks, Nisan. (:

I actually don't find anything strange about that. I am reasonably well-educated and know a lot of things, and I have no idea who the US Supreme Court's chief justice is (though if I needed to know, it would take me about 2 seconds to find out).


The problem is that there's no reasonable way to grade the quoted rejects as false. If you aren't a lawyer (Edit: but maybe even if you are*), there's really nothing about labeling John Roberts as Chief Justice of the Supreme Court that is more useful than labeling him as "the justice in charge of the Supreme Court." The error is roughly on par with asking "what does 2 + 3 equal?" and accepting "V" but rejecting "IIIII".

In short, I have dramatically adjusted downward my belief in the reliability of public-ignorance surveys.


On reflection, I think some of the answers could be considered wrong in a technical sense not relevant to the question being asked. For example, "in charge" implies a bit more power over Supreme Court decisions than Roberts actually possesses.

* In the old version, I stated that the difference wouldn't matter even to a lawyer.

I haven't. I expected they were making mistakes like this one, and haven't seen anything indicating they generally make mistakes in this direction rather than the other.

It makes sense to adjust downward your belief that they are reliable, if you thought they were very reliable before. But this shouldn't be enough to indicate they're reliably getting it wrong in a particular direction.

If the ultimate goal is to compare knowledge of the Supreme Court to knowledge of the Simpsons, I would expect the surveys to reliably be wrong in the more sensational direction.
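A toy Bayes update shows why one documented example only moves the needle so far (numbers are mine, purely illustrative):

```python
# Toy Bayes update. H1 = surveys err systematically toward exaggerating
# ignorance; H2 = surveys err randomly in either direction.
prior_odds = 1.0          # start indifferent between H1 and H2
p_obs_given_h1 = 1.0      # a directional bias always exaggerates
p_obs_given_h2 = 0.5      # a random error exaggerates half the time

posterior_odds = prior_odds * (p_obs_given_h1 / p_obs_given_h2)
print(posterior_odds)     # 2.0: one example only doubles the odds
```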

I remember when, a few years ago, the TV news ran a segment about how 40-70% (I forget the exact number) of the people interviewed said that Beethoven is a dog. I was frustrated at how shocked the other people in the room were.