In Reversed Stupidity Is Not Intelligence, Eliezer writes:
To psychoanalyze these people’s flaws, even correctly, and even if they constitute a numerical majority of the people talking about “quantum,” says nothing at all about whether the smartest people who believe in “quantum” might perhaps be justified in doing so ... there are large numbers of embarrassing people who believe in flying saucers, but this cannot possibly be Bayesian evidence against the presence of aliens, unless you believe that aliens would suppress flying-saucer cults, so that we are less likely to see flying-saucer cults if aliens exist than if they do not exist. So even if you have truly and correctly identified a cluster of people who believe X for very bad, no good, awful, non-virtuous reasons, one does not properly conclude not-X, but rather calls it all not-evidence.
I think the statement makes a correct point - don't dismiss an idea just because a few proponents are stupid - but is too strong as written. In some cases, we can derive information about the truth of a proposition by psychoanalyzing reasons for believing it.
There are certain propositions that people are likely to assert regardless of whether or not they are true. Maybe they're useful for status disputes, or part of a community membership test, or just synchronize well with particular human biases. "X proves the existence of God" commonly gets asserted whether or not X actually proves the existence of God. Anything that supports one race, gender, political party, or ideology over another is also suspect. Let's call these sorts of propositions "popular claims". Some true propositions might be popular claims, but popular claims are popular whether or not they are true.
Some popular claims are surprising. Without knowing anything about modern society, one might not predict that diluting chemicals thousands of times to cure diseases, or claiming the government is hiding alien bodies, would be common failure modes. You don't know these are popular claims until you hear them.
If a very large group of people make a certain assertion, and every time you check you find it to be false, you now have very good evidence that it's a popular claim: a proposition that people will very often assert even when it's false.
Normally, when someone asserts a proposition, you assume they have good evidence for it - in Bayesian terms, the probability that they would assert it is higher if there is evidence than if there is not evidence. But people assert popular claims very often even when there is no evidence for them, so someone asserting a popular claim provides no (or little) evidence for it, leaving you back at whatever its prior is.
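Here is a minimal Python sketch of that bookkeeping, with made-up illustrative numbers: an assertion moves the posterior only as far as the ratio between how often the claim gets asserted when it's true and how often it gets asserted when it's false.

```python
# Toy Bayesian update: how much does hearing someone assert a claim move us?
# All numbers are made up for illustration.

def posterior(prior, p_assert_if_true, p_assert_if_false):
    """P(claim is true | someone asserted it), by Bayes' rule."""
    numerator = p_assert_if_true * prior
    return numerator / (numerator + p_assert_if_false * (1 - prior))

prior = 0.01  # prior probability of the claim

# Ordinary claim: people mostly assert it only when they have evidence,
# so an assertion is strong evidence.
print(posterior(prior, p_assert_if_true=0.5, p_assert_if_false=0.01))  # ~0.34

# Popular claim: people assert it about as often whether or not it's true,
# so the likelihood ratio is close to 1 and the posterior stays near the prior.
print(posterior(prior, p_assert_if_true=0.5, p_assert_if_false=0.45))  # ~0.011
```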
Time for an example: suppose two respected archaeologists (who happen to be Mormon) publish two papers on the same day. The first archaeologist claims to have found evidence that Native Americans are descended from ancient Israelites. The second archaeologist claims to have found evidence that Zulus are descended from Australian aborigines.
On the face of it, these two claims are about equally crazy-sounding. But I would be much more likely to pay attention to the claim that the Zulus are descended from aborigines. I know that the Mormons have a bias in favor of believing Indians are descended from Israelites, and probably whatever new evidence the archaeologist thinks she's found is just a product of that same bias. But no one believes Zulus are descended from Australians. If someone claims they are, she must have some new and interesting reason to think so.
(To put it another way, we expect a Mormon to privilege the hypothesis of Israelite descent; her religion has already picked it out of hypothesis-space. We don't expect a Mormon to privilege the hypothesis of Australian descent, so it's more likely that she came to it honestly.)
If I were then to learn that there was a large community of Mormons who interpreted their scripture to say that Zulus were descended from Australians, I would consider it much more likely that the second archaeologist was also just parroting a religious bias, and I would no longer be quite as interested in reading her paper.
In this case, reversed stupidity is intelligence - learning that many people believed in an Australian-Zulu connection for religious reasons decreases my probability that the posited Australian-Zulu connection is real. I can never go lower than whatever my prior for an Australian-Zulu connection would be, but I can discount a lot of the evidence that might otherwise take me above my prior.
So in summary, a proposition asserted for stupid reasons can raise your probability that it is the sort of proposition that people assert for stupid reasons, which in turn can lower your probability that the next person to assert it will have smart reasons for doing so. Reversed stupidity can never bring the probability of an idea lower than its prior, but it can help you discount evidence that would otherwise bring it higher.
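Continuing the hypothetical numbers from the sketch above, here is roughly what that discounting looks like; again, the figures are made up purely for illustration.

```python
# Same toy posterior() function as in the sketch above.
def posterior(prior, p_assert_if_true, p_assert_if_false):
    numerator = p_assert_if_true * prior
    return numerator / (numerator + p_assert_if_false * (1 - prior))

prior = 0.001  # hypothetical prior for a Zulu-Aborigine connection

# Before learning of any religious motive: an archaeologist asserting the
# connection looks like substantial evidence.
print(posterior(prior, p_assert_if_true=0.5, p_assert_if_false=0.005))  # ~0.09

# After learning the claim is religiously popular, we expect it to be
# asserted often even if it's false, so the paper barely moves us off the prior.
print(posterior(prior, p_assert_if_true=0.5, p_assert_if_false=0.4))    # ~0.0012

# As long as the claim is asserted at least as often when true as when false,
# the likelihood ratio is >= 1 and the posterior never dips below the prior;
# the evidence can only be discounted back down toward it.
```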