Epistemic status: a mental model which I have found picks out bullshit surprisingly well.
Idea 1: Parasitic memes tend to be value-claims, as opposed to belief-claims
By "parasitic memes" I mean memes whose main function is to copy themselves - as opposed to, say, actually providing value to a human in some way (so that the human then passes the meme on). Scott's old Toxoplasma of Rage post is a central example; "share to support X" is another.
Insofar as a meme is centered on a factual claim, the claim gets entangled with lots of other facts about the world; it's the phenomenon of Entangled Truths, Contagious Lies. So unless the meme tries to knock out a person's entire epistemic foundation, there's a strong feedback signal pushing against it if it makes a false factual claim. (Of course some meme complexes do try to knock out a person's entire epistemic foundation, but those tend to be "big" memes like religions or ideologies, not the bulk of day-to-day memes.)
But the Entangled Truths phenomenon is epistemic; it does not apply nearly so strongly to values. If a meme claims that, say, it is especially virtuous to eat yellow cherries from Switzerland... well, that claim is not so easily falsified by a web of connected truths.
Furthermore, value claims always come with a natural memetic driver: if X is highly virtuous/valuable/healthy/good/etc, and this fact is not already widely known, then it’s highly virtuous and prosocial of me to tell other people how virtuous/valuable/healthy/good X is, and vice-versa if X is highly dangerous/bad/unhealthy/evil/etc.
Idea 2: Transposons are ~half of human DNA
There are sequences of DNA whose sole function is to copy and reinsert themselves back into the genome. They're called transposons. If you're like me, when you first hear about transposons, you're like "huh that's pretty cool", but you don't expect it to be, like, a particularly common or central phenomenon of biology.
Well, it turns out that something like half of the human genome consists of dead transposons. Kinda makes sense, if you think about it.
Now suppose we carry that fact over, by analogy, to memes. What does that imply?
Put Those Two Together...
… and the natural guess is that value claims in particular are mostly parasitic memes. They survive not by promoting our terminal values, but by people thinking it’s good and prosocial to tell others about the goodness/badness of X.
I personally came to this model from the other direction. I’ve read a lot of papers on aging. Whenever I mention this fact in a room with more than ~5 people, somebody inevitably asks “so what diet/exercise/supplements/lifestyle changes should I make to stay healthier?”. In other words, they’re asking for value-claims. And I noticed that the papers, blog posts, commenters, etc, who were most full of shit were ~always exactly the ones which answered that question. To a first approximation, if you want true information about the science of aging, far and away the best thing you can do is specifically look for sources which do not make claims about diet or exercise or supplements or other lifestyle changes being good/bad for you. Look for papers which just investigate particular gears, like “does FoxO mediate the chronic inflammation of arthritis?” or “what’s the distribution of mutations in mitochondria of senescent cells?”.
… and when I tried to put a name on the cluster of crap claims which weren’t investigating gears, I eventually landed on the model above: value claims in general are dominated by memetic parasites.
I think this is true and good advice in general, but recently I've been thinking that there is a class of value-like claims which are more reliable. I will call them error claims.
When an optimized system does something bad (e.g. a computer program crashes when trying to use one of its features), one can infer that this badness is an error (e.g. caused by a bug). We could perhaps formalize this as saying that it is a deviation from how the system would ideally act, though I think that formalization is intractable in various ways, so I suspect a better one would be something along the lines of "there is a small, sparse change to the system which would massively improve this outcome". Either way, it's clearly value-laden.
The main way of reasoning about error claims is that an error must always be caused by an error: a bad outcome in the system's behavior traces back to a mistake somewhere inside the system. So, staying with the example of the bug, you typically first reproduce it, and then backchain through the code until you find a place to fix it.
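To make the reproduce-then-backchain pattern concrete, here is a minimal toy sketch in Python. Everything in it (the `mean` function, the particular bug) is hypothetical, invented purely for illustration; the point is just that the error claim ("it crashes on empty input") backchains to a specific error in the code, which a small, sparse change repairs.

```python
def mean(xs):
    # Hypothetical buggy function: crashes on empty input,
    # because len(xs) can be zero.
    return sum(xs) / len(xs)

# Step 1: reproduce the error claim ("mean crashes on empty lists").
try:
    mean([])
    reproduced = False
except ZeroDivisionError:
    reproduced = True

# Step 2: backchain from the crash site (the division) to the cause
# (nothing guards against an empty list), then make a sparse fix:
def mean_fixed(xs):
    if not xs:
        return 0.0  # or raise a clearer ValueError; a design choice
    return sum(xs) / len(xs)

print(reproduced)             # True: the bad outcome was reproducible
print(mean_fixed([]))         # 0.0: the sparse fix removes the bad outcome
print(mean_fixed([1, 2, 3]))  # 2.0: normal behavior is preserved
```

Note that the "error claim" here is objective in exactly the sense discussed above: the crash is a verifiable difference from how the program is supposed to work, and the fix is a small, sparse change rather than a redesign.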
For an intentionally designed system that's well-documented, error claims are often directly verifiable and objective, based on how the system is supposed to work. Error claims are also less subject to the memetic driver, since often it's less relevant to tell non-experts about them (though error claims can degenerate into less-specific value claims and become memetic parasites that way).
(I think there's a dual to error claims that could be called "opportunity claims", where one says that there is a sparse good thing which could be exploited using dense actions? But opportunity claims don't seem as robust as error claims are.)