Claim: memeticity in a scientific field, i.e. which ideas and practices spread, is mostly determined not by the most competent researchers in the field, but by the roughly-median researchers. We’ll call this the “median researcher problem”.
Prototypical example: imagine a scientific field in which the large majority of practitioners have a very poor understanding of statistics, p-hacking, etc. Then lots of work in that field will be highly memetic despite trash statistics, blatant p-hacking, etc. Sure, the most competent people in the field may recognize the problems, but the median researchers don’t, and in aggregate it’s mostly the median researchers who spread the memes.
(Defending that claim isn’t really the main focus of this post, but here are a couple pieces of legible evidence which weakly favor it:
- People did try to sound the alarm about poor statistical practices well before the replication crisis, yet practices did not change; so at least some people did see the problem, and they were not memetically successful at the time. The claim is more general than just statistics-competence and replication, but at least in the case of the replication crisis the model must be at least somewhat true.
- Again using the replication crisis as an example: note the very wide (roughly 1 standard deviation or more) gap in average IQ between students in most fields which turned out to have terrible replication rates and students in most fields which turned out to have fine replication rates.
… mostly, though, I believe the claim from seeing how people actually interact with research and decide whether to spread it.)
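To make the “in aggregate it’s mostly the median researchers who spread the memes” point concrete, here’s a toy simulation (purely illustrative; the numbers, the normal-distribution assumption, and the sharing rule are all made up): each researcher passes along any work whose flaws they can’t detect, so a paper’s reach is set by how many researchers’ bars it clears. Deleting the most competent researchers barely changes which work spreads; raising the median bar changes it a lot.

```python
import random

random.seed(0)

N = 1000

# Each researcher's "bar": the minimum statistical quality at which they can
# no longer spot a paper's flaws. Most researchers are mediocre, a few are sharp.
bars = [random.gauss(0.4, 0.15) for _ in range(N)]

def spread(paper_quality, population):
    """Fraction of a population who would pass the paper along: anyone whose
    bar is below the paper's quality can't see its flaws, so they share it."""
    return sum(paper_quality >= b for b in population) / len(population)

# A flawed paper: the most competent quarter of the field can see the problems,
# the median researcher can't.
flawed_paper = 0.5

print(f"spread of the flawed paper in the field:          {spread(flawed_paper, bars):.0%}")

# Remove the most competent 10% of researchers entirely: the flawed paper
# spreads about as well (slightly better, since its critics are gone).
without_top = sorted(bars)[: int(0.9 * N)]
print(f"same, with the top 10% of researchers removed:    {spread(flawed_paper, without_top):.0%}")

# Raise every researcher's bar (and hence the median) instead: spread collapses.
more_competent = [b + 0.2 for b in bars]
print(f"same, with every researcher's bar raised by 0.2:  {spread(flawed_paper, more_competent):.0%}")
```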
Two interesting implications of the median researcher problem:
- A small research community of unusually smart/competent/well-informed people can relatively-easily outperform a whole field, by having better internal memetic selection pressures.
- … and even when that does happen, the broader field will mostly not recognize it; the higher-quality memes within the small community are still not very fit in the broader field.
In particular, LessWrong sure seems like such a community. We have a user base with probably-unusually-high intelligence, community norms which require basically everyone to be familiar with statistics and economics, fuzzier community norms explicitly intended to avoid various forms of predictable stupidity, and our own internal meme population. It’s exactly the sort of community which can potentially outperform whole large fields, because of the median researcher problem. On the other hand, that does not mean those fields are going to recognize LessWrong as a thought leader or whatever.
Personally, I am quite pleased with the field of parapsychology. For example, they took a human intuition and experience ("Wow, last night when I went to sleep I floated out of my body. That was real!") and operationalized it into a testable hypothesis ("When a subject capable of out-of-body experiences floats out of their body, they will be able to read random numbers written on a card otherwise hidden to them."). They went and actually performed this experiment, with a decent degree of rigor, writing the results down accurately, and got an impossible result: one subject could read the card (Tart, 1968). A great deal of effort quickly went into further exploration (including military attention, as in The Men Who Stare at Goats), and it turned out that the experiment didn't replicate, even though everyone involved seemed to genuinely expect it to. In the end, no, you can't use an out-of-body experience for remote viewing, but I'm really glad someone did the obvious experiments instead of armchair philosophizing.
This report (https://digital.library.unt.edu/ark:/67531/metadc799368/m2/1/high_res_d/vol17-no2-73.pdf) is a great read from someone who obviously believes in the metaphysical, yet does a great job designing and running experiments and accurately reporting their observations; it's really only a small ding against the author that they draw the wrong larger conclusions in the end.