Claim: memeticity in a scientific field is mostly determined, not by the most competent researchers in the field, but instead by roughly-median researchers. We’ll call this the “median researcher problem”.

Prototypical example: imagine a scientific field in which the large majority of practitioners have a very poor understanding of statistics, p-hacking, etc. Then lots of work in that field will be highly memetic despite trash statistics, blatant p-hacking, etc. Sure, the most competent people in the field may recognize the problems, but the median researchers don’t, and in aggregate it’s mostly the median researchers who spread the memes.
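(As a concrete illustration of the p-hacking failure mode: below is a minimal simulation sketch, with the 20-outcome study design, group sizes, and the |t| > 2 cutoff chosen purely for illustration rather than taken from any particular field. Measuring many outcomes on pure noise and reporting whichever one clears p < 0.05 produces a "finding" in roughly 1 − 0.95^20 ≈ 64% of null studies.)

```python
import numpy as np

rng = np.random.default_rng(0)

def run_null_study(n_per_group=30, n_outcomes=20):
    """Simulate a study on pure noise: two groups, many measured outcomes,
    'publish' if any outcome clears roughly p < 0.05 (|t| > 2)."""
    a = rng.normal(size=(n_per_group, n_outcomes))
    b = rng.normal(size=(n_per_group, n_outcomes))
    se = np.sqrt(a.var(0, ddof=1) / n_per_group + b.var(0, ddof=1) / n_per_group)
    t = (a.mean(0) - b.mean(0)) / se
    return np.any(np.abs(t) > 2.0)

rate = np.mean([run_null_study() for _ in range(2000)])
print(f"Fraction of pure-noise studies with a 'significant' result: {rate:.2f}")  # ~0.6
```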

(Defending that claim isn’t really the main focus of this post, but a couple pieces of legible evidence which are weakly in favor:

  • People did in fact try to sound the alarm about poor statistical practices well before the replication crisis, and yet practices did not change, so clearly at least some people did in fact see the problem and were in fact not memetically successful at the time. The claim is more general than just statistics-competence and replication, but at least in the case of the replication crisis it seems like the model must be at least somewhat true.
  • Again using the replication crisis as an example, you may have noticed the very wide (like, 1 sd or more) average IQ gap between students in most fields which turned out to have terrible replication rates and most fields which turned out to have fine replication rates.

… mostly, though, the reason I believe the claim is from seeing how people in fact interact with research and decide to spread it.)

Two interesting implications of the median researcher problem:

  • A small research community of unusually smart/competent/well-informed people can relatively-easily outperform a whole field, by having better internal memetic selection pressures.
  • … and even when that does happen, the broader field will mostly not recognize it; the higher-quality memes within the small community are still not very fit in the broader field.

In particular, LessWrong sure seems like such a community. We have a user base with probably-unusually-high intelligence, community norms which require basically everyone to be familiar with statistics and economics, we have fuzzier community norms explicitly intended to avoid various forms of predictable stupidity, and we definitely have our own internal meme population. It’s exactly the sort of community which can potentially outperform whole large fields, because of the median researcher problem. On the other hand, that does not mean that those fields are going to recognize LessWrong as a thought-leader or whatever.


In academic biomedicine, at least, which is where I work, it’s all about tech dev. Most of the development is based on obvious signals and conceptual clarity. Yes, we do study biological systems, but that comes after years, even decades, of building the right tools to get a crushingly obvious signal out of the system of interest. Until that point all the data is kind of a hint of what we will one day have clarity on rather than a truly useful stepping stone towards it. Have as much statistical rigor as you like, but if your methods aren’t good enough to deliver the data you need, it just doesn’t matter. Which is why people read titles, not figure footnotes: it’s the big ideas that really matter, and the labor going on in the labs themselves. Papers are in a way just evidence of work being done.

That’s why I sometimes worry about LessWrong. Participants who aren’t professionally doing research and spend a lot of time critiquing papers over niche methodological issues may be misallocating their attention, or searching under the spotlight. The interesting thing is growth in our ability to measure and manipulate phenomena, not the exact analysis method in one paper or another. What’s true w... (read more)

2Garrett Baker
I don’t see how this is any evidence against John’s point. Presumably the reason you need such crushingly obvious results which can be seen regardless of the validity of your statistical tool before the field can move on is because you need to convince the median researchers. The sharp researchers have predictions about where the field is going based on statistical evidence and mathematical reasoning, and presumably can be convinced of the ultimate state far before the median, and work toward proving or disproving their hypotheses, and then once it’s clear to them, making the case stupidly obvious for the lowest common denominator in the room. And I expect this is where most of the real conceptual progress lies. Even in the world where, as you claim, this is a marginal effect, if we could speed up any given advance in academic biomedicine by a year, that is an incredible achievement! Many people may die in that year who could’ve been saved had the median not wasted time (assuming the year saved carries over to clinical medicine).

It's not evidence, it's just an opinion!

But I don't agree with your presumption. Let me put it another way. Science matters most when it delivers information that is accurate and precise enough to be decision-relevant. Typically, we're in one of a few states:

  • The technology is so early that no level of statistical sophistication will yield decision-relevant results. Example: most single-cell omics in 2024 that I'm aware of, with respect to devising new biomedical treatments (this is my field).
  • The technology is so mature that any statistics required to parse it are baked into the analysis software, so that they get used by default by researchers of any level of proficiency. Example: Short read sequencing, where the extremely complex analysis that goes into obtaining and aligning reads has been so thoroughly established that undergraduates can use it mindlessly.
  • The technology's in a sweet spot where a custom statistical analysis needs to be developed, but it's also so important that the best minds will do that analysis and a community norm exists that we defer to them. Example: clinical trial results.

I think what John calls "memetic" research is just areas where the topics or themes a... (read more)

4Garrett Baker
The argument seems to be about this stage, and from what I've heard clinical trials indeed take so much more time than is necessary. But maybe I've only heard about medical clinical trials, and actually academic biomedical clinical trials are incredibly efficient by comparison. It also sounds like "community norm exists that we defer to [the best minds]" requires the community to identify who the best minds are, which presumably involves critiquing the research outputs of those best minds according to the standards of the median researcher, which often (though I don't know about biomedicine) ends up being something crazy like h-index or number of citations or number of papers or derivatives of such things.
3DirectedEvolution
It’s not necessary for each person to personally identify the best minds on all topics and exclusively defer to them. It’s more a heuristic of deferring to the people those you trust most defer to on specific topics, and calibrating your confidence according to your own level of ability to parse who to trust and who not to.

But really these are two separate issues: how to exercise judgment in deciding who to trust, and the causes of research being “memetic.” I still say research is memetic not because mediocre researchers are blithely kicking around nonsense ideas that take on an exaggerated life of their own, but mainly because of politics and business ramifications of the research.

The idea that wine is good for you is memetic both because of its way of poking at “established wisdom” and because the alcohol industry sponsors research in that direction. Similar for implicit bias tests, which are a whole little industry of their own.

Clinical trials represent decades of investment in a therapeutic strategy. Even if an informed person would be skeptical that current Alzheimer’s approaches are the way to go, businesses that have invested in it are best served by gambling on another try and hoping to turn a profit. So they’re incentivized to keep plugging the idea that their strategy really is striking at the root of the disease.
4Garrett Baker
I really feel like we're talking past each other here, because I have no idea how any of what you said relates to what I said, except the first paragraph. As for that, what you describe sounds worse than a median researcher problem, instead sounding like a situation ripe for group think instead!
8DirectedEvolution
Yes, I agree it's worse. If ONLY a better understanding of statistics by PhD students and research faculty were at the root of our cultural confusion around science.

I'm not sure the median researcher is particularly important here, relative to, say, the median lab leader.

The median voter theorem works precisely because everyone's vote counts equally, but if you have a lab/research group leader who disincentivizes bad research practices, then you should theoretically get a lab with good research practices.

In practice, lab leaders are often people who Goodhart the incentives, which results in the current situation.

LessWrong has a chance to be better exactly because it is outside the current system of perverse incentives, although it has its own bad incentives.

I had the thought while reading the original post that I recall speaking to at least one researcher who, pre-replication crisis, was like "my work is built on a pretty shaky foundation as is most of the research in this field, but what can you do, this is the way the game is played". So that suggested to me that plenty of median researchers might have recognized the issue but not been incentivized to change it.

Lab leaders aren't necessarily in a much better position. If they feel responsibility toward their staff, they might feel even more pressured to keep gaming the metrics so that the lab can keep getting grants and its researchers good CVs.

2quetzal_rainbow
I agree that lab leaders are not in a much better position; I just think that lab leaders causally screen off the influence of subordinates, while incentives in the system causally screen off the lab leaders.

I am not sure about the median researcher. Many fields have a few "big names" that everybody knows and whose opinions have disproportionate weight.

1mattwelborn
I agree. To me, it seems that the big names mostly determine the paradigm and standard practices of their field. I think where we see the "median researcher problem" is in the auxiliary skills, such as statistics in social science. The big names may have blind spots or they just might not have anything to say in these areas. The result is that median skill and ideas fill in the gap.
4johnswentworth
The big names do tend to have disproportionate weight, but they're few in number, and when a big name promotes something the median researcher doesn't understand everyone just kind of shrugs and ignores them. Memeticity selects who the big names are, much more so than vice-versa.

My default assumption for any story that ends with "And this is why our ingroup is smarter than everyone else and people outside won't recognize our genius" is that the story is self-serving nonsense, and this article isn't giving me any reason to think otherwise.

A "userbase with probably-high intelligence, community norms about statistics and avoiding predictable stupidity" describes a lot of academia. And academia has a higher barrier to entry than "taking the time to write some blog articles". The average lesswrong user doesn't need to run an RCT before airing their latest pet theory for the community, so why would it be so much better at selectively spreading true memes than academia is?

I would need a much more thorough gears-level model of memetic spread of ideas, one with actual falsifiable claims (you know, like when people do science) before I could accept the idea of LessWrong as some kind of genius incubator.

community norms which require basically everyone to be familiar with statistics and economics,

I think this is too strong. There are quite a few posts that don't require knowledge of either one to write, read, or comment on. I'm certain that one could easily accumulate lots of karma and become a well-respected poster without knowing either.

Raemon

Yeah, I didn't read this post and come away with "and this is why LessWrong works great", I came away with a crisper model of "here are some reasons LW performs well sometimes", but more importantly "here is an important gear for what LW needs to work great."

7johnswentworth
Our broader society has community norms which require basically everyone to be literate. Nonetheless, there are jobs in which one can get away without reading, and the inability to read does not make it that much harder to make plenty of money and become well-respected. These statements are not incompatible.
4Kaj_Sotala
Hmm... let me rephrase: it doesn't seem to me like we would actually have a clear community norm for this, at least not one strong enough to ensure that the median community member would actually be familiar with stats and econ.
6jimmy
There's no norm saying you can't be ignorant of stats and read, or even post about things not requiring an understanding of stats, but there's still a critical mass of people who do understand the topic well enough to enforce norms against actively contributing with that illiteracy. (E.g. how do you expect it to go over if someone makes a post claiming that p=0.05 means that there's a 95% chance that the hypothesis is true?) Taking it a step further, I'd say my household "has norms which basically require everyone to speak English", but that doesn't mean the little one is quite there yet or that we're gonna boot her for not already meeting the bar. It just means that she has to work hard to learn how to talk if she wants to be part of what's going on. Lesswrong feels like that to me in that I would feel comfortable posting about things which require statistical literacy to understand, knowing that engagement which fails to meet that bar will be downvoted rather than getting downvoted for expecting to find a statistically literate audience here.

Show me a field where replication crises tear through, exposing fraud and rot and an emperor that never had any clothes, a field where replications fail so badly that they result in firings and polemics in the New York Times and destroyed careers- and then I will show you a field that is a little confused but has the spirit and will get there sooner or later.

What you really need to look out for are fields that could never, on a conceptual level, have a devastating replication crisis. LessWrong sometimes strays a little close to this camp.

gwern

Show me a field where replication crises tear through, exposing fraud and rot and an emperor that never had any clothes, a field where replications fail so badly that they result in firings and polemics in the New York Times and destroyed careers- and then I will show you a field that is a little confused but has the spirit and will get there sooner or later.

So... parapsychology? How'd that work out? Did they have the (ahem) spirit and get there sooner or later?

Hastings

Personally I am quite pleased with the field of parapsychology. For example, they took a human intuition and experience ("Wow, last night when I went to sleep I floated out of my body. That was real!") and operationalized it into a testable hypothesis ("When a subject capable of out of body experiences floats out of their body, they will be able to read random numbers written on a card otherwise hidden to them.") They went and actually performed this experiment, with a decent degree of rigor, writing the results down accurately, and got an impossible result: one subject could read the card (Tart, 1968). A great deal of effort quickly went into further exploration (including military attention with the men who stare at goats etc) and it turned out that the experiment didn't replicate, even though everyone involved seemed to genuinely expect it to. In the end, no, you can't use an out of body experience to remotely view, but I'm really glad someone did the obvious experiments instead of armchair philosophizing.

https://digital.library.unt.edu/ark:/67531/metadc799368/m2/1/high_res_d/vol17-no2-73.pdf is a great read from someone who obviously believes in the metaphysical, and then does a great job designing and running experiments and accurately reporting their observations, and so it's really only a small ding against them that the author draws the wrong larger conclusions in the end.

ROM

a field where replications fail so badly that they result in firings and polemics in the New York Times and destroyed careers-

A field can be absolutely packed with dreadful research and still see virtually no one getting fired. Take, for instance, the moment a prominent psychologist dubbed peers who questioned methodological standards as “methodological terrorists.” It’s the kind of rhetoric that sends a clear message: questioning sloppy methods isn’t just unwelcome; it’s practically heretical.

Claim: memeticity in a scientific field is mostly determined, not by the most competent researchers in the field, but instead by roughly-median researchers. [...] Sure, the most competent people in the field may recognize the problems, but the median researchers don’t, and in aggregate it’s mostly the median researchers who spread the memes.

 

This assumes the median researchers can't recognize who the competent researchers are, or otherwise don't look to them as thought leaders.

I'm not arguing that this isn't often the case, just that it isn't alw... (read more)

zoop

Eh, feels wrong to me. Specifically, this argument feels over-complicated.

As best I can tell, the predominant mode of science in replication-crisis affected fields is that they do causal inference by sampling from noisy posteriors. 

The predominant mode of science in non-replication-crisis affected fields is that they don't do this or do this less. 

Most of the time it seems like science is conducted like that in those fields because they have to. Can you come up with a better way of doing Psychology research? "Science in hard fields is hard" is definitely a less sexy hypothesis, but it seems obviously true?

They’re measuring a noisy phenomenon, yes, but that’s only half the problem.  The other half of the problem is that society demands answers.  New psychology results are a matter of considerable public interest and you can become rich and famous from them.  In the gap between the difficulty of supply and the massive demand grows a culture of fakery.  The same is true of nutrition— everyone wants to know what the healthy thing to eat is, and the fact that our current methods are incapable of discerning this is no obstacle to people who claim to know.

For a counterexample, look at the field of planetary science.  Scanty evidence dribbles in from occasional spacecraft missions and telescopic observations, but the field is intellectually sound because public attention doesn’t rest on the outcome.

jimmy

Can you come up with a better way of doing Psychology research?

 

Yes. More emphasis on concrete useful results, less emphasis on trying to find simple correlations in complex situations.

For example, "Do power poses work?". They did studies like this one where they tell people to hold a pose for five minutes while preparing for a fake job interview, and then found that the pretend employers pretended to hire them more often in the "power pose" condition. Even assuming there's a real effect where those students from that university actually impress those judges more when they pose powerfully ahead of time... does that really imply that power posing will help other people get real jobs and keep them past the first day? 

That's like studying "Are car brakes really necessary?" by setting up a short track and seeing if the people who run the "red light" progress towards their destination quicker. Contrast that with studying the cars and driving behaviors that win races, coming up with your theories, and testing them by trying to actually win races. You'll find out very quickly if your "brakes aren't needed" hypothesis is a scientific breakthrough or foolishly naive.

Instead of stu... (read more)

9Viliam
The reason I trust research in physics in general is that it doesn't end with publishing a paper. It often ends with building machines that depend on that research being right.

We don't just "trust the science" that light is a wave; we use microwave ovens at home. We don't just "trust the science" that relativity is right; we use the relativistic equations to adjust GPS measurements. Therefore it would be quite surprising to find out that any of these underlying theories is wrong. (I mean, it could be wrong, but it would have to be wrong in the right way that still keeps the GPS and the microwave ovens working. That limits the possibilities of what the alternative theory could be.)

Therefore, in a world where we all do power poses all the time, and if you forget to do them, you will predictably fail the exam...

...well, actually that could just be a placebo effect. (Something like seeing a black cat on your way to the exam, freaking out about it, and failing to pay full attention to the exam.) Damn!
4jimmy
Well said. I'm gonna have to steal that.

Yeah, "Can I fail my exam" is a bad test, because when the test is "can I fail" then it's easy for the theory to be "wrong in the right way". GPS is a good test of GR because you just can't do it without a better understanding of spacetime, so it has to at least get something right even if it's not the full picture. When you actually use the resulting technology in your day to day life and get results you couldn't have gotten before, then it almost doesn't matter what the scientific literature says, because "I would feel sorry for the good Lord. The theory is correct.".

There are psychological equivalents of this, which rest on doing things that are simply beyond the abilities of people who lack this understanding. The "NLP fast phobia cure" is a perfect example of this, and I can provide citations if anyone is interested. I really get a kick out of the predictable arguments between those who "trust the science" but don't understand it, and those who actually do it on a regular basis.

This reminds me of an amusing anecdote. I had a weird experience once where I got my ankle sprained pretty bad and found myself simultaneously indignantly deciding that my ankle wasn't going to swell and also thinking I was crazy for feeling like swelling was a thing I could control -- and it didn't swell. I told my friend about this experience, and while she was skeptical and thought it sounded crazy, she tried it anyway and her next several injuries didn't swell. Eventually she casually mentioned to someone "Nah, my broken thumb isn't going to swell because I decided not to", and the person she was talking to responded as if she had said something else because his brain just couldn't register what she actually said as a real possibility. She then got all self conscious about it and was kinda unintentionally gaslighted into feeling like she was crazy for thinking she could do that, and her thumb swelled up.

I had to call her and re
8johnswentworth
The problem with this model is that the "bad" models/theories in replication-crisis-prone fields don't look like random samples from a wide posterior. They have systematic, noticeable, and wrong (therefore not just coming from the data) patterns to them - especially patterns which make them more memetically fit, like e.g. fitting a popular political narrative. A model which just says that such fields are sampling from a noisy posterior fails to account for the predictable "direction" of the error which we see in practice.
5zoop
I made a mistake of omission in just saying "sampling from noisy posteriors"; note I didn't say they were performing unbiased sampling. To extend the Psychology example: a study could be considered a sampling technique of the noisy posterior. You appear to be arguing that the extent to which this is a biased sample is a "skill issue." I'm arguing that it is often very difficult to perform unbiased sampling in some fields; the issue might be a property of the posterior and not that the researcher has a weak prefrontal cortex. In this framing it would totally make sense if two researchers studying the same/correlated posterior(s) are biased in the same direction–it's the same posterior!

Curated. I think this model is pretty useful and well-compressed, and I'm glad to be able to concisely link to it.

Insofar as it is accurate, the policy implications are still much open to debate, both here on LessWrong and for other ecosystems in the world.

ROM

People did in fact try to sound the alarm about poor statistical practices well before the replication crisis, and yet practices did not change, 

This rings painfully true. As early as the late 1950s, at least one person was already raising a red flag about the risks that psychology[1] might veer into publishing a sea of false claims:

There is some evidence that in fields where statistical tests of significance are commonly used, research which yields nonsignificant results is not published. Such research being unknown to other investigators m

... (read more)

Could you please elaborate on what you mean by "highly memetic" and "internal memetic selection pressures"? I'm probably not the right audience for this piece, but that particular word (memetic) is making it difficult for me to get to grips with the post as a whole. I'm confused if you mean there is a high degree of uncritical mimicry, or if you're making some analogy to 'genetic' (and what that analogy is...)

9johnswentworth
It is indeed an analogy to 'genetic'. Ideas "reproduce" via people sharing them. Some ideas are shared more often, by more people, than others. So, much like biologists think about the relative rate at which genes reproduce as "genetic fitness", we can think of the relative rate at which ideas reproduce as "memetic fitness". (The term comes from Dawkins back in the 70's; this is where the word "meme" originally came from, as in "internet memes".)
gjm

I think you're using "memetic" to mean "of high memetic fitness", and I wish you wouldn't. No one uses "genetic" in that way.

An idea that gets itself copied a lot (either because of "actually good" qualities like internal consistency, doing well at explaining observations, etc., or because of "bad" (or at least irrelevant) ones like memorability, grabbing the emotions, etc.) has high memetic fitness. Similarly, a genetically transmissible trait that tends to lead to its bearers having more surviving offspring with the same trait has high genetic fitness. On the other hand, calling a trait genetic means that it propagates through the genes rather than being taught, formed by the environment, etc., and one could similarly call an idea or practice memetic if it comes about by people learning it from one another rather than (e.g.) being instinctive or a thing that everyone in a particular environment invents out of necessity.

When you say, e.g., "lots of work in that field will be highly memetic despite trash statistics, blatant p-hacking, etc." I am pretty certain you mean "of high memetic fitness" rather than "people aware of it are aware of it because they learned of it from others r... (read more)

5johnswentworth
I have split feelings on this one. On the one hand, you are clearly correct that it's useful to distinguish those two things and that my usage here disagrees with the analogous usage in genetics. On the other hand, I have the vague impression that my usage here is already somewhat standard, so changing to match genetics would potentially be confusing in its own right. It would be useful to hear from others whether they think my usage in this post is already standard (beyond just me), or they had to infer it from the context of the post. If it's mostly the latter, then I'm pretty sold on changing my usage to match genetics.
5Thane Ruthenis
Your use of "memetic" here did strike me as somewhat idiosyncratic; I had to infer it. I would have used "memetically viral" and derivatives in its place. (E. g., in place of "lots of work in that field will be highly memetic despite trash statistics", I would've said "lots of ideas in that field will be highly viral despite originating from research with trash statistics" or something.)
2ryan_b
To me memetic normally reads something like "has a high propensity to become a meme" or "is meme-like". I had no trouble interpreting the post from this basis.

I push back against trying to hew closely to usages from the field of genetics. Fundamentally I feel like that is not what talking about memes is for; it was an analogy from the start, not meant for the same level of rigor. Further, memes and how meme-like things are is much more widely talked about than genetics, so insofar as we privilege usage considerations I claim switching to one matching genetics would require more inferential work from readers overall, because the population of readers conversant with genetics is smaller.

I also feel like the value of speaking in terms of memes in the post is that the replication crisis is largely the fault of non-rigorous treatment; that is to say, in many fields the statistical analysis parts really were/are more of a meme inside the field rather than a rigorous practice. People just read other people's published papers' analysis sections, and write something shaped like that, replicability be damned.
2habryka
Yep, it seems like pretty standard usage to me (and IMO seems conceptually fine, despite the fact that "genetic" means something different, since for some reason using "memetic" in the same way feels very weird or confused to me, like I would almost never say "this has memetic origin")
5johnswentworth
... though now that it's been pointed out, I do feel like I want a short handle for "this idea is mostly passed from person-to-person, as opposed to e.g. being rederived or learned firsthand". I also kinda now wish "highly genetic" meant that a gene has high fitness, that usage feels like it would be more natural.

I think in principle it makes sense in the same sense “highly genetic” makes sense. If a trait is highly genetic, then there’s a strong chance for it to be passed on given a reproductive event. If a meme is highly memetic, then there’s a strong chance for it to be passed on given an act of information transmission.

In genetic evolution it makes sense to distinguish this from fitness, because in genetic evolution the dominant feedback signal is whether you found a mate, not the probability a given trait is passed to the next generation.

In memetic evolution, the dominant feedback signal is the probability a meme gets passed on given a conversation, because there is a strong correlation between the probability someone passes on the information you told them, and getting more people to listen to you. So a highly memetic meme is also incredibly likely to be highly memetically fit.

2Garrett Baker
I definitely had no trouble understanding the post, and the usage seems very standard among blogs I read and people I talk to.
Linch

Again using the replication crisis as an example, you may have noticed the very wide (like, 1 sd or more) average IQ gap between students in most fields which turned out to have terrible replication rates and most fields which turned out to have fine replication rates.

This is rather weak evidence for your claim ("memeticity in a scientific field is mostly determined, not by the most competent researchers in the field, but instead by roughly-median researchers"), unless you additionally posit another mechanism like "fields with terrible replication rat... (read more)

2johnswentworth
Why would that be relevant?
4Linch
If the means/medians are higher, the tails are usually higher as well. A Norm(μ=115, σ=15) distribution will have a much lower proportion of data points above 150 than a Norm(μ=130, σ=15) distribution. The same argument applies to other realistic distributions. So if all I know about fields A and B is that B has a much lower mean than A, by default I'd also assume B has a much lower 99th percentile than A, and a much lower percentage of people above some "genius" cutoff.
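As a quick illustrative check of that claim (a minimal sketch; the cutoff of 150 and σ = 15 are assumptions chosen just for the example, not anything from the post):

```python
from scipy.stats import norm

# Proportion of each population above an illustrative "genius" cutoff of 150.
for mu in (115, 130):
    tail = norm.sf(150, loc=mu, scale=15)  # survival function: P(X > 150)
    print(f"mean {mu}: P(X > 150) ≈ {tail:.4f}")

# mean 115: P(X > 150) ≈ 0.0098
# mean 130: P(X > 150) ≈ 0.0912
```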
5johnswentworth
Oh I see, you mean that the observation is weak evidence for the median model relative to a model in which the most competent researchers mostly determine memeticity, because higher median usually means higher tails. I think you're right, good catch.
Algon

community norms which require basically everyone to be familiar with statistics and economics

I disagree. At best, community norms require everyone to in principle be able to follow along with some statistical/economic argument. 
That is a better fit with my experience of LW discussions. And I am not, in fact, familiar with statistics or economics to the extent I am with e.g. classical mechanics or pre-DL machine learning. (This is funny for many reasons, especially because statistical mechanics is one of my favourite subjects in physics.) But it remain... (read more)

I think the central claim is true in some fields and not particularly true in others. But one variation of the claim that interests me: if we consider memeticity relative to public opinion/awareness, then I agree with this more strongly, in that the opinions of median researchers are disproportionately more popular than more correct opinions. In some sense, I think the "most correct opinions" are basically never the most popular with the public, though perhaps I wouldn't invoke the median researcher problem to explain this edge case, but a more specific hy... (read more)

I happened to read a Quanta article about equivalence earlier, and one of the threads is the difficulty of a field applying a big new concept without the expository and distillation work of putting stuff into textbooks/lectures/etc.

That problem pattern-matches with the replication example, but well-motivated at the front end instead of badly-motivated at the back end. It still feels like exposition and distillation are key tasks that govern the memes-in-the-field passed among median researchers.

I strongly suspect the crux of the replication crisis example ... (read more)

On the other hand, that does not mean that those fields are going to recognize [a smart outsider] as a thought-leader or whatever.

Could this be a more general problem that people in academia are very unlikely to recognize anything that originates outside of academia?

I mean, first you get your degree by listening to people in academia, and reading textbooks written by people in academia. Then you learn to solve problems, defined by people in academia, under the leadership of people in academia, mostly by reading a lot of papers written and reviewed by peopl... (read more)

4danwil
If an outsider's objective is to be taken seriously, they should write papers and submit them to peer review (e.g. conferences and journals). Yann LeCun has gone so far as to say that independent work only counts as "science" if submitted to peer review: "Without peer review and reproducibility, chances are your methodology was flawed and you fooled yourself into thinking you did something great." - https://x.com/ylecun/status/1795589846771147018?s=19.

From my experience, professors are very open to discussing ideas and their work with anyone who seems serious, interested, and knowledgeable. Even someone inside academia will face skepticism if their work uses completely different methods. They will have to very convincingly prove the methods are valid.
4Viliam
I agree, but there are two different perspectives:

  • whether the outsider wants to be taken seriously by academia
  • whether the people in academia want to collect knowledge efficiently

From the first perspective, of course, if you want to be taken seriously, you need to play by their rules. And if you don't, then... those are your revealed preferences, I guess.

It is the second perspective I was concerned about. I agree that the outsiders are often wrong. But, consider the tweet you linked: It seems to me that from the perspective of a researcher, taking ideas of the outsiders who have already developed successful products based on them, and examining them scientifically (and maybe rejecting them afterwards), should be a low-hanging fruit.

I am not suggesting to treat the ideas of the outsiders as scientific. I am suggesting to treat them as "hypotheses worth examining". Refusing to even look at a hypothesis because it is not scientifically proven yet, that's putting the cart before the horse. Hypotheses are considered first, scientifically proved later; not the other way round. All scientific theories were non-scientific hypotheses first, at the moment they were conceived.

Choosing the right hypothesis to examine is an art. Not a science yet; that is what it becomes after we examine it. In theory, any (falsifiable) hypothesis could be examined scientifically, and afterwards confirmed or rejected. In practice, testing completely random hypotheses would be a waste of time; they are 99.9999% likely to be wrong, and if you don't find at least one that is right, your scientific career is over. (You won't become famous by e.g. testing a million random objects and scientifically confirming that none of them defies gravity. Well, you probably would become famous actually, but in the bad way.)

From the Bayesian perspective, what you need to do is test hypotheses that have a non-negligible prior probability of being correct. From the perspective of the truth-seeker,
danwil

In entrepreneurship, there is the phrase "ideas are worthless". This is because everyone already has lots of ideas they believe are promising. Hence, a pre-business idea is unlikely to be stolen.

Similarly, every LLM researcher already has a backlog of intriguing hypotheses paired with evidence. So an outside idea would have to seem more promising than the backlog. Likely this will require the proposer to prove something beyond evidence.

For example, Krizhevsky/Sutskever/Hinton had the idea of applying then-antiquated neural nets to recognize images. Only when they validated this in the ImageNet competition did their idea attract more researchers.

This is why ideas/hypotheses -- even with a bit of evidence -- are not considered very useful. What would be useful is to conclusively prove an idea true. This would attract lots of researchers ... but it turns out to be incredibly difficult to do, and in some cases requires sophisticated techniques. (The same applies to entrepreneurship. Few people will join/invest until you validate the idea and lower the risk.)

Sidenote: There are countless stories of academics putting forth original, non-mainstream ideas, only to be initially rejected by their peers (e.g. Cantor's infinity). I believe this not to be an issue just with outsiders, but merely that extraordinary claims require extraordinary proof. ML is an interesting example, because lots of so-called outsiders without a PhD now present at conferences!

What would this say about subculture gatekeeping? About immigration policy?

I think one thing that's missing here is that you're making a first-order linear approximation of "research" as just continually improving in some direction. I would instead propose a quadratic model where there is some maximal mode of activity in the world, but this mode can face certain obstacles that people can remove. Research progress is what happens when there's an interface for removing obstacles that people are gradually developing familiarity with (for instance because it's a newly developed interface).

Different people have different speeds by whi... (read more)

Prototypical example: imagine a scientific field in which the large majority of practitioners have a very poor understanding of statistics, p-hacking, etc. Then lots of work in that field will be highly memetic despite trash statistics, blatant p-hacking, etc. Sure, the most competent people in the field may recognize the problems, but the median researchers don’t, and in aggregate it’s mostly the median researchers who spread the memes.

Complicated analysis (like going far beyond p-values) is easy for anyone to see and it is evidence of effort. Comple... (read more)

Is your concern simply with the 'median' researchers' unfamiliarity with basic statistics, or with the other variables that typically accompany a researcher without basic statistics knowledge?

On a different note, 
Due to your influence/status in the field, I think it would be awesome for you to put together a clear-cut resource detailing what you would like to see the 'median' researchers do that they currently are not (other than the obvious frustration regarding statistics incompetence stated above).



 

3johnswentworth
I don't think statistics incompetence is the One Main Thing; it's just an example which I expect to be relatively obvious and legible to readers here.

A small research community of unusually smart/competent/well-informed people can relatively-easily outperform a whole field, by having better internal memetic selection pressures.

 

It's not obvious to me that this is true, except insofar as a small research community can be so unusually smart/competent/etc that their median researcher is better than a whole field's median researcher so they get better selection pressure "for free". But if an idea's popularity in a wide field is determined mainly by its appeal to the median researcher, I would naturally... (read more)

8Raemon
I'm a bit surprised this is the crux for you. Smaller communities have a lot more control over their gatekeeping because, like, they control it themselves, whereas the larger field's gatekeeping is determined via open-ended incentives in the broader world that thousands (maybe millions?) of people have influence over.

(There's also things you could do in addition to gatekeeping. See Selective, Corrective, Structural: Three Ways of Making Social Systems Work.)

(This doesn't mean smaller research communities automatically have good gatekeeping or other mechanisms, but it doesn't feel like a very confusing or mysterious problem on how to do better.)
1Ninety-Three
Does the field of social psychology not control the gatekeeping of social psychology? I guess you could argue that it's controlled by whatever legislative body passes the funding bills, but most of the social psychology incentives seem to be set by social psychologists, so both small and large communities control their gatekeeping themselves and it's not obvious to me why smaller ones would do better.

At some level of smallness your gatekeeping can be literally one guy who decides whether an entrant is good enough to pass the gate, and I acknowledge that that seems like it could produce better than median selection pressure. But by the time you get big enough that you're talking about communities collectively controlling the gatekeeping... aren't we just describing the same system at a population of one thousand vs one hundred thousand?

I could imagine an argument that yes actually, differences of scale matter because larger communities have intrinsically worse dynamics for some reason, but if that's the angle I would expect to at least hear what the reason is rather than have it be left as self-evident.
2Raemon
An individual Social Psychology lab (or loose collection of labs) can choose who to let in. Frontier Lab AI companies can decide who to hire, and what sort of standards they want internally (and maybe, in a loose alliance with other Frontier Lab companies).

The Immoral Mazes outlines some reasons that you might think large institutions are dramatically worse than smaller ones (see: Recursive Middle Manager Hell for a shorter intro, although I don't spell out the part of the argument about how mazes are sort of "contagious" between large institutions).

But the simpler argument is "the fewer people you have, the easier it is for a few leaders to basically make personal choices based on their goals and values," rather than selection effects resulting in the largest institutions being better modeled as "following incentives" rather than "pursuing goals on purpose." (If an organization didn't follow the incentives, they'd be outcompeted by one that does.)

How do you think competent people can solve this problem within their own fields of expertise? 

For example, the EA community is a small & effective community like you've referenced for commonplace charity/altruism practices. 

How could we solve the median researcher problem & improve the efficacy & reputation of altruism as a whole?

Personally, I suggest taking a marketing approach. If we endeavor to understand important similarities between "median researchers", so that we can talk to them in the language they want to hear, we may be a... (read more)

Well, I hope that the self-importance shown in this post is not a true reflection of the community; although unfortunately I think it might well be.

Spurious correlation here, big time, imho.

Give me the natural content of the field and I bet I can easily predict whether it may or may not have a replication crisis, without knowing the exact type of students it attracts.

I think it's mostly that the fields where bad science may be sexy and less trivial/unambiguous to check, or those where you can make up/sell sexy results independently of their grounding, may for whichever reason also be those that attract the non-logical students.

 

I agree, though, about the mob overwhelming the smart outliers; I just think how much that mob creates a replication crisis is at least in large part dependent on the intrinsic nature of the field rather than due to the exact IQs.

We argue that memeticity—the survival and spread of ideas—is far more complex than the influence of average researchers or the appeal of articulate theories. Instead, the persistence of ideas in any field depends on a nuanced interplay of feedback mechanisms, boundary constraints, and the conditions within the field itself.

In fields like engineering, where feedback is often immediate and tied directly to empirical results, ideas face constant scrutiny through testing and validation. Failed practices here are swiftly corrected, creating a natural selection ... (read more)
