nerzhin comments on Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
When I wrote "What is Bunk?" I thought I had a pretty good idea of the distinction between science and pseudoscience, except for some edge cases. Astrology is pseudoscience; astronomy is science. At the time, I was trying to work out a rubric for the edge cases (things like macroeconomics).
Now, though, knowing a bit more about the natural sciences, it seems that even perfectly honest "science" is much shakier and likelier to be false than I supposed. There's apparently a high probability that the conclusions of a molecular biology paper will be false -- even if the journal is prestigious and the researchers are all at a world-class university. There's simply a lot of pressure to make results look more conclusive than they are.
In machine learning, a field whose literature I sometimes read, there are foundational debates about the best methods. Ideas touted by very smart and highly credentialed people often turn out, years down the road, to be ineffective. Apparently smart and accomplished researchers will often claim that some other apparently smart and accomplished researcher is doing it all wrong.
If you don't actually know a field, you might think, "Oh. Tenured professor. Elite school. Dozens of publications and conference presentations. Huge erudition. That means I can probably believe his claims." Whereas actually, he's extremely fallible -- not just theoretically fallible, but with a serious probability of being dead wrong.
I guess the moral is "Don't trust anyone but a mathematician"?
I'm pretty sure you're at least half-joking. But just in case, I need to point out that mathematicians are not immune to this kind of thing.
yep, joke.