That is true: instrumental rationality is not taught systematically, as far as I know, anyway. It is also true that scientists are subject to the same biases as the rest of us. This could be one reason why some people seriously discuss Boltzmann brains or other incarnations of the anthropic principle; fortunately, such ideas are rarely taken seriously. The point about curiosity stoppers is also well taken. Some other points are not as good. For example:
If you come up with a bizarre-seeming hypothesis not yet ruled out by the evidence, and try to test it experimentally, Science doesn't call you a bad person.
Science doesn't, but your advisor, your department, or your granting agency will quite likely show you the error of your ways.
Maybe if you're super lucky and get a famous mentor, they'll tell you rare personal secrets like "Ask yourself which are the important problems in your field, and then work on one of those, instead of falling into something easy and trivial"
This is a recipe for failure unless you get lucky. You can think about deep problems, but you ought to work day-to-day on tractable ones. Einstein made exactly that switch, from tractable problems to deep ones, and achieved very little afterwards.
Be more careful than the journal editors demand; look for new ways to guard your expectations from influencing the experiment, even if it's not standard.
This advice is true, but also pretty standard.
For what it's worth, one of the first things my advisor taught me was how to recognize confirmation and publication bias in the numerical sciences. He's explained more than once that some subfields exist only because solving certain problems in the "easier" case was intractable, and so people just gave up on them.
Maybe I'm just lucky to have such an advisor?
My impression (as someone just finishing up undergrad) is that most of this stuff is floating around the science sphere, but that any given scientist is unlikely to receive serious exposure to more than a small portion of it.
For one thing, "scientists" are not monolithic.
Interestingly, before the genetic evidence, this was one of the longest-running disputes among historians. As far as I can tell, part of the problem was (and is) that many in the social sciences routinely apply Ockham's razor in reverse: simple explanations are treated as bad, even when they fit all the facts. You see this in medicine, too.
I asked a physicist and college professor what Solomonoff Induction was. He said he'd never heard of it. And now this post hits me emotionally.
Would a typical physicist have reason to know what Solomonoff induction is? Kolmogorov complexity is not computable in general, and even computable approximations to it are expensive to evaluate. As a rough approximation, a basic notion of Occam's razor works in essentially all practical contexts.
It's uncomputable in general. Even where you can approximate it, it's not at all clear that it's the right thing to use. I would rather have an easy-to-use but complicated theory than an elegant theory that is intractable. I don't see why a professional scientist would ever benefit from knowing it.
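To make the intractability point concrete, here is a minimal Python sketch of the usual practical stand-in: since true Kolmogorov complexity is uncomputable, one can use the length of a compressed encoding as a crude, computable upper bound on description length. The `description_length` helper and the example hypothesis strings are hypothetical illustrations, not anything from the original discussion.

```python
import zlib

def description_length(hypothesis: str) -> int:
    """Crude, computable upper bound on Kolmogorov complexity:
    the byte length of the zlib-compressed description."""
    return len(zlib.compress(hypothesis.encode("utf-8")))

# Occam-style comparison: prefer the hypothesis with the shorter description.
h1 = "the coin is fair"
h2 = "the coin is fair except on prime-numbered flips observed on Tuesdays"

for h in (h1, h2):
    print(description_length(h), h)
```

The sketch also shows the gap being complained about: a real compressor gives only a loose upper bound, nothing like the full Solomonoff mixture over all programs, which is why working scientists get by with the informal razor instead.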
Today's post, Do Scientists Already Know This Stuff?, was originally published on 17 May 2008. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Science Isn't Strict Enough, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.