gwern comments on Distribution of knowledge and standardization in science - Less Wrong

2 [deleted] 27 March 2014 10:48AM


Comment author: gwern 27 March 2014 05:39:52PM *  11 points [-]

Rather than speculating in the abstract, the best solution is perhaps to try to come up with concrete standardization ideas, and discuss whether they would work.

Or look at existing efforts. There are two main categories I know of: reporting checklists and quality-evaluation checklists (in addition to the guidelines/recommendations published by professional groups, such as the APA's manual, apparently based on JARS, or AERA's standards).

Some reporting checklists:

Some quality-evaluation scales:

I'm generally in favor of these. The recommendations are usually quite reasonable, and a tremendous amount of research neglects the basics or adds in pointless variation - certainly, people will plead that methodologist-nazis will cramp their style, but when I try to meta-analyze creatine and find that in the ~10 studies, they collectively employ something like 31 different endpoints (some simply made up by that particular set of researchers), most for no compelling reason, so that only 2 metrics were used in >3 studies, it's hard for me to take this concern seriously - we clearly are nowhere near an equilibrium where researchers are being herded into using all the same inappropriate metrics.
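The endpoint-fragmentation problem described above can be made concrete with a short tally. The study names and endpoint lists below are hypothetical, purely for illustration (not the actual creatine studies or their endpoints); the point is how few metrics clear a ">3 studies" bar when each research group picks or invents its own.

```python
from collections import Counter

# Hypothetical endpoint lists for ~10 studies (illustrative only;
# not the real creatine literature).
studies = {
    "study_01": ["RAPM", "digit span", "reaction time"],
    "study_02": ["RAPM", "working memory composite"],
    "study_03": ["digit span", "Stroop", "custom vigilance task"],
    "study_04": ["RAPM", "digit span"],
    "study_05": ["WAIS vocabulary", "custom memory battery"],
    "study_06": ["RAPM", "trail-making"],
    "study_07": ["digit span", "Stroop"],
    "study_08": ["custom attention index"],
    "study_09": ["digit span", "reaction time"],
    "study_10": ["RAPM", "novel composite score"],
}

# Count how many studies report each endpoint.
usage = Counter(e for endpoints in studies.values() for e in endpoints)

# Endpoints a meta-analyst can actually pool: reported by >3 studies.
poolable = [e for e, n in usage.items() if n > 3]
print(sorted(poolable))  # prints ['RAPM', 'digit span'] - everything else is too fragmented
```

Even with generous overlap in this toy data, only two endpoints survive the threshold; the rest of the measurement effort is unusable for pooled estimates, which is the cost of "pointless variation" in study design.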

Comment author: Stefan_Schubert 28 March 2014 10:46:12AM *  3 points [-]

Thanks, very good post.

The recommendations are usually quite reasonable, and a tremendous amount of research neglects the basics or adds in pointless variation - certainly, people will plead that methodologist-nazis will cramp their style, but when I try to meta-analyze creatine and find that in the ~10 studies, they collectively employ something like 31 different endpoints (some simply made up by that particular set of researchers), most for no compelling reason, so that only 2 metrics were used in >3 studies, it's hard for me to take this concern seriously - we clearly are nowhere near an equilibrium where researchers are being herded into using all the same inappropriate metrics.

I agree. This reminds me of this post:

But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.

After all—anyone acculturated by academia knows that censorship is a very grave sin... in their walled gardens where it costs thousands and thousands of dollars to enter, and students fear their professors' grading, and heaven forbid the janitors should speak up in the middle of a colloquium.

It is easy to be naive about the evils of censorship when you already live in a carefully kept garden. Just like it is easy to be naive about the universal virtue of unconditional nonviolent pacifism, when your country already has armed soldiers on the borders, and your city already has police. It costs you nothing to be righteous, so long as the police stay on their jobs.

Standardization of techniques and terminology could be seen as a kind of "censorship" (certainly it is seen as such by the people who talk of methodologist-nazis). Just as academics fail to see the many advantages of the censorship that academia's system for excluding the ignorant amounts to, it is easy to fail to see the many advantages that the standardization of terminology and techniques we already have brings with it. The present academic system, and certain terminology and techniques, have become so natural to us that we fail to see that they are the result of a process which in effect functioned as a kind of censorship ("don't do so-and-so if you want to be taken seriously"). Hence we fail to see how many advantages such censorship has over anarchism.

Comment author: Metus 27 March 2014 06:16:50PM 1 point [-]

Excellent material as usual, thank you.