James_Miller comments on How to fix academia? - Less Wrong Discussion

9 Post author: passive_fist 20 August 2015 12:50AM

Comment author: James_Miller 20 August 2015 01:50:01AM *  6 points [-]

You get what you measure/pay for. I'm actually surprised by how honest academia is given the terrible incentives. When I was on the Stanford Law Review we verified everything, including every single footnote, before publishing an article. While it would be impossible to do this for every scientific article, how about doing it for those considered the best, and not trusting articles that didn't receive this level of scrutiny?

Comment author: MarsColony_in10years 20 August 2015 03:37:23PM 5 points [-]

You get what you measure/pay for.

Sometimes. Monetary incentives are a good way of promoting behavior for which good metrics exist, and which requires little in the way of creativity, innovation, or initiative. If you pay people per peer review, then some people will just skim 10 papers a day and google one or two random nitpicks. If you instead mandate a minimum time spent per paper, then papers more complex than average will not receive a thorough review. Beware of Goodhart's Law.

I would lean toward making it standard practice for researchers to spend a large chunk of their time reading the literature and conducting reviews. Ideally this would be a mix of broad, breadth-first literature surveys and narrow, topic-specific reading.

In my utopian world, I'd like to see things like this (yes, it's probably a little/lot idealistic, but we can dream):

  • Spending too much or not enough time reading: "I see you've done a lot of good work, but I'm worried that this is coming at the cost of your peer review and other reading duties. It's important to have a strong foundation as well as depth of knowledge. This helps identify knowledge gaps in need of research, and helps prevent unknown-unknowns from jeopardizing projects. How about you spend a couple weeks reading once you've wrapped up the current project?"

  • Publishing too many, or not enough, papers: "You've put out a lot of papers recently. That's good, but I'm worried that this might indicate a lack of thoroughness. What do you think?" "Well, I think it just looked like more work than it was. I kinda selected them because they had a high perceived value, but didn't require as much effort. I'm aiming at low-hanging fruit." "Fair enough. That's a legitimate strategy. Just try to make sure you are picking items with high actual value, rather than simply optimizing for perceived value."

  • Rushing through work rather than being thorough (or maybe even being too thorough?): "I just read the peer review of your paper. It seems like you made a couple mistakes that you could have caught by slowing down and being more thorough. For instance, he pointed out a flaw in your experimental design which left room for an alternative explanation of the data. Now we'll have to publish an entire separate experiment in order to confirm that the more likely hypothesis is correct. We're in the business of pushing science forward, NOT publishing papers."

Comment author: Lumifer 20 August 2015 04:33:13PM 1 point [-]

I'd like to see things like this

That sounds a lot like a professor talking to a grad student.

If you have an actually innovative researcher (probably one with a big ego), telling him to pause his research and read more is not likely to be productive X-)

Comment author: passive_fist 20 August 2015 04:15:11AM *  2 points [-]

I'm actually surprised by how honest academia is given the terrible incentives.

The consequences of being caught committing fraud (essentially the end of one's career, in most cases) are severe. This probably acts as the main deterrent against fraud. Yet it's still apparently not enough.

Comment author: James_Miller 20 August 2015 06:54:55AM *  7 points [-]

The consequences for being caught committing fraud (essentially termination of one's career in most cases) are too high.

Not for "soft fraud" like data mining. And other types of fraud, such as fudging the results of an experiment, would be really hard to prove, given that lots of honestly done experiments don't seem to replicate. Having someone find an error in your analysis certainly isn't cause for firing a tenured professor, and taking this into account, I bet some people make deliberate errors that render their analysis more publishable. I've heard it can sometimes be very difficult to get another professor to give up his data, even when the data was used to publish an article in a journal with a rule saying data must be made available upon request.

Comment author: VoiceOfRa 20 August 2015 04:23:43AM 1 point [-]

I'm actually surprised by how honest academia is given the terrible incentives.

How honest is it? Are you sure you're not underestimating its honesty?

Comment author: Lumifer 20 August 2015 02:24:02AM 1 point [-]

You get what you measure/pay for.

This.