So if it's true that most publications are uninteresting, and if it's true that most academics have to publish at a high rate to protect their careers and send the right signals, then we don't want to punish and humiliate academics for publishing stupid ideas or badly executed experiments. And publishing a paper that demonstrates another party did a terrible job does exactly that. The signal-to-noise ratio in academic journals wouldn't increase by much, but suddenly academics could simply reach their paper quota by picking apart the ideas of other academics.
Surely it's better to have academics picking apart crap than producing crap.
Not necessarily. Ignoring crap may be a better strategy than picking it apart.
Cooperation is also easier when different groups in the same research area don't try too hard to invalidate each other's claims. If the problem in question is interesting, you're much better off writing your own paper on it, with your own claims and results. You can dismiss the other paper with a single paragraph: "Contrary to the findings of I.C. Wiener in [2], we observe that..." and leave it at that.
The system is entirely broken, but I don't see an easy way to make it better.
Related to: Parapsychology: the control group for science, Dealing with the high quantity of scientific error in medicine
Some of you may remember past Less Wrong discussion of the Daryl Bem study, which claimed to show precognition and was published, amid much controversy, in a top psychology journal, JPSP. The editors and reviewers explained their decision by saying that the paper was clearly written and used standard experimental and statistical methods, so that their disbelief in it (driven by physics, the past failure to demonstrate psi, etc.) was not appropriate grounds for rejection.
Because of all the attention the paper received (unlike similar claims published in parapsychology journals), it elicited a fair amount of both critical review and attempted replication. Critics pointed out that the hypotheses were selected and switched around 'on the fly' during Bem's experiments, with the effect sizes declining as sample size grew (a strong signal of data mining). More importantly, Richard Wiseman established a registry for advance announcement of new Bem replication attempts.
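Why is a declining effect size a signal of data mining? One mechanism is optional stopping: if an experimenter peeks at the data repeatedly and stops as soon as the result is "significant", the experiments that stop early by luck report the largest effects, and the apparent effect shrinks as samples grow. A minimal simulation of this (illustrative only; the parameters and the simple z-test are my assumptions, not anything from Bem's actual procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def optional_stopping_trial(max_n=200, batch=20):
    """One experiment on a TRUE NULL effect, peeking after every batch
    and stopping as soon as the result looks 'significant'."""
    data = []
    while len(data) < max_n:
        data.extend(rng.normal(0.0, 1.0, batch))
        x = np.asarray(data)
        n = len(x)
        z = x.mean() / (1 / np.sqrt(n))  # z-test with known sd = 1
        if abs(z) > 1.96:                # nominal alpha ~ 0.05
            return n, abs(x.mean())      # stopping n, observed |effect|
    return None                          # never reached significance

results = [optional_stopping_trial() for _ in range(5000)]
hits = [r for r in results if r is not None]

# Peeking inflates the false-positive rate well above the nominal 5%.
print(f"fraction 'significant' under the null: {len(hits)/5000:.2f}")

# And the experiments that stopped earliest report the largest effects,
# so observed effect size declines with sample size.
early = [d for n, d in hits if n <= 40]
late  = [d for n, d in hits if n > 40]
print(f"mean |effect| when stopped early: {np.mean(early):.2f}")
print(f"mean |effect| when stopped late:  {np.mean(late):.2f}")
```

The decline falls out of the significance threshold itself: at n = 20 a result only counts as significant if |mean| exceeds roughly 0.44, while at n = 200 the bar is about 0.14, so late stoppers necessarily report smaller effects.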
A replication registry guards against publication bias, and at least 5 attempts were registered. As far as I can tell, at the time of this post the subsequent replications have, unsurprisingly, failed to replicate Bem's results.1 However, JPSP and the other high-end psychology journals refused to publish the results, citing standing policies of not publishing straight replications.
From the journals' point of view, this (common) policy makes sense: bold new claims will tend to be cited more and raise journal status (which depends on citations per article), even though this means most of the 'discoveries' they publish will be false despite their p-values. However, this also means the journals give scientists career incentives to massage and mine their data for bogus results, but not to challenge bogus results by others. Alas.
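The claim that most published 'discoveries' can be false despite their p-values is a base-rate argument, and the arithmetic is short. A sketch with hypothetical numbers (the 5% prior, 80% power, and alpha = 0.05 are my illustrative assumptions, not figures from the post):

```python
# Suppose only 5% of the bold new hypotheses a journal favors are true,
# tests have 80% power, and the significance threshold is alpha = 0.05.
prior_true = 0.05
power = 0.80
alpha = 0.05

p_sig_and_true  = prior_true * power        # true hypothesis, detected
p_sig_and_false = (1 - prior_true) * alpha  # false hypothesis, fluke

p_true_given_sig = p_sig_and_true / (p_sig_and_true + p_sig_and_false)
print(f"P(claim true | significant result) = {p_true_given_sig:.2f}")
# Under these assumptions, a significant result is more likely false
# than true, because false hypotheses vastly outnumber true ones.
```

The p-value controls the error rate per false hypothesis, not the fraction of published positives that are true; when bold claims have a low prior, the flukes from the many false hypotheses swamp the genuine detections.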
1 A purported "successful replication" by a pro-psi researcher in Vienna turns out to be nothing of the kind. Rather, it is a study conducted in 2006 and retitled to take advantage of the attention on Bem's article, selectively pulled from the file drawer.
ETA: The Wikipedia article on Daryl Bem makes an unsourced claim that one of the registered studies has replicated Bem.
ETA2: Samuel Moulton, who formerly worked with Bem, mentions an unpublished (no further details) failed replication of Bem's results conducted before Bem submitted his article (the failed replication was not mentioned in the article).
ETA3: There is mention of a variety of attempted replications at this blog post, with 6 failed replications and 1 successful replication from a pro-psi researcher (not available online). It is based on this ($) New Scientist article.
ETA4: This large study performs an almost straight replication of Bem (same methods, same statistical tests, etc) and finds the effect vanishes.
ETA5: Apparently, the mentioned replication was again submitted to the British Journal of Psychology: