I'm completely puzzled by the choice of state as the independent variable. The organization of the U.S. university system does not follow state lines in any straightforward way, so state can enter the causal mechanism only through its correlation with some concrete element of how the system is organized. However, from what I see, the paper doesn't even speculate on how exactly this could work, and the assumption that there exists competition among researchers at the state level strikes me as utterly absurd.
Availability bias? The US is conveniently divided up into 50 chunks, and a lot of statistical information is aggregated at state level, so there's a great convenience for researchers in dividing things up that way, whether it makes sense or not.
From my perspective the big question is the magnitude of these effects: does this just reduce the marginal gain of more scientists/funding for science, or does it change the sign, so that beyond a certain point hiring more scientists actually slows progress? How costly are these false positive results?
Epidemiology is pretty expensive as it is. The sign seems to still be positive for spending more on scientists - diminishing returns have set in hard in some areas like pharmaceuticals, but I haven't heard of an actual net negative.
I suspect that you are correct, but I have to wonder: if there were a net negative, how would we easily tell?
Obviously scientists are not constant in how many problems they cause, or else the answer would be either 'science could never get off the ground' (if they caused more problems than they solved) or 'they're not a net negative' (since science is making progress and obviously did get off the ground). So presumably there's some sort of changing marginal returns; usually, marginal returns diminish.
What does it look like if marginal returns are positive? Well, you toss in 1 scientist and get n more units of scientific output. What does it look like if marginal returns have fallen to 0? You toss in 1 more scientist and get 0 more units of scientific output. And if marginal returns have become negative, then you toss in 1 more scientist and see -n units, or scientific output in absolute terms falls.
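The three regimes above can be sketched with a toy production function. The square-root form here is an arbitrary illustrative choice, not an empirical model of science; the point is just that a concave function gives marginal returns that shrink toward zero without ever turning negative:

```python
def output(n_scientists: float) -> float:
    """Hypothetical total scientific output from n scientists.

    Concave (square-root) purely for illustration: more scientists
    always help, but each one helps less than the last.
    """
    return n_scientists ** 0.5

def marginal_return(n: int) -> float:
    """Extra output from adding one more scientist to a pool of n."""
    return output(n + 1) - output(n)

for n in [10, 100, 1000, 10000]:
    print(n, round(marginal_return(n), 4))

# Marginal returns fall as n grows but stay positive; an absolute
# decline in output would require marginal_return(n) < 0 somewhere.
```

With this function, moving from 10 to 100 to 1000 scientists shows steadily smaller (but still positive) gains per added scientist, which matches the "diminishing but not negative" pattern discussed below.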
Currently, all the datapoints I know of, like the pharmaceutical industry, point to diminishing returns (e.g. a fall in per-capita output, but not in absolute output), and not negative ones. But it's very hard to quantify scientific output...
That's my guess for this particular effect, and overall, but I have heard plausible arguments that when you add up all the different externalities, their collective effect is proportionally large. For instance, competition for grants diverts a lot of time from good scientists to grantsmanship, and reduces the autonomy of young investigators who might otherwise undertake higher-risk research. Further, it seems plausible that at the margin funding brings in lower-quality scientists whose output does not offset the negative externalities they generate. More quantitative data would be very nice for testing these claims.
Also, from what I've observed in practice, in many areas the publish-or-perish competition tends to produce not only this sort of bias, but also a terrible race to write papers that present the findings not clearly and objectively, but with the maximum self-promotional spin short of outright lying and fabrication. This problem is especially severe in areas that have run out of low-hanging fruit.
More evidence for this hypothesis:
Fanelli (2010). Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data. PLoS ONE 5(4): e10271.