fiddlemath comments on Open Thread June 2010, Part 3 - Less Wrong

6 Post author: Kevin 14 June 2010 06:14AM


Comment author: SilasBarta 15 June 2010 04:20:34AM *  11 points [-]

I mainly have evidence for the absolute level, not necessarily for the trend (of science getting worse). For the trend, I could point to Goodhart phenomena: the publications-per-unit-time metric being gamed, and gamed more heavily as time progresses.

I also think that in this context, the absolute level is evidence of the trend, when you consider that the number of scientists has increased; if the quality of science in general has not increased with more people, it's getting worse per unit person.

For the absolute level, I've noticed scattered pieces of the puzzle that, against my previous strong presumption, support my suspicions. I'm too sleepy to go into detail right now, but briefly:

  • There's no way that all the different problems being attacked by researchers can be really, fundamentally different: the function space is too small for a unique structure to exist for each problem, so most should be reducible to a common mathematical formalism that can be passed to mathematicians, who can tell whether it's solvable.

  • There is evidence that such connections are not being made. The example I use frequently is ecologists and the method of adjacency-matrix eigenvectors. That method has been around since the 1960s and forms the basis of Google's PageRank, which uses it to identify crucial sites. Ecologists didn't apply it to the problem of identifying critical ecosystem species until a few years ago.

  • I've gone into grad school myself and found that existing explanations of concepts are a scattered mess: it's almost as if they don't want you to understand papers or break into the advanced topics that are the subject of research. Whenever I do understand such a topic, I find I can explain it in much less time than the experts in the field took to explain it to me. This creates a fog over research, allowing big mistakes to last for years, with no one ever noticing them because too few eyeballs are on them. (This explanation barrier is the topic of my ever-upcoming article "Explain Yourself!")
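The eigenvector method from the second bullet is simple enough to sketch. Here's a toy illustration (the adjacency matrix below is invented for demonstration, not drawn from any real ecosystem or web graph): power iteration extracts the dominant eigenvector of the adjacency matrix, and its entries rank the nodes by centrality. PageRank is this same idea plus column normalization and a damping factor.

```python
import numpy as np

# Toy directed adjacency matrix: A[i][j] = 1 means node j points to node i
# (a link in the web reading, or a dependency in the ecological reading).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
], dtype=float)

def principal_eigenvector(M, iters=500):
    """Power iteration: repeatedly apply M and renormalize.

    For a nonnegative, irreducible, aperiodic matrix this converges to
    the dominant (Perron) eigenvector, whose entries are all positive.
    """
    v = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(iters):
        v = M @ v
        v = v / v.sum()  # renormalize so the entries sum to 1
    return v

scores = principal_eigenvector(A)
ranking = np.argsort(-scores)  # most "central" node first
```

The same few lines rank web pages or candidate keystone species; only the interpretation of the matrix changes, which is exactly the kind of cross-field reduction being described.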

As an example of what a mess it is (and at risk of provoking emotions that aren't relevant to my point), consider climate science. This is an issue where they have to convince LOTS of people, most of whom aren't as smart. You would think that in documenting the evidence supporting their case, scientists would establish a solid walkthrough: a runnable, editable model with every assumption traceable to its source and all inputs traceable to the appropriate databases.

Yet when climate scientists were in the hot seat last fall and wanted to reaffirm the strength of their case, they had no such site to point anyone to. RealClimate.org made a post saying basically, "Um, anyone who's got the links to the public data, it'd be nice if you could post them here..."

To clarify, I'm NOT trying to raise the issue of AGW being a scam, etc. I'm saying that no matter how good the science is, here we have a case where it's of the utmost importance to explain the research to the masses, so you would expect it to have the most thorough documentation and traceability. Yet here, at the top of the hill, no one bothered to trace out the case from start to finish, fully connecting this domain to the rest of collective scientific knowledge.

Comment author: fiddlemath 15 June 2010 05:54:27AM *  8 points [-]

If the quality of science in general has not increased with more people, it's getting worse per unit person.

Er, I'd just expect to see more science being done. I know of no one studying overall mechanisms of science-as-it-is-realized (little-s "science"), and thereby seriously influencing it. Further, that's not something current science is likely to worry about, unless someone can somehow point to irrefutable evidence that science is underperforming.

All of the points you list are real issues; I watch them myself, to constant frustration. I think they have a common cause in the incentive structure of science. The following account has been hinted at many times around Less Wrong, but spelling it out may make it clear how your points follow:

Researchers focus on churning out papers that can actually get accepted at some highly-rated journal or conference, because the quantity of such papers is seen as the main guarantor of being hired as faculty, making tenure, and getting research grants. This quantity has a strong effect on scientists' individual futures and their reputations. For all but the most well-established or idealistic scientists, this pressure overrides the drive to promote general understanding, increase the world's useful knowledge, or satisfy curiosity[*].

This pressure means that scientists seek the next publication and structure their investigations to yield multiple papers, rather than telling a single coherent story from what might be several least publishable units. Thus, you should expect little synthesis - a least publishable unit is very nearly the author's research minus the current state of knowledge in a specialized subfield. Thus, as you say, existing explanations are a scattered mess.

Since these explanations are scattered and confusing, it's brutally difficult to understand the cutting edge of any particular subfield. Following publication pressure, papers are engineered to garner acceptance from peer reviewers. Those reviewers are part of the same specialized subfield as the author. Thus, if the author fails to use a widely-known concept from outside his subfield to solve a problem in his paper, the reviewers aren't likely to catch it, because it's hard to learn new ideas from other subfields. Thus, the author has no real motivation to investigate subfields outside of his own expertise, and we have a stable situation. Thus, your first and second points.

All this suggests to me that, if we want to make science better, we need to somehow twiddle its incentive structure. But changing longstanding organizational and social trends is, er, outside of my subfield of study.

[*] This demands substantiation, but I have no studies to point to. It's common knowledge, perhaps, and it's true in the research environments I've found myself in. Does it ring true for everyone else reading this, with appropriate experience of academic research?

Comment author: Douglas_Knight 14 July 2010 08:20:38AM 0 points [-]

It's been broken forever, in basically the same way it is now...

the quantity of such papers is seen as the main guarantor of being hired as faculty, making tenure, and getting research grants.

No, these are recent developments (though the stuff from your first post may be old). For the first 300 years, scientists were amateurs without grants, and no one cared about quantity. For evidence of recent changes, look at the age of NIH PIs.

Comment author: Morendil 15 June 2010 06:22:55AM *  0 points [-]

At the conclusion of the interview, Pierre deduces one general lesson: "You can't be inhibited, you must free yourself of the psychological obstacle that consists in being tied to something." Oh no, our friend Pierre is not inhibited; look how for the past twenty years he has jumped from subject to subject, from boss to boss, from country to country, bringing into action all the differences of potential, seizing polypeptides, selling them off as soon as they begin declining, betting on Monod and then dropping him as soon as he gets bogged down; and here he is, ready to pack his bags again for the West Coast, the title of professor, and a new laboratory. What thing is he accumulating? Nothing in particular, except perhaps the absence of inhibition, a sort of free energy prepared to invest itself anywhere. Yes, this is certainly he, the Don Juan of knowledge. One will speak of "intellectual curiosity," a "thirst for truth," but the absence of inhibition in fact designates something else: a capital of elements without use value, which can assume any value at all, provided the cycle closes back on itself while always expanding further. Pierre Kernowicz capitalizes the jokers of knowledge.

-- Bruno Latour, Portrait of a Biologist as Wild Capitalist

(ETA: see also.)