by [anonymous]

From Gene Expression by Razib Khan, whom some of you may also know from the old gnxp site or perhaps from his BHTV debate with Eliezer.

Fifteen years ago John Horgan wrote The End Of Science: Facing The Limits Of Knowledge In The Twilight Of The Scientific Age. I remain skeptical as to the specific details of this book, but Carl’s write-up in The New York Times of a new paper in PNAS on the relative commonness of scientific misconduct in cases of retraction makes me mull over the genuine possibility of the end of science as we know it. This sounds ridiculous on the face of it, but you have to understand my model of and framework for what science is. In short: science is people. I accept the reality that science existed in some form among strands of pre-Socratic thought, or among late antique and medieval Muslims and Christians (not to mention among some Chinese as well). Additionally, I can accept the cognitive model whereby science and scientific curiosity is rooted in our psychology in a very deep sense, so that even small children engage in theory-building.

That is all well and good. The basic building blocks for many inventions and institutions existed long before their instantiation. But nevertheless the creation of institutions and inventions at a given moment is deeply contingent. Between 1600 and 1800 the culture of science as we know it emerged in the West. In the 19th and 20th centuries this culture became professionalized, but despite the explicit institutions and formal titles it is bound together by a common set of norms, an ethos if you will. Scientists work long hours for modest remuneration in the vain hope that they will grasp one fragment of reality, pull it out of the darkness, and declare to all, "behold!" That's a rather flowery way of putting the reality that the game is about fun & fame. Most will not gain fame, but hopefully the fun will continue. Even if others may find one's interests abstruse or esoteric, it is a special thing to be paid to reflect upon and explore what one is interested in.

Obviously this is an idealization. Science is a highly social and political enterprise, and injustice does occur. Merit and effort are not always rewarded, and on occasion machination truly pays. But overall the culture and enterprise muddle along, and yield a better sense of reality as it is than their competitors do. And yet all great things can end, and free-riders can destroy a system. If your rivals and competitors are cheating and getting ahead, what's to stop you but your own conscience? People will flinch from violating norms initially, even if those actions are in their own self-interest, but eventually they will break. And once they break, the norms have shifted; and once a few break, the rest will follow. This is the logic which drives a vicious positive feedback loop, as individuals, in their rational self-interest, begin to cannibalize the components of the institutions which ideally would allow all to flourish. No one wants to be the last one in a collapsing building, the sucker who asserts that the structure will hold despite all evidence to the contrary.

Deluded as most graduate students are, they by and large are driven by an ideal. Once the ideal, the illusion, is ripped apart and eaten away from within, one can't rebuild it in a day. Trust evolves and accumulates organically. One cannot will it into existence. Centuries of capital are at stake, and it would be best to learn the lessons of history. We may declare that history has ended, but we can't unilaterally abolish eternal laws.

Update:

Link to original post.

54 comments
[-][anonymous]270

Interesting stuff from the comment section.

Gregory Cochran:

Only some, not all, human populations produce any significant amount of science, and there is no sign that is about to change. Every one of those populations that does produce science has sub-replacement fertility and is also undergoing selection for lower IQ.

Gwern:

One of the more interesting ways science may die is something I call the ‘Agularity’: the average age of productivity keeps rising, so at some point either no one sane will invest that much of their lifetime into maybe being able to contribute or there will simply be too little time to do much new work before the normal decline with age starts around 40-50.

Jones has a series of papers on this aging phenomenon: "Age and Great Invention", 2006; "The burden of knowledge and the 'death of the Renaissance man'", 2005; "Age dynamics in scientific creativity", 2011; etc.

Gregory Cochran: Every one of those populations that does produce science has sub-replacement fertility and is also undergoing selection for lower IQ.

What happened to the Flynn Effect?

I'd note that average intelligence is largely irrelevant here. My guess is that with selective mating, the tail has thickened at least at the top end. Do you know anything about this? Of course, if they always just renormalize the tests to make them Gaussian, it will be hard to see.

Gwern: One of the more interesting ways science may die is something I call the ‘Agularity’: the average age of productivity keeps rising, so at some point either no one sane will invest that much of their lifetime into maybe being able to contribute or there will simply be too little time to do much new work before the normal decline with age starts around 40-50.

With better nutrition, longer lives, and an increasing percentage of scientists, we should expect the age of productivity to go up.

And do a lot of people in grad school strike you as particularly sane? It's a bunch of crazy folks who like thinking all day, with the most obsessive being the most successful as the years go by. Do you think they're all going to rush off to work at Walmart, or all become lawyers?

[-]gwern150

What happened to the Flynn Effect?

Have you been following the Flynn effect research? It's dead, Jim.

My guess is that with selective mating, the tail has thickened at least at the top end.

We don't see it.

With better nutrition, longer lives, and an increasing percentage of scientists, we should expect the age of productivity to go up.

None of that really follows. Better nutrition may change the baseline without affecting the arc of mental growth and decline; lives can be extended likewise; an increasing percentage of scientists doesn't help, and arguably by diminishing returns may hurt any average. Further, even if we established that any effects existed (like the former Flynn effect), that effect would have to overcome the underlying aging/delaying trend. It hasn't yet*, so why do you expect it to do so in the future?

* Note, by the way, this implies nutrition and life expectancy are pretty hopeless: we already reaped all the gains there were from iodine and cheap calories, and life expectancy increases are decelerating in the US.

Do you think they're all going to rush off to work at Walmart, or all become lawyers?

Yeah, pretty much. There are lots of non-science jobs, you know, and with tenure disappearing, where do you think all those grad students are going to go? Where did smart people go back when 1% of the population went to college? All sorts of places.

[-]satt80

What happened to the Flynn Effect?

Have you been following the Flynn effect research? It's dead, Jim.

Speaking of Jim Flynn, he has a new book out: Are We Getting Smarter? Rising IQ in the Twenty-First Century. On page 5 it says this:

Nations with data about IQ trends stand at 31. Scandinavian nations had robust gains but these peaked about 1990 and since then, may have gone into mild decline. Several other nations show persistent gains. Americans are still gaining at their historic rate of 0.30 points per year (WAIS 1995-2006; WISC 1989-2002). British children were a bit below that on Raven's from 1980 to 2008, but their current rate of gain is higher than in the earlier period from 1943 to 1980. German adults were still making vocabulary gains in 2007 at a slightly higher rate than US adults. South Korean children gained at double the US rate between 1989 and 2002 (Emanuelsson, Reuterberg, & Svensson, 1993; Flynn, 2009a, 2009b; Pietschnig, Voracek, & Formann, 2010; Schneider, 2006; Sundet, Barlaug, & Torjussen, 2004; Teasdale & Owen, 1989, 2000; te Nijenhuis, 2011; te Nijenhuis et al., 2008).

The US & UK generate something like 28% of the world's research papers; the US is the biggest source of papers and the UK the third biggest. In between is China, about which Flynn says (p. 64):

China's mean IQ is already at least the equal of developed western nations and her high rate of growth appears unstoppable.

However, his recent data for China only cover 5-6-year-olds, who apparently gained 0.2 points a year between 1984 & 2006 on the WPPSI. All in all, though, it seems like the 3 most dominant countries for science papers (accounting for about 38% of global output between them) still have a Flynn effect.

Also, here's something tangential but nonetheless interesting that I spotted:

  • China's gone from middle of the pack to a leader in publishing papers over the last 20 years
  • science in China has a serious plagiarism problem
  • scroll back up to that first graph: plagiarism grows from basically nothing c. 1991 to dozens of instances in the last few years

(Maybe the PNAS paper comments on this coincidence of timing, I haven't read it yet.)

[Belated edit to fix "gows" typo.]

Without looking at the details: I regard Scandinavian countries as the 'peak' or ceiling for the US, since they have invested heavily in health and education in ways the US hasn't, and which we can see in other metrics like height and longevity. (The specifics don't matter here, I think, whether the US is being dragged down by minorities or this is intrinsic to wealth inequality or whatever.)

Moving on to the US: his use of averages is interesting, but this doesn't tell me too much - 2006 was a while ago, and mightn't there be falls or plateaus? And on what base is this increase coming, and how is it distributed? If the Flynn effect is operating only in the lower IQ ranges, as has long been suggested (since factors like better nutrition would then handily account for it), it has no real relevance to questions of cutting-edge science, which is done pretty much solely by people in the >120 IQ range.

The British example really confuses me, because I was under the distinct impression that one line of Flynn's research has been on how the British Flynn effect has stopped and reversed (specifically, "Requiem for nutrition as the cause of IQ gains: Raven's gains in Britain 1938–2008", Flynn 2009).

China is an interesting example and a potential counter-example, but my basic question is this: the age frontier has been increasing over the 20th century, the century in which another very smart East Asian country grew from poverty to one of the richest nations around and invested commensurate amounts into R&D, winning 16 Nobels (including non-STEM). I mean Japan, of course.

(Chinese plagiarism is a serious concern, as are general cultural critiques of 'unoriginality', but before I rested any claims on that, I'd want to know whether there were similar issues in America or Europe or Japan especially - whether there really is a difference in kind, or just a difference in stage of development. Plagiarism/unoriginality may just be teething problems, if you will.)

So if Japan's own development & Flynn effect did not - apparently - reverse the age frontier, why do you expect China to reverse it?

[-]satt20

Moving on to the US: his use of averages is interesting, but this doesn't tell me too much - 2006 was a while ago, and mightn't there be falls or plateaus?

It's always possible, but I'm unaware of subsequent standardization samples showing that.

And on what base is this increase coming, and how is it distributed?

Not sure, I haven't seen US data for that in Flynn's book. I've only seen details about the low vs. high ends of the distribution for Britain.

The British example really confuses me, because I was under the distinct impression that one line of Flynn's research has been on how the British Flynn effect has stopped and reversed (specifically, "Requiem for nutrition as the cause of IQ gains: Raven's gains in Britain 1938–2008", Flynn 2009).

(That paper's the "Flynn, 2009a" cited in the snippet I quoted, incidentally.) Looking more closely into Flynn's full discussion of the British data in the book clarifies things. The data come from two Raven's Matrices tests, the CPM and the SPM. The CPM covered ages 5-11 and the SPM ages 7-15 (pages 45 & 46). On the CPM the Flynn effect was faster from the 1980s onwards but on the SPM it was slower; Flynn detected that this was an age effect, and so compared the CPM & SPM over their common age range of 7-11. Over that range both tests had an accelerating Flynn effect (p. 47), and I assume this is where Flynn's conclusion that the UK has a stronger recent Flynn effect comes from.

However, considering the SPM alone over its entire age range, there was a net slowing down of the Flynn effect (0.15 points per year for 1979-2008 vs. 0.23 points per year for 1938-1979), because at the highest ages tested the Flynn effect went into reverse (a 1.9 point IQ drop from 1979 to 2008 for 14- & 15-year-olds). Presumably it's this data that suggested a reversal of the UK Flynn effect to you, and your inference differed from Flynn's because the two of you looked at different slices of the data. Flynn focused on younger children, who have rising IQs, and you remembered the data from older children, who have falling IQs. (It's a shame there are no recent adult data to resolve which trend the grown-ups are following.)

China is an interesting example and a potential counter-example, but my basic question is this: the age frontier

So if Japan's own development & Flynn effect did not - apparently - reverse the age frontier, why do you expect China to reverse it?

I don't disagree with you about the age frontier. The part of your comment that brought me up short was "Have you been following the Flynn effect research? It's dead, Jim.", because it didn't jibe with what I remembered from Flynn's book. The rest of your comment looked good to me.

As for plagiarism in Chinese science, I suspect it probably is just a teething problem that'll work itself out in a few decades. My intent wasn't to roll out a stock stereotype about Chinese culture being unoriginal but simply to note a sideline the paper might touch upon.

However, considering the SPM alone over its entire age range, there was a net slowing down of the Flynn effect (0.15 points per year for 1979-2008 vs. 0.23 points per year for 1938-1979), because at the highest ages tested the Flynn effect went into reverse (a 1.9 point IQ drop from 1979 to 2008 for 14- & 15-year-olds). Presumably it's this data that suggested a reversal of the UK Flynn effect to you, and your inference differed from Flynn's because the two of you looked at different slices of the data. Flynn focused on younger children, who have rising IQs, and you remembered the data from older children, who have falling IQs. (It's a shame there are no recent adult data to resolve which trend the grown-ups are following.)

Hm, maybe I'm missing something on how the tests interact, but if the older range up to 2008 on the SPM was falling, doesn't that tell you how the adults are going to turn out, simply because they are closer to being adults than their younger counterparts?

(Guess I'll have to read his book eventually.)

[-]satt00

Hm, maybe I'm missing something on how the tests interact, but if the older range up to 2008 on the SPM was falling, doesn't that tell you how the adults are going to turn out, simply because they are closer to being adults than their younger counterparts?

A priori, it does seem like the older kid trend should be more relevant to adults than the younger kid trend.

However, British IQ gains might have a V-shaped relationship with age: solid gains in younger kids, lesser gains (or indeed losses) in teenagers, and a return to higher gains in adulthood. As written that probably sounds like a wilful disregard of Occam's razor, but there is some precedent.

I went back to Flynn's original 1987 article "Massive IQ Gains in 14 Nations" and looked up Great Britain. The only Matrices results seem to be for the SPM, but as well as the familiar 1938-1979 results, there's an adult IQ gain estimate. Flynn got it by comparing a 1940 sample of militiamen (average age 22) at a WW2 training depot to the 15½-year-olds in the 1979 sample, adjusting the gain estimate to partially offset the teenagers' age disadvantage.

Comparing the adult(ish) gains to the child gains reveals something like a V-shaped trend: ages 8-11 gained 0.25 points/year, ages 12-14 gained 0.11 points/year, and the 15½-year-olds outpaced the militiamen by 0.18 points/year. In this case, the older kids' gain rate was no better as an estimate of the (pseudo-)adult rate than the younger kids' rate.

Have you been following the Flynn effect research? It's dead, Jim.

Could you amplify that? It's stopped happening, or it never did?

It's easy to point fingers at a very sick subset of scientific endeavors - biomedical research. The reasons it is messed up and not very productive are myriad. Fake and non-reproducible results that waste everyone's time are one facet of the problem. The big one I observed was that trying to make a useful tool to solve a real problem with the human body is NOT something that the traditional model can handle very well. The human body is so immensely complex. This means that "easy" solutions are not going to work. You can't repair a jet engine by putting sawdust in the engine oil or some other cheap trick, can you? Why would you think a very small molecule that can interact with any one of tens of thousands of proteins in an unpredictable manner could fix anything either? (or a beam of radiation, or chopping out an entire sub-system and replacing it with a shoddy substitute made by cannibalizing something else, or delivering crude electric shocks to a huge region. I've just named nearly every trick in the arsenal)

Most biomedical research is slanted towards this "cheap trick" solution, however. The reason is that the model encourages it. University research teams usually consist of a principal investigator and a small cadre of graduate students, with a relatively small budget. They are under a deadline to come up with something-anything useful within a few years, and the failures don't receive tenure and are fired. Pharmaceutical research teams also want a quick and cheap solution, generally, for a similar reason. Most of the low-hanging fruit - small-molecule drugs that are safe and effective - has already been plucked, and in any case there is a limit to the problems in biological systems that can actually be fixed with small molecules. If a complex machine is broken, you usually need to shut it off and replace major components. You are not going to be able to spray some magic oil and fix the fault.

For example, how might you plausibly cure cancer? Well, what do cancer cells share in common? Markers on the outside of the cells? Nope - if there were, the immune system would usually detect them. Are the cells always making some foreign protein? Nope, same problem. All tumors share mutated genes, and thus have faulty mRNAs present in the cells that you can detect.

So how might you exploit this? Somehow you have to build a tool that can get into cells near the tumor, detect the ones with these faulty mRNAs, and kill them. Also, this tool needs to not affect healthy cells.

If you break down the components of the tool, you realize it would have to be quite complex, with many sub-elements that have to be developed. You cannot solve this problem with 10 people and a few million dollars. You probably need many interrelated teams, all of whom are tasked with developing separate components of the tool. (with prizes if they succeed, and multiple teams working on each component using a different method to minimize risks)

No one is going to magically publish a working paper in Nature tomorrow where they have succeeded in such an effort overnight. Yet, this is basically what the current system expects. Somehow someone is going to cure cancer tomorrow without there being an actual integrated plan, with the billions of dollars in resources needed, and a sound game plan that minimizes risk and rewards individual successes.

Professors I have pointed this out to say that no central agency can possibly "know" what a successful cancer cure might look like. The current system just funds anyone who wants to try anything, assuming they pass review and have the right credentials. Thus a large variety of things are tried. I don't see it. I don't think there is a valid solution to cancer that can be found by a small team just trying things with a million or two dollars of equipment, supplies, and personnel.

Growing replacement organs is a similar endeavor. Small teams have managed to show that it is viable - but they cannot actually solve the serious problems, because they lack the resources to go about it in a systematic way that is likely to succeed. While Wake Forest demonstrated years ago that they can make a small heart that beats, there isn't a huge team of thousands systematically attacking each element of the problem that has to be solved to make full-scale replacement hearts.

One final note: this ultimately points to a gross misapplication of resources. Our society spends billions to kill a few Muslims who MIGHT kill some people violently. It spends billions to incarcerate for life millions of people who individually MIGHT commit some murders. It spends billions on nursing homes and end-of-life care to statistically extend the lives of millions by a matter of months.

Yet real solutions to the problems that kill nearly everyone, for certain, are apparently not worth the money it would take to pursue them in a systematic way.

The reason for this is a lack of rationality. Human beings emotionally fear extremely rare causes of death much more than extremely likely, "natural" causes. They fear the idea of a few disgruntled Muslims, or a criminal who was let out of prison, murdering them far more than they fear their heart suddenly failing or their tissues developing a tumor when they are old.

The institution of medicine, defined as "understanding the human body well enough to, from basic principles, directly and intentionally repair diagnosed faults", only barely exists, and it is called surgery.

The historic division between medicine (as descended from folk remedies and alchemy) and surgery (as descended from the unsubtle craft of closing wounds and amputating limbs) is illustrative here. Medicine, by definition, is holistic. It descends from folk remedies, alchemy, and enchanted unguents. It has only recently and intermittently shown the slightest interest in drug mechanisms, and even that only to the extent that the analysis of drug mechanisms facilitates the development of new and profitable drugs. Medicine has never been about anything /but/ "adding small molecules to the oil", though it has been far more prestigious than surgery for about a century, since the late 19th century discoveries of narcotics, antibiotics, and vaccines. [Prior to this, surgeons were considered far more reliable within their area of expertise, although neither had the degree of professionalization and societal status that they enjoy today.] You make the argument, and I'm inclined to agree, that medicine may very well be playing itself out - that the model that grabbed all the low-hanging fruit there is more or less obsolete.

The future of medicine isn't medicine at all. It's nano-surgery. Though I suspect there will be a big turf war between medical professionals and surgical professionals as the medical professionals seek to redefine themselves as the ones implementing the procedures that actually work.

Meh, another buzzword. I actually don't think we'll see nanosurgery for a very long time, and we should be able to solve the problem of "death" many generations of tech before we can do nano-surgery.

Think about what you actually need to do this. You need a small robot, composed of non-biological parts at the nanoscale. Presumably, this would be diamondoid components such as motors, gears, bearings, etc as well as internal power storage, propulsion, sensors, and so on. The reason for non-biological parts is that biological parts are too floppy and unpredictable and are too difficult to rationally engineer into a working machine.

Anyways, this machine is very precisely made, probably manufactured in a perfect vacuum at low temperatures. Putting it into a dirty liquid environment will require many generations of engineering past the first generation of nanomachinery that can only function in a perfect vacuum at low temperatures. And it has to deal with power and communication issues.

Now, how does this machine actually repair anything? Perhaps it can clean up plaques in the arteries, but how does it fix the faulty DNA in damaged skin cells that causes the skin to sag with age? How does it enter a living cell without damaging it? How does it operate inside a living cell without getting shoved around away from where it needs to be? How do its sensors work in such a chaotic environment?

I'm not saying it can't be done. In fact, I am pretty sure it can be done. I'm saying that this is a VERY VERY hard engineering problem, one that would require inconceivable amounts of effort. Using modern techniques this problem may in fact be so complex to solve that even if we had the information about biology and the nanoscale needed to even start on this project, it might be infeasible with modern resources.

If you have these machines, you have a machine that can create other nanomachines, with atomically precise components. Your machine probably needs a vacuum and low temperatures, as before. Well, that machine can probably make variants of itself that are far simpler to design than a biologically compatible repair robot - say, a variant that, instead of performing additive manufacturing at the nanoscale, tears down an existing object at the nanoscale and informs the control machinery about the pattern it finds.

Anyways, long story short: with a lot less effort, the same technology needed for nanosurgery to be possible could deconstruct preserved human brains and build computers powerful enough to simulate these brains accurately and at high speed. This solves the problem of "death" quite neatly: rather than trying to patch up your decaying mass of biological tissue with nanosurgery, you get yourself preserved and converted into a computer simulation that does not decay at all.

I think you may have misunderstood me. By "nanosurgery" I meant not solely Drexlerian medical nanobots (though I wasn't ruling them out). Any drug whose design deliberately and intentionally causes specific, deliberate, and intentional changes to cell-level and molecular-level components of the human body, deliberately and consciously designed with a deep knowledge of the protein structures and cellular metabolic pathways involved, qualifies as nanosurgery, by my definition.

I contrast nanosurgery: deliberate, intentional action controlling the activity or structure of cellular components - with medicine: the application of small molecules to the human metabolism to create a global, holistic effect with incomplete or nonexistent knowledge of the specific functional mechanisms. Surgery's salient characteristic is that it is intentional and deliberate manipulation to repair functionality. Medicine's salient characteristic is that it is a mapping of cause [primarily drug administration] to effect [changes in reported symptoms], with significantly reduced emphasis on the functional chain of causation between the two. As you said above, medicine is defined as "cheap tricks". That's what it does. That's what it's always been. When you're doing something intentional to a specific piece of a human to modify or repair its functionality, that's surgery, whether it's done at the cellular or molecular level (nanosurgery) or at the macroscopic level (conventional surgery).

Prior to about 20 years ago, the vast majority of drugs were developed as medicine. Nowadays, more and more attempts at drug design are at least partially attempts to engineer tools for nanosurgery, per this definition. This is a good thing, and I see the trend continuing. If Drexlerian medical nanobots are possible at all, they would represent the logical endpoint of this trend, but I agree they represent an incredible engineering challenge and they may or may not end up being an economical technology for fixing broken human bodies.

Again, this is one of those approaches that sounds good at a conference, but when you actually sit there and think about it rationally, it shows its flaws.

Even if you know exactly what pathway to hit, a small molecule by definition will get everywhere and gum up the works for many, many other systems in the body. It's almost impossible not to. Sure, there's a tiny solution space of small molecules that are safe enough to use despite this, but even then you're going to have side effects and you still have not fixed anything. The reason the cells are giving up and failing as a person ages is that their genetic code has reached a stage that calls for this. We're still teasing out the exact regulatory mechanisms, but the evidence for this is overwhelming.

No small molecule can fix this problem. Say one of the side effects of this end of life regulatory status is that some cells have intracellular calcium levels that are too high, and another set has them too low. Tell me a small molecule exists out of the billions of possibilities that can fix this.

DNA patching and code update is something that would basically require Drexlerian nanorobotics, subject to the issues above.

Methods to "roll back" cells to their previous developmental states, then re-differentiate them into functional components for a laboratory-grown replacement organ, actually fix this problem.

For some reason, most of the resources (funding and people) are not pouring into rushing Drexlerian nanorobotics or replacement organs to the prototype stage.

Great analysis. A lot of people think that science follows an inevitable and predetermined progression of truths - a "tech tree" determined by the cosmos - but that's clearly not the case, especially in the field of medicine.

Sometimes I rant about how computer vision's fatal flaw is that it is intellectually descended from Computer Science, and so the field looks for results conceptually similar to the great achievements of CS - fast algorithms, proofs of convergence, complexity bounds, fully general frameworks, etc. But what people should really be doing is studying images - heading out into the world and documenting the visual structures and patterns they observe.

They are under a deadline to come up with something-anything useful within a few years

For better or worse, being useful isn't something that's important for academic biology research. If you discover a new biochemical pathway, you get published whether or not the knowledge helps anybody to do something useful.

No one is going to magically publish a working paper in Nature tomorrow where they have succeeded in such an effort overnight. Yet, this is basically what the current system expects.

I don't see why someone who developed something that works as one of the components of the tool wouldn't get published in Nature.

Our society spends billions to kill a few Muslims who MIGHT kill some people violently.

That's a very naive way to look at things. Killing a few Muslims who MIGHT kill some people violently isn't the only goal of the various wars. As long as you pretend it is things are hard to understand.

Most biomedical research is slanted towards this "cheap trick" solution, however. The reason is that the model encourages it.

I'm pretty sure this also applies to machine learning research. See this.

[-]TimS-10

I totally agree that basic research is underfunded. In terms of constructive criticism, the issue of defense spending is isomorphic to your war-on-terror point, but is much less controversial. I might edit the post to remove this just to avoid a controversy different from your main point.

You missed the boat completely. Not modding down because this is an easy cognitive error to make, and I just hit you with a wall of text that does need better editing.

I just said that the model of "basic research" is WRONG. You can't throw billions at individual groups, each eating away a tiny piece of the puzzle doing basic research and expect to get a working device that fixes the real problems.

You'll get rafts of "papers" that each try to inform the world about some tiny element of how things work, but fail miserably in their mission for a bunch of reasons.

Instead you need targeted, GOAL-oriented research, and a game plan to win. When groups learn things, they need to update a wiki or some other information-management tool with what they have found out and how certain they are that they are correct - not hide their actual discovery in a huge jargon-laden paper with 50 references at the end.

[-]TimS10

Fair enough - you don't believe in research that isn't directed at a particular problem (aka basic research). That's totally independent of your criticism of "cheap trick" biomedical research - which is a structural function of the fact that companies who make their money providing "cheap tricks" are the ones doing most of the funding. And I stand by my assertion that your references to other irrational funding priorities are a massive distraction from your point.

In general, I think we are a lot farther from solving the problem than you seem to acknowledge. It isn't that someone knows how to cure/fix cancer but isn't being funded. It's that Science as a whole has no idea what might work.

The method I described WILL work. The laws of physics say it will. Small scale experiments show it working. It isn't that complicated to understand. Bad mRNA present = cell dies. All tumors, no matter what, have bad mRNAs, wherever they happen to be found in the body.

But it has to be developed and refined, with huge resources put into each element of the problem.

Here, specifically, is the difference between my proposed method and the current 'state of the art'. OK, so the NIH holds a big meeting. They draw a massive flow chart. Teams 1, 2, and 3 - your expertise is in immunology. Find a coating that will evade the immune system and can encapsulate a large enough device. Million-dollar prize to the first team that succeeds. Here are the specific criteria for success.

Team 4 - for some reason, healthy cells are dying when too many copies of the prototype device are injected. A million dollars if you can find a solution to this problem within 6 months.

Team 5 - we need alternate chemotherapy agents to attach to this device.

Team 6 - we need a manufacturing method.

Once a goal is identified and a team is assigned, they are allocated resources within a week. Rather than penny-pinching over individual awards of funds, the overall effort has a huge budget, and equipment is purchased or loaned between groups as needed. The teams would work in massive integrated laboratories located across the country, with multiple teams in each laboratory for cross-trading of skills and ideas.

And so on and so forth. The current model is: "OK, so you want to research whether near-infrared lasers will work on tumor cells. You have this lengthy list of paper credentials, and lasers and cancer sound like buzzwords we like to hear. Also, your buddies all rubber-stamped your idea during review. Here's your funds, hope to see a paper in 2 years"...

No one ever considers "how likely is this actually going to be better than using the high-frequency radiation we already have? How much time is this really going to buy a patient even if this is a better method?".

The fact is, I've looked at the list of all ongoing research at several major institutions, and they are usually nearly all projects of similarly questionable long-term utility. Sure, maybe a miracle will happen and someone will discover an easy and cheap method that works incredibly well that no one ever thought would work.

But a molecular machine, composed of mostly organic protein based parts, that detects bad mRNAs and kills the cell is an idea that WILL work. It DOES work in rats. More importantly, it is a method that can potentially hunt down tumor cells of any type, no matter where they are hiding, no matter how many metastases are present.

Anyone using rational thought would realize that this is an idea that actually is nearly certain to work (well, in the long run, not saying a big research project might not hit a few showstoppers along the way).

And there is money going to this idea - but it's having to compete with 1000 other methods that don't have the potential to actually kill every tumor cell in a patient and cure them.

No one ever considers "how likely is this actually going to be better than using the high-frequency radiation we already have? How much time is this really going to buy a patient even if this is a better method?".

You can't know such things beforehand. That's why they call it research. Look at a central technique of molecular biology like the usage of monoclonal antibodies.

The funding to develop the technique came from cancer research. People hoped it would be a good way to kill cancer cells. They didn't have the success with cancer cells that they hoped for. On the other hand, molecular biology would be a lot less productive if we didn't have monoclonal antibodies.

Doing basic research with near infrared lasers and cancer is similar.

And there is money going to this idea - but it's having to compete with 1000 other methods that don't have the potential to actually kill every tumor cell in a patient and cure them.

That's false. Even today some cancer patients get cured from their cancer by taking big pharma drugs.

But a molecular machine, composed of mostly organic protein based parts, that detects bad mRNAs and kills the cell is an idea that WILL work. It DOES work in rats.

If there's enough funding for such an idea to make it work in rats in the current system, doesn't that negate your central point? If people in academia make it work in rats, taking it from working in rats to working in humans is the job of biotech or big pharma. If big pharma thinks that such an idea is really promising, they could invest billions into the idea and attack the problem systematically.

[-][anonymous]250

The first graph is misleading. The proper metric is the proportion of fraud/error/duplication among all articles, or alternatively the proportion of scientists who have been accused of fraud/error/duplication at least once.

The second graph is better, except it's unclear if the vertical axis means, e.g., 0.005% or 0.5%.

Neither graph indicates its origin, the population of articles studied, or anything else I could use to evaluate them.

The vertical axis on the second graph specifies that it's in percent, on the label, so that would at least be 0.005%.

Neither graph indicates its origin

Isn't that the "new paper in PNAS" linked in the post?

[-][anonymous]00

How am I supposed to know where the graphs come from? The paper is conveniently behind a paywall. The abstract does say something about

2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012

but this isn't the whole story, since the data on the graph goes back to 1975.

[-]cata230

Looking at the article and the abstract, it seems equally plausible that we have just gotten a lot better recently at detecting fraud and plagiarism and error. Is there any reason that the study excludes that possibility?

[-][anonymous]70

This can be tested: are old papers being detected as fraud?

[-]satt50

Wish I'd thought to ask this question myself. The paper doesn't have the data to answer this, but with a little work one can search PubMed for retractions. Interestingly, I get 2,418 hits while the paper mentions only 2,047.

I grabbed the 441 retraction records for 2011 (in MEDLINE format for easier processing) and slapped together a script to extract the references. There were 458 (some retractions are for multiple publications) and my script pulled 415 publication years from them. Some papers had no year because they were referenced only by DOI and the DOI didn't include a year; some papers had two publication years because they had a formal publication date and a DOI with a year (presumably these papers are the ones that appear online before getting a formal volume number & page allocation).
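
[A minimal sketch of the kind of tally script described above - not necessarily what was actually run. It assumes MEDLINE-format records in which "ROF" ("Retraction of") lines carry the citations of the retracted papers; the field name, the filename, and the pull-a-year-with-a-regex approach are all assumptions:]

    # Sketch: tally publication years cited by MEDLINE-format retraction records.
    # Assumes each "ROF" ("Retraction of") line holds one citation of a retracted
    # paper; a 4-digit year is pulled from it with a regex. References carrying
    # no year (e.g. a bare DOI) are simply skipped, as described above.
    import re
    from collections import Counter

    years = Counter()
    with open("retractions_2011.medline") as f:  # hypothetical filename
        for line in f:
            if line.startswith("ROF"):
                m = re.search(r"\b(?:19|20)\d{2}\b", line)
                if m:
                    years[m.group(0)] += 1

    for year, count in sorted(years.items()):
        print(year, count)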

A tally of the original publication years for those 415 retractions:

  • 1998: 2
  • 1999: 8
  • 2000: 7
  • 2001: 13
  • 2002: 20
  • 2003: 11
  • 2004: 13
  • 2005: 33
  • 2006: 33
  • 2007: 31
  • 2008: 32
  • 2009: 66
  • 2010: 109
  • 2011: 37

Looks like really old papers aren't being retracted, which surprises me! I would've expected at least a handful from the 1980s, but last year's retractions stop dead at 1997. Somehow I doubt the shitty pre-1998 papers have all already been retracted.

[Edited to fix list and 2003 figure.]

Other alternative possibilities:

  • Scientists have gotten worse at hiding their fraud, or at judging when to be fraudulent.
  • The total number of scientists and publications increases over time, so the total amount of fraud also increases (the original paper is paywalled, but the abstract doesn't say that they correct for that)
  • The proportion of papers published each year that enters PubMed rises with time. PubMed was created in 1996. I don't know when the database behind it was created, but probably later than 1950. Then the older articles in the database are the ones that were best remembered. Insignificant articles, and probably ones that were retracted or shown to be wrong, simply aren't in the database.
  • Perhaps there is a way for articles to be removed from PubMed if retracted (?)
  • Many retractions happen after article submission but before publication (because the authors or reviewers notice issues), and so before they enter PubMed. Some fraud thus goes unreported in PubMed.
  • The Internet's dissemination and archival of information makes it easier to discover fraud and less likely the discovery will be hushed up or forgotten

Even if that's true, creating the perception that there are more people who cheat is likely to encourage more people to cheat.

Is there a reason to believe we've got 3-10x better at detecting fraud in the past decade?

Given enough eyeballs, all bugs are shallow.

Not literally true, but I wouldn't be surprised that the expansion of access to electronic articles, and the expansion of people with access to see them, has resulted in a 3-10x greater read rate for the important articles.

Well, is there a reason to believe scientists have become 3x-10x more fraudulent in the past decade?

Possible reasons for a scientist to be fraudulent are glory and fierce competition (which is usually for jobs and grants). Both factors existed prominently in the past as well as nowadays. On the other hand, as buybuydandavis points out, there are good reasons to believe that we've become better at spotting suspect scientific articles.

Possible reasons for a scientist to be fraudulent are glory and fierce competition (which is usually for jobs and grants).

I like to include 'money' in lists regarding motives for fraud too. There is plenty of that floating about in (certain kinds of) science.

Let's go back and look at the source article one more time: "PubMed references more than 25 million articles relating primarily to biomedical research published since the 1940s. A comprehensive search of the PubMed database in May 2012 identified 2,047 retracted articles, with the earliest retracted article published in 1973 and retracted in 1977."

So over 99.99% of articles aren't retracted. Let's say the retracted ones are the tip of the iceberg and the real situation is ten times worse. That makes it 99.9% accurate.

Aside from the sensationalism, these results are a stunning and unequivocal endorsement that the scientific system works.

Aside from the sensationalism, these results are a stunning and unequivocal endorsement that the scientific system works.

2047 articles retracted (for any reason) out of 25 million = 0.008%. (Edit: I mistyped the figure as 5 million instead of 25 million, but the percentage was correct.) 21.3% of that was retracted due to error, i.e. 0.0017% of all published papers later admitted an error that made the paper worthless.
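
[The arithmetic spelled out, as a quick sketch using the figures quoted above:]

    # Quick check of the percentages above, using the quoted figures.
    total_articles = 25_000_000   # PubMed articles since the 1940s
    retracted      = 2_047        # retracted articles found in May 2012
    error_share    = 0.213        # 21.3% of retractions attributed to error

    retraction_rate = retracted / total_articles      # ~0.000082, i.e. ~0.008%
    error_rate      = retraction_rate * error_share   # ~0.000017, i.e. ~0.0017%
    print(f"retracted: {retraction_rate:.4%}; retracted for error: {error_rate:.4%}")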

That looks like massive underreporting. I cannot believe that there weren't several orders of magnitude more retraction-worthy cases which were not retracted. In fact, this is so massive that I would characterize it as an unequivocal endorsement that the scientific system is broken, because nobody is willing to admit to their (honest, non-fraudulent) errors, and few publish failed reproductions.

Analyzing the few papers that were retracted will surely have big confounding factors, because it's a preselected subset of papers that is already abnormal in some way (why were they retracted? Did a third party discover the fraud? Is this selection for the least competent fraudsters?)

What rate of retraction would provide evidence that the system is working?

I don't have good data on the subject, and I'm not well-calibrated. But my expectation of the rate of severe errors, made by well-meaning people in a complex endeavor, is for at least 1% retractions (nearly three orders of magnitude above the current 0.0017%). That would still be less than one lifetime retraction per scientist on average.

And another thought: if scientists are really so thorough as to achieve a very low rate of major errors, they are probably overspending. It would be more efficient (fewer false negatives in self-vetting and less time spent self-vetting) to be bolder in publishing and rely more on vetting and reproduction by others.

Of course, the decline of science-as-an-institution isn't just marked by overt cheating. A safer and arguably more prevalent method is to game the publication system (i.e., by aligning your beliefs and professional contacts with powerful factions of reviewers), crank out many unrevealing publications, and make small contributions to hot fields rather than large ones in fields that are less likely to get you tenure.

Overall we'd see a lower signal-to-noise ratio in science, but this is hard to quantify. It's tough to call a discipline diseased until decades afterwards.

Robbing the social commons has consequences.

Robbing any commons has consequences, otherwise it wouldn't be a commons problem.

Robbing things in general has consequences - but it's harder to detect the robbery of social trust than the robbery of a sofa.

Maybe I could fix this problem by sneaking into buildings, removing the sofas, and then incinerating them. That way, finding that a sofa has gone missing would then be weaker evidence that it has been stolen and stronger evidence that it has been incinerated. That would make it increasingly difficult to detect sofa robbery, hopefully putting it on par with social trust robbery detection.

That would make things worse, not better.

In other words, mankind would put more resources into sofa production and detection of any variety of theft, leaving less to put into paperclip production. Do you know how they attach upholstery to couches? Staples.

Actually, it would be evidence that your sofa has been stolen and you have no chance of getting it back.

Holy shit, even today only 1 in 10,000 articles are retracted for fraud.

I am assuming these retracted articles are a tiny fraction of the actual number/% of articles with fraud, and such a tiny fraction as to not give reliable evidence for the proportion increasing; so the graph's data isn't particularly useful.

Re: talking about problems in the biochemistry field in general:

I'm sure that there are lots of problems, and I don't mean to invalidate anyone's points, but on the bright side, genetic sequencing has been getting faster and cheaper FASTER than Moore's law predicts. http://www.forbes.com/sites/techonomy/2012/01/12/dna-sequencing-is-now-improving-faster-than-moores-law/

We're ALMOST to the point where we do full-genome sequencing on a tumor biopsy to adjust a patient's chemo drugs. The results unfortunately haven't been reproducible yet, so it's not quite ready for clinical practice, but by golly we're close. It currently costs about $4,000 per genome, and we're less than 10 years after the Human Genome Project, which took 13 years and 3 billion dollars for a single genome. One company claims its soon-to-be-released machine will do it in 4-5 days for $900.
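
[A rough back-of-envelope for the "faster than Moore's law" claim, using the figures above; the halving-every-two-years pace used for comparison is my assumption:]

    # Back-of-envelope: per-genome cost drop vs. a Moore's-law pace (cost halving every ~2 years).
    hgp_cost = 3_000_000_000   # Human Genome Project: ~$3 billion for one genome
    cost_now = 4_000           # ~$4,000 per genome today (figure quoted above)
    years    = 10              # "less than 10 years" after the HGP

    actual_improvement = hgp_cost / cost_now   # ~750,000x cheaper
    moores_law_pace    = 2 ** (years / 2)      # ~32x if cost merely halved every 2 years
    print(f"actual: {actual_improvement:,.0f}x vs Moore's-law pace: {moores_law_pace:.0f}x")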

That's mostly engineering, not science.

Fair point, though the line's pretty blurry in "biotechnology". (Typo: I meant "biotechnology" instead of "biochemistry".) What I mean is that people are complaining that the field is doing a lot of "quick-fix" solutions to problems, and I'm saying - "hey, some of those 'quick fixes' look pretty promising."

Neuroskeptic's take on this is interesting, as usual.

Won't reality eventually sort this out?

Essentially what is being said here is that "the scientific establishment in the West (mostly the USA) is becoming dysfunctional. If the current trend continues, enough science will be wrong or fraudulent that no forward progress is made at all."

However, science isn't just an abstract idea with intangible moral rules. If scientists fake results on a large scale, they will cease discovering useful new ideas or creating anything that is objectively better than what Western society currently has. This will have consequences as governed by the one entity that can't be befuddled - the actual universe. Western machinery and medicine will become relatively less efficient compared to competitors, because they do not keep getting improved. Over a long enough period of time, this will be deleterious to the entire civilization.

As long as there are competing civilizations (you can divide the world up into other sub-groups, although of course there are many inter-relationships) such as Eastern Europe, Asia, South America, etc., then over the long term (centuries) this will simply give these competing civilizations an opportunity to surge ahead. Overall global progress does not stop.

A broader theme here: I'm saying that from the very dawn of rational thought and the scientific method, combined with a method to record the progress made and prevent information loss (the printing press), the overall trajectory is unstoppable. Various human groups competing among each other will continue to increase their control over the environment, ultimately resulting in the development of tools that allow true mastery (AIs, nano-machinery that can self-replicate and quickly pattern everything, etc.).

It's sort of a popular idea to talk about ways that this could somehow not happen, but short of a large scale erasure of existing information (aka nuclear weapons or very large meteors erasing the current "state" of the global information environment) it's hard to see how anything else could ultimately happen.