I'm woefully underinformed on this topic, but this doesn't seem good at all:

ROTTERDAM, THE NETHERLANDS—Locked up in the bowels of the medical faculty building here and accessible to only a handful of scientists lies a man-made flu virus that could change world history if it were ever set free.

The virus is an H5N1 avian influenza strain that has been genetically altered and is now easily transmissible between ferrets, the animals that most closely mimic the human response to flu. Scientists believe it's likely that the pathogen, if it emerged in nature or were released, would trigger an influenza pandemic, quite possibly with many millions of deaths.

In a 17th-floor office in the same building, virologist Ron Fouchier of Erasmus Medical Center calmly explains why his team created what he says is "probably one of the most dangerous viruses you can make"—and why he wants to publish a paper describing how they did it. Fouchier is also bracing for a media storm. After he talked to ScienceInsider yesterday, he had an appointment with an institutional press officer to chart a communication strategy.

Fouchier's paper is one of two studies that have triggered an intense debate about the limits of scientific freedom and that could portend changes in the way U.S. researchers handle so-called dual-use research: studies that have a potential public health benefit but could also be useful for nefarious purposes like biowarfare or bioterrorism.

The other study—also on H5N1, and with comparable results—was done by a team led by virologist Yoshihiro Kawaoka at the University of Wisconsin, Madison, and the University of Tokyo, several scientists told ScienceInsider. (Kawaoka did not respond to interview requests.) Both studies have been submitted for publication, and both are currently under review by the U.S. National Science Advisory Board for Biosecurity (NSABB), which on a few previous occasions has been asked by scientists or journals to review papers that caused worries.

NSABB chair Paul Keim, a microbial geneticist, says he cannot discuss specific studies but confirms that the board has "worked very hard and very intensely for several weeks on studies about H5N1 transmissibility in mammals." The group plans to issue a public statement soon, says Keim, and is likely to issue additional recommendations about this type of research. "We'll have a lot to say," he says.

I feel as though I ought to provide more commentary instead of just an article dump, but I feel even more strongly that anything I have to say would be obvious or stupid or both, so I'll leave it at that.


This is extremely unlikely to be an existential risk, and working with such viruses helps a lot in understanding how to combat similar viruses. It is very important that they are careful the virus doesn't get out of the lab, but the level of precautions generally taken for this sort of work is extremely high.

The important thing about this research is that it shifts our estimate of the likelihood of H5N1 human-to-human transmission upward (for all intents and purposes, ferrets can be considered little humans when talking about flu). Some previous research (theoretical, not experimental) indicated that it was nearly impossible for this to occur, requiring multiple "high energy" (i.e. low-probability) mutations. These guys seem to have done it relatively easily, which is pretty horrifying.
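(A rough illustration of why the theoretical work made this look so unlikely, using numbers I am making up purely for the sake of the arithmetic: if each required mutation arises with per-replication probability around $10^{-5}$, then five of them appearing simultaneously in one genome has probability on the order of $(10^{-5})^5 = 10^{-25}$. Serial passage through ferrets changes the picture, because selection can fix each advantageous mutation one at a time, so the chance of eventually accumulating all of them is vastly higher than the naive product. The specific figures are illustrative assumptions, not numbers from either paper.)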

It is indeed unlikely that the particular virus bred by the Dutch lab will get out (it's a BSL-3 lab), but that's not the danger being discussed by NSABB (and many others). The concern is that the knowledge of how to produce such a virus, if published, could be used for bioterrorism.

Whether doing or publishing this type of research is worth it from an x-risk perspective is a tough issue, but that such a virus could be produced at all is indisputably bad news.

The concern is that the knowledge of how to produce such a virus, if published, could be used for bioterrorism.

My concern is that publishing that it was successfully done significantly narrows the search space for people looking to commit bioterrorism, even if they don't publish how it is done.

That's an excellent point!


Security through obscurity is rarely a good plan. If this is possible for the good guys to do, then it's possible for the bad guys to do. Bear in mind that even with the information on how to make this virus human-transmissible available, it is probably rather difficult to reproduce without the appropriate facilities, so that rules out random lunatics, and realistically it's simply not sensible for anyone else to do so.

Bear in mind that even with the information on how to make this virus human-transmissible available, it is probably rather difficult to reproduce without the appropriate facilities, so that rules out random lunatics, and realistically it's simply not sensible for anyone else to do so.

I don't find this terribly comforting, given that I don't assume that everyone with an interest in biological warfare lacks the funding to create the appropriate facilities. What I do find comforting is the strong suspicion that neither the researchers nor the advisory board want to make "information on how to make this virus human-transmissible" readily available.

I doubt that the intentions or security measures of the researchers are being given a fair shake when there's such a Hollywood-esque story to be written.

Bear in mind that even with the information on how to make this virus human-transmissible available, it is probably rather difficult to reproduce without the appropriate facilities, so that rules out random lunatics, and realistically it's simply not sensible for anyone else to do so.

What about this guy?

Bruce Edwards Ivins was an American microbiologist, vaccinologist, senior biodefense researcher at the United States Army Medical Research Institute of Infectious Diseases (...) and the key suspect in the 2001 anthrax attacks.


Mmm. Well, I suppose you can never rule out a committed lunatic having access to the actual pathogen and choosing to spread it, although Ivins (assuming we accept he was the guilty party) deliberately targeted people; this illness wouldn't do that, it would spread very quickly, killing a good proportion of those infected. I'm not saying that there aren't people who might want to do that, but the intersection of people who want to do that and people with access to it is hopefully rather small; after all, the anthrax attack was an exception.

Apparently the airborne virus was not lethal, and the lethal version only killed when injected in high doses into the ferrets. The paranoid might imagine this is an effort to deceive so as to "put the genie back in the bottle," but given the priors on the claimed findings, this looks like it was a false alarm.

Even with this added evidence, Keim suspects that the NSABB would have voted against publishing the papers if they had returned in their original form. As it is, they had been revised. They do not contain any new data, but they have been rephrased to clarify some ambiguities. Fouchier said that he was granted more space by Science to explain the public health benefits of the research. Critically, while the original paper focused on the virus's transmissibility, the new version clarifies the fact that the airborne mutants do not kill the ferrets they infect. "There was a misconception within the NSABB about the lethality of our virus," says Fouchier. "The information was in the original but it was not as clear or explicit as it could have been." Keim joked, "I suspect Ron will write papers differently for the rest of his life."

But that information also took its time to emerge into the public sphere. For months, the media and the public were labouring under the impression that the airborne mutants were just as deadly as their wild counterparts. This mistaken belief was only corrected in March. Other details may also be incorrect. It was widely reported that Fouchier's virus is five mutations away from its wild cousins, but at the end of the press conference, he noted that he never said that. He only described a "handful" of mutations. Presumably, the actual details will emerge when the paper does.

This raises an obvious question: why didn't Fouchier, or anyone else in the know, correct the misconceptions as they were spreading? Fouchier says that he did not feel able to comment on the details while the paper was under review by Science, and then by the NSABB. "We have to be really careful about how we communicate the details of the studies," he says. "Only when I had permission to talk about this, and was encouraged to do so by the journal and the NIH, did we do so."


Link threads usually have [link] in their title as in:

[LINK] Normal title here.

Responsible virologists should publish fake research purportedly showing how to create existential-risk viruses, so as to hide the papers which give genuine insight into how to actually make such viruses behind a body of lies.

It strikes me that having scientists deliberately lie as policy would have terribly, terribly bad secondary effects - enough people distrust sound science as it is. Therefore, I very much hope you were being sarcastic or something.

I was being serious, although if (and I have no reason to doubt) Logos01 is correct then my idea is probably a bad one.

enough people distrust sound science as it is.

True but not enough people distrust unsound science or political advocates who pose as scientists.

True but not enough people distrust unsound science or political advocates who pose as scientists.

I think there's some sort of reasoning error in that last bit, but I can't think of how to express it. My counter-argument would be something about how the benefit of reducing trust in bad science by having scientists lie would be less than the harm caused by the increased distrust in good science.

This kind of spoofing is supposedly already done to make genuine nuclear weapon designs harder to find, quite apart from the huge numbers of fake (law-enforcement) buyers and sellers of WMD.

The same knowledge needed to manufacture such pathogens is also needed to know how to combat or attenuate them. Besides, synthetic biology is far more capable of producing "superkillers" than this.

According to the article, they were originally trying biotech methods, but then shifted to straightforward selective breeding, which worked better.

For already-existing viral strains, that's to be expected. I don't know if you've ever had discussions with synthetic biology students but... as Hugh Hixon at Alcor once said to me, "that is the stuff of nightmares, assuming you can even sleep afterwards." Fully novel genetic constructs, hybridization of various unlike genomes, or even more potentially exotic constructs such as fungal spores that, upon contact with human secretions, re-express (medusa-like) into something akin to Toxoplasma gondii, only inducing schizophrenia, hyperaggression, and so on. (Why kill a population with a disease when you can make a nearly unkillable environmental 'bloom' toxin that passes gene screening, has no observable external symptoms, and causes an entire society to turn into batshit-crazy homicidal axe-killers?)

Human "swine" flu is some scary shit. But compared to what synth-bio could achieve, I'm less worried about it. Especially considering we're already in the range of introducing, say, the biotoxin of the Irukandji to airborne molds. That right there would be capable of killing just about all animal life within the blooming pattern of the organism.

We are quite literally, I believe, less than three or four -- five at the most -- 20-year generations away from synthetic biology students being capable of creating bioweapons that could wipe out the human species, if not the entirety of all mammalian life.

I agree that synbio has some very nasty and rapidly emerging capabilities. However, with respect to your last paragraph are you also assuming that defenses don't improve? Fancy biotech enables better detectors and rapid creation of tailored countermeasures (including counter-organisms). Surveillance tech restricts what students can get away with, sterilization and isolation of environments becomes easier, etc.

However, with respect to your last paragraph are you also assuming that defenses don't improve?

The statements I made were agnostic as to the likelihood of a given event, as opposed to the raw capability of the devices -- that is, beyond saying that it would become a non-zero chance. Furthermore, it is generally true that defense is "harder" than offense when it comes to weapons tech.

Even if better technology means defenses can improve, does that mean they will improve at a fast enough pace? I don't understand why your same logic wouldn't also imply the belief that it will be easier to make AI friendly when we understand more about AGI.

I don't understand why your same logic wouldn't also imply the belief that it will be easier to make AI friendly when we understand more about AGI.

Ceteris paribus, that argument does go through: for any given project, success is easier with more AGI understanding. That doesn't mean that we should expect AI to be safe, or that interventions to shift the curves don't matter. Likewise, the considerations I mentioned with respect to synbio make us safer to some extent, and I was curious as to Logos' evaluation of their magnitudes.

Okay, thanks for the clarification. If we would expect the magnitudes for synbio to be significantly higher (or lower) than for AGI, then I would be curious as to what differentiates the two situations (I could easily imagine that there is a difference, I just think it would be a good exercise to characterize it as precisely as possible).

ETA: Actually, I think there are some plausible arguments as to why AGI progress would be less relevant to AGI safety than one would naively expect (due to the decoupling of beliefs and utility functions in Bayesian decision theory --- being an AGI hinges mostly on the belief part, whereas being an FAI hinges mostly on the utility function part). But I currently have a non-trivial degree of uncertainty over how correct these arguments are.
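To spell out that decoupling with the textbook formula (offered purely as an illustration of the argument, not as a claim about how actual AGI designs will factor): a Bayesian decision-theoretic agent picks

$$a^* = \arg\max_{a} \sum_{s} P(s \mid \text{evidence}) \, U(a, s),$$

where the posterior $P$ and the utility function $U$ enter as separate ingredients. Progress toward AGI is largely progress on getting $P$ (and the maximization) right, whereas Friendliness is largely about getting $U$ right, so the former can in principle advance without doing much for the latter.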

The bio guys I know thought doing and publishing this research was important because it underscores the hazard of factory pig and cattle farms. This experiment (they tell me) is ongoing without controls all over the globe. (I am not a biologist.)

Whenever I hear about scary technology in this general vein I find that I want to "make the scary go away", which sort of naturally leads to a search for solutions. However, when I stop to think about optimal meta-cognitive strategy I remember how useful it is to hold off on proposing solutions. Updating on this lesson, my next inclination, instead of proposing solutions, is to try to promote discussion that seems likely to raise issues relevant to generating and selecting plans.

So how about it. Does anyone notice things about the situation that constrain or expand the solution space?

Are you referring to this particular scenario and what the information policy concerning this H5N1 strain should look like? Or are you asking for solutions regarding biological warfare/terrorism in general?

I'm not sure if I have anything valuable to say about how to handle such a situation on the global/national level, but when it comes to personal survival during a seriously deadly pandemic of the type that could be caused by the newly developed H5N1 strain, the best solution I can think of is to live next to open water while owning a boat that is stockpiled with all the stuff you need. As soon as the thing spreads, you get everyone and everything you value on the boat and off you go. Knowing how to navigate and operate that boat, plus having the ability to get information from the outside world in order to plan your course of action, is of course mandatory as well. The only things that could now infect you on board are birds. Unfortunately I have neither a boat nor do I live next to open water, so in the "seriously deadly pandemic scenario" I'll be stuck with the sickos and probably die.

Next best thing is owning a gas mask and knowing how to use it properly.