The FHI's mini advent calendar: counting down through the big five existential risks. The second one is a new, exciting risk: synthetic biology.

Synthetic biology
Current understanding: medium-low
Most worrying aspect: hackers experimenting with our basic biology
Synthetic biology covers many inter-related fields, all concerned with the construction and control of new biological systems. This area has already attracted the attention of bio-hackers, experimenting with DNA and other biological systems to perform novel tasks – and gaining kudos for exotic accomplishments. The biosphere is filled with many organisms accomplishing specific tasks; combining these and controlling them could allow the construction of extremely deadly bioweapons, targeted very narrowly (at all those possessing a certain gene, for instance). Virulent viruses with long incubation periods could be constructed, or common human bacteria could be hacked to perform a variety of roles in the body. And humans are not the only potential targets: whole swaths of the ecosystem could be taken down, whether to gain commercial or economic advantage, for terrorist purposes, or simply by accident.

Moreover, the medical miracles promised by synthetic biology are not easily separated from the danger: the targeted control needed to, for instance, kill cancer cells, could also be used to target brain cells or the immune system. This would not be so frightening if the field implemented safety measures commensurate with the risks; but synthetic biology has been extremely lax in its precautions and culturally resistant to regulations.


Are there more detailed discussions of this risk that you can link to? I'm curious about synthetic biology's practitioners' justifications (or rationalizations) for having few safety measures, given how seemingly obvious the risk is to an outsider. In general, it seems like you're wasting the attention that you're drawing to this countdown by not pointing people to more information.

As a synthetic biologist, I justify the lack of regulation of my work by saying "I don't do that kind of synthetic biology." Saying "synthetic biology" could lead to a super-plague is like saying "computer science" could lead to an uFAI. Sure, but the majority of the work is harmless. The VAST majority. The only real safety measures I can think of that would work at this point would be censorship of disease research.

Do you think that eventually an individual or a small group of people (terrorists, doomsday cultists, etc.) will be able to engineer a very destructive biological weapon using off-the-shelf synthetic biology tools? Or will synthetic biology never develop to that point, or is it likely that synthetic biology or other technologies will give us defenses against such weapons before they become possible, or do you see effective regulation happening first, or something else?

For a given value of off-the-shelf, I think that's doable now. Not x-risk destructive, but millions dead. I haven't looked too closely at it in the hopes of staying off watch-lists. The materials necessary are much easier to obtain than those necessary for other sorts of weapon, so I'm guessing our only defense right now is the high level of education required to do it. Regulating this will be like trying to keep people from using 3D-printers to make weapons: your only real option is to forbid all the printers. And you couldn't just forbid "synthetic biology". You would basically have to take all molecular biology off the table to prevent people from being able to build diseases.

Alas, there's very little info! I've been going through some papers today, but I don't trust the quality.

This is one of the things the FHI would do, with a lot more resources.

Perhaps some low hanging fruit for pandemic prevention would be better disinfectants. Bleach can kill 99.999% of pathogens, but it causes nasty fumes and irritation on skin contact. If you can get the same results as bleach with reduced negative side effects, it should result in more people using it in more situations. There's a new disinfectant called Steriplex SD that seems promising.

Another thing worth considering is that more people might do bio-hacking at home if sterilization were less of a pain. This could increase or decrease risk depending on your perspective (e.g., more potentially competent counter-agents).

The problem with such things is that bleach (and alcohol) kill everything. If you make something that is safe for humans, the mechanism that protects our cells could probably work for the infectious agents as well.

Apparently somehow Steriplex SD (in its activated form) is only a mild lung/skin/eye irritant. See the MSDS here. It contains elemental silver, peroxyacetic acid, acetic acid, ethanol, and hydrogen peroxide.

While it does kill some nasty bacterial spores like Clostridium difficile, this particular product doesn't work on anthrax spores (neither does bleach). They have another product called Steriplex Ultra that does kill anthrax, which is somewhat more hazardous (you have to wear a breathing mask), containing glycerol and higher concentrations of the active ingredients.

An example of a nasty trick that would make for a relatively easy-to-produce-and-deploy bioweapon: http://www.plospathogens.org/article/info%3Adoi%2F10.1371%2Fjournal.ppat.1001257

Inhaled prions have extremely long incubation times (years), so it would be possible for an attacker to expose huge numbers of people unknowingly to them. The disease it causes is slow and insidious, and as of today, there is no way to detect it until post-mortem. There's no treatment, either. I'm not certain of the procedure for making prions in massive quantities in the laboratory, but since they are self-replicating if placed in a medium containing the respective protein, they probably could be mass-produced.

On the bright side, the disease would not be self-replicating in the wild, so it would not be an existential risk - merely a very nasty way to cause mass casualties. Also, this method has never been tested on humans, so it might not be very effective; one can hope that terrorists will stick with bombs.

The reason the life sciences are resistant to regulation is at least partially because they know that killer plagues are several orders of magnitude harder to make than Hollywood would like you to think. The biosphere already contains billions of species of microorganisms evolving at a breakneck pace, and they haven't killed us all yet.

An artificial plague has no special advantages over natural ones until humans get better at biological design than evolution, which isn't likely to happen for a couple of decades. Even then, plagues with 100% mortality are just about impossible - turning biotech from a megadeath risk to an xrisk requires a level of sophistication that looks more like Drexlerian nanotech than normal biology.

An artificial plague has no special advantages over natural ones

Artificial plagues can be optimized for maximum human deaths, something natural plagues aren't. Artificial plagues can contain genes spliced in from unrelated species, including the target. For example, human hormones.

If you believe the "trade-off hypothesis" then many natural plagues are optimized against human deaths - any time a pathogen mutation is virulent enough to get its host shunned or its host's tribe wiped out, that variant of the pathogen dies out too.

The "host shunned" version of that hypothesis still applies to existential risks from communicable disease. If stone age tribes can quarantine well enough to prevent contagion then so can we. But the "host's tribe wiped out" version is probably moot now. Transmitting a pathogen beyond one's immediate "tribe" is surely much easier in airports full of friendly people than it was on footpaths connecting sometimes-hostile neighbors.

But the "host's tribe wiped out" version is probably moot now. Transmitting a pathogen beyond one's immediate "tribe" is surely much easier in airports full of friendly people than it was on footpaths connecting sometimes-hostile neighbors.

It's still possible for a plague to be so virulent that it kills its host before he has a chance to spread it too widely. Ebola comes to mind.

Evolution happens blindingly fast for viruses, which means that humans can co-opt it. Are you really confident that the combination of directed evolution and some centralised design won't reach devastating results? After all, the deadliness, incubation period and transmissibility are already out there in nature; it would just be a question of putting the pieces together.

After all, the deadliness, incubation period and transmissibility are already out there in nature; it would just be a question of putting the pieces together.

There are tradeoffs between those three. E.g. the recent "airborne H5N1" experiments produced airborne transmission at the expense of almost all deadliness.

However, very nasty things could be done by bypassing the normal fitness landscape (which would ordinarily let a less-lethal variant outcompete a more-lethal one, granting survivors immunity against the more lethal strains): directly producing numerous deadly viruses acting on different mechanisms, so that immunity to one does not protect against the others.

Again, as with nukes I would say the x-risk potential looks a lot smaller than the catastrophic risk potential, but not negligible.

Exactly.

I think the attitudes of most experts are shaped by the limits of what they can actually do today, which is why they tend not to be that worried about it. The risk will rise over time as our biotech abilities improve, but realistically a biological xrisk is at least a decade or two in the future. How serious the risk becomes will depend on what happens with regulation and defensive technologies between now and then.

A lot of the advances made by synthetic biology are separated rather widely from the danger. These are from ~10 minutes of searching the iGEM website. I am not down-playing the risk posed by synthetic biology, but it's not all, or mostly, dangerous. It's mostly awesome:

Biodetectors
Cheap disease diagnosis
Bioremediation

Yes - it's not like this is all bad! If it were all bad, it would be easy to stop people doing it.

How far do you think we are from home build-a-virus kits?

No real idea - more research is needed (by risk analysts), and even then, we don't know what may or may not be discovered soon in synthetic biology.

Being possible or being on the market? Possible: 5 years. On the market: why would this ever be on the market?!

Something like the DIYbio movement might want to put something like that on the market, so that people could learn to play with genomes the same way that home computers taught people to play with software.

It's helpful.

One thing that could make bioweapons a truly existential risk is the possibility of a multipandemic. No single bioweapon agent could wipe out humanity, because of individual differences between people. Fifty per cent mortality across all humans from one agent is a reasonable estimate, and it would be a great catastrophe, but not an x-risk. If somehow 20 different agents with such mortality were released, survival would be roughly 1 person in a million, and since the survivors would be far from each other, this would most likely mean human extinction.

The multipandemic could result from an all-out bioweapon war, be deliberately created as a Doomsday weapon, or result from the quick development of cheap biolabs that become simultaneously available to a large group of people. It could happen in the next 10-20 years.
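The "1 person from a million" figure follows from simple arithmetic, sketched below. The assumptions are the comment's, not established facts: each agent independently kills 50% of the population, susceptibilities are uncorrelated, and the population figure is an order-of-magnitude stand-in.

```python
# Rough survival arithmetic for the multipandemic scenario described above.
# Assumptions (from the comment, not established fact): each agent kills ~50%
# of the population, and susceptibilities to different agents are independent.

population = 7_000_000_000   # order-of-magnitude world population (assumption)
survival_per_agent = 0.5     # assumed per-agent survival rate
n_agents = 20

# With independence, overall survival is the product of per-agent survival rates.
overall_survival = survival_per_agent ** n_agents
survivors = population * overall_survival

print(overall_survival)   # 0.5**20, roughly 1 in a million
print(int(survivors))     # a few thousand survivors, scattered worldwide
```

Note how sensitive the result is to the independence assumption: if resistance to the agents were strongly correlated (e.g. conferred by general immune robustness), far more people would survive than this naive product suggests.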

:(

In such cases people in government bunkers and the Amazon act as reservoirs/backstops.

Unless a weapon were engineered to target crop plants instead of humans.

EDIT: or/and humans