ewbrownv comments on Mini advent calendar of Xrisks: synthetic biology - Less Wrong

Post author: Stuart_Armstrong 04 December 2012 11:15AM


Comment author: ewbrownv 04 December 2012 08:14:42PM 2 points

The life sciences are resistant to regulation at least partly because practitioners know that killer plagues are several orders of magnitude harder to make than Hollywood would have you believe. The biosphere already contains billions of species of microorganisms evolving at a breakneck pace, and they haven't killed us all yet.

An artificial plague has no special advantages over natural ones until humans get better at biological design than evolution, which isn't likely to happen for a couple of decades. Even then, plagues with 100% mortality are just about impossible - turning biotech from a megadeath risk to an xrisk requires a level of sophistication that looks more like Drexlerian nanotech than normal biology.

Comment author: Eugine_Nier 05 December 2012 07:31:07AM 5 points

An artificial plague has no special advantages over natural ones

Artificial plagues can be optimized for maximum human deaths, something natural plagues aren't. Artificial plagues can also contain genes spliced in from unrelated species, including the target species itself - for example, human hormones.

Comment author: roystgnr 06 December 2012 06:49:03PM 1 point

If you believe the "trade-off hypothesis" then many natural plagues are optimized against human deaths - any time a pathogen mutation is virulent enough to get its host shunned or its host's tribe wiped out, that variant of the pathogen dies out too.

The "host shunned" version of that hypothesis still applies to existential risks from communicable disease. If stone age tribes can quarantine well enough to prevent contagion then so can we. But the "host's tribe wiped out" version is probably moot now. Transmitting a pathogen beyond one's immediate "tribe" is surely much easier in airports full of friendly people than it was on footpaths connecting sometimes-hostile neighbors.

Comment author: Eugine_Nier 07 December 2012 01:50:36AM 2 points

But the "host's tribe wiped out" version is probably moot now. Transmitting a pathogen beyond one's immediate "tribe" is surely much easier in airports full of friendly people than it was on footpaths connecting sometimes-hostile neighbors.

It's still possible for a plague to be so virulent that it kills its host before the host has a chance to spread it widely. Ebola comes to mind.

Comment author: Stuart_Armstrong 04 December 2012 08:30:03PM 4 points

Evolution happens blindingly fast for viruses, which means that humans can co-opt it. Are you really confident that the combination of directed evolution and some centralised design won't reach devastating results? After all, the deadliness, incubation period and transmissibility are already out there in nature; it would just be a question of putting the pieces together.

Comment author: CarlShulman 04 December 2012 08:52:04PM * 4 points

After all, the deadliness, incubation period and transmissibility are already out there in nature; it would just be a question of putting the pieces together.

There are tradeoffs between those three. E.g. the recent "airborne H5N1" experiments produced airborne transmission at the expense of almost all deadliness.

However, very nasty things could be done by bypassing the normal fitness landscape, which ordinarily lets a less-lethal variant outcompete a more-lethal one (and gives survivors immunity against more lethal variants). A designer could instead directly produce numerous deadly viruses acting through different mechanisms, so that immunity to one does not protect against the others.

Again, as with nukes, I would say the x-risk potential looks a lot smaller than the catastrophic-risk potential, but it is not negligible.

Comment author: ewbrownv 05 December 2012 08:25:21PM 1 point

Exactly.

I think the attitudes of most experts are shaped by the limits of what they can actually do today, which is why they tend not to be very worried. The risk will rise over time as our biotech abilities improve, but realistically a biological xrisk is at least a decade or two in the future. How serious the risk becomes will depend on what happens with regulation and defensive technologies between now and then.