The FHI's mini advent calendar: counting down through the big five existential risks. The third one is also a novel risk: nanotechnology.

 

Nanotechnology

Current understanding: low
Most worrying aspect: the good stuff and the bad stuff are the same thing

The potential of nanotechnology is its ability to completely transform and revolutionise manufacturing and materials. The peril of nanotechnology is its ability to completely transform and revolutionise manufacturing and materials. And it’s hard to separate the two. Nanotech manufacturing promises to be extremely disruptive to existing trade arrangements and to the balance of economic power: small organisations could produce as many goods as whole countries do today, collapsing standard trade relationships and causing sudden unemployment and poverty in places not expecting it.

And in this suddenly unstable world, nanotechnology will also permit the mass production of many new tools of war – from microscopic spy drones to large scale weapons with exotic properties. It will also weaken trust in disarmament agreements, as a completely disarmed country would have the potential to assemble an entire arsenal – say of cruise missiles – in the span of a day or less.

 

25 comments

Most of the capabilities claimed for hypothetical Drexlerian technology seem to be just quantitative increases in already existing trends:

  • Production of more nuclear weapons; nuclear arsenals are down from the Cold War, and vastly, vastly, more nuclear weapons could be constructed with existing military budgets
  • More computation enabling AI run amok; cf. Moore's Law
  • Artificial diseases and disruptive organisms/'grey goo'; cf. synthetic biology
  • More conventional weapons; there are already plenty of weapons to kill most people, but the fatality rate would decline as populations fell
  • Some kind of non-AGI robotic weapons that keep killing survivors even as population crashes, and aren't recalled by either side, as in the SF story Second Variety; this is a question of improved robotics and manufacturing productivity, but 'nanotech' isn't that different from very efficient automated factories

I don't see much distinctive 'nanotechnology x-risk' that couldn't be realized by continued ordinary technological progress and much improved automation. So any significance has to come from nanotechnology prospects boosting our expectation of those capabilities on some timescales, which demands some argument that nanotech is going to progress faster than expected and drive those fields ahead of trend.

The theory is that Drexlerian nanotech would dramatically speed up progress in several technical fields (biotech, medicine, computers, materials, robotics) and also dramatically speed up manufacturing all at the same time. If it actually works that way the instability would arise from the sudden introduction of new capabilities combined with the ability to put them into production very quickly. Essentially, it lets innovators get inside the decision loop of society at large and introduce big changes faster than governments or the general public can adapt.

So yes, it's mostly just quantitative increases over existing trends. But it's a bunch of very large increases that would be impossible without something like nanotech, all happening at the same time.

Good way of phrasing it.

TimS

My relatively uninformed impression was that the particularly unique nanotech risk was poor programming leading to grey goo.

Is there a reason to think economic disruption or increased weapons capacity are greater x-risks? I thought x-risk analysis focused on under-appreciated but extreme downside risks. The examples from the article have greater expected harm because they are higher probability, but x-risks are civilization or humanity destroyers, aren't they? Does economic disruption really have that large a downside?

My relatively uninformed impression was that the particularly unique nanotech risk was poor programming leading to grey goo.

The problem is that the grey goo has to out-compete the biosphere, which is hard if you're designing nanites from scratch. If you're basing them off existing lifeforms, that's synthetic biology.

Yes, it's very similar to the problem of designing a macroscopic robot that can out-compete natural predators of the same size. Early attempts will probably fail completely, and then we'll have a few generations of devices that are only superior in some narrow specialty or in controlled environments.

But just as with robots, the design space of nanotech devices is vastly larger than that of biological life. We can easily imagine an industrial ecology of Von Neumann machines that spreads itself across a planet exterminating all large animal life, using technologies that such organisms can't begin to compete with (mass production, nuclear power, steel armor, guns). Similarly, there's a point of maturity at which nanotech systems built with technologies microorganisms can't emulate (centralized computation, digital communication, high-density macroscopic energy sources) become capable of displacing any population of natural life.

So I'd agree that it isn't going to happen by accident in the early stages of nanotech development. But at some point it becomes feasible for governments to design such a weapon, and after that the effort required goes down steadily over time.

Yes, it's very similar to the problem of designing a macroscopic robot that can out-compete natural predators of the same size.

One difference is that the reproduction rate, and hence rate of evolution, of micro-organisms is much faster.
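
To put rough numbers on that, here's a back-of-envelope sketch; the doubling times are illustrative assumptions, not measured values:

```python
# Back-of-envelope comparison of generation counts per year.
# Both generation times below are assumed, illustrative figures.
HOURS_PER_YEAR = 24 * 365

def generations_per_year(generation_time_hours):
    """Generations per year given a fixed generation time."""
    return HOURS_PER_YEAR / generation_time_hours

bacterium = generations_per_year(0.5)                     # ~30 min per division (assumed)
large_animal = generations_per_year(5 * HOURS_PER_YEAR)   # ~5 years per generation (assumed)

print(f"bacterium:    {bacterium:,.0f} generations/year")    # ~17,520
print(f"large animal: {large_animal:.1f} generations/year")  # 0.2
print(f"ratio:        ~{bacterium / large_animal:,.0f}x")    # ~87,600x
```

Even with generous assumptions for the macroscopic side, micro-organisms get tens of thousands of generations of selection for every one the competition gets.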

Does economic disruption really have that large a downside?

Not on its own - but massive disruption, at the same time that unprecedented manufacturing capacity exists, could lead to devastating long-term wars. If nanotechnology were cheap and easy once developed, these wars might go on perpetually.

Wouldn't poor programming make grey goo a type of unfriendly A.I.? If so, then that would justify leaving it out of this description, as nanotechnology would just facilitate the issue. The computer commanding the nanobots would be the core problem.

TimS

Some of this is terminology, used with intent to narrow the topic - when Eliezer talks about FAI and uFAI, he's mostly talking about potential Artificial General Intelligence. A nano-machine that makes additional copies of itself as part of its programming is not necessarily a General Intelligence. Most of the predicted uses of nano-machines wouldn't require (or be designed to have) general intelligence.

I'm very aware that giving a terminological answer conceals that there is no agreement on what is or isn't "General Intelligence." About all we can agree on is that human intelligence is the archetype.

To put it slightly differently, one could argue that the laptop I'm using right now is a kind of Intelligence. And it's clearly Artificial. But conversations about Friendly and unFriendly aren't really about my laptop.

Fair enough, but the grey goo issue is still probably based enough in programming to categorize it separately from the direct implications of nanotechnological production.

TimS

Eh, I guess. I'm not a big fan of worrying about the consequences of something that both (a) works exactly as intended and (b) makes us richer.

So I think it is conflating problems to worry about weapon production and the general man's-inhumanity-to-man problem when the topic is nanotechnology.

More importantly, the exact issue I'm worried about (poor programming of something powerful and barely under human control that has nothing to do with figuring out human morality) seems like it is going to be skipped.

as a completely disarmed country would have the potential to assemble an entire arsenal – say of cruise missiles – in the span of a day or less.

Why would the existence of nanotechnology imply that it is very rapid? I can easily imagine needing a large factory to build the more complicated nanotech, just as with current technology it takes months and millions of dollars of investment to create computer chips.

Remember that technology is not magic, and we shouldn't base our inferences on science fiction books. Change will happen gradually: nanotech assemblers will at first be crude, coarse-grained, unreliable and expensive. These machines will require power and raw materials, which will not suddenly be free. For most products, traditional manufacturing will remain orders of magnitude more efficient. Just as the desktop printer didn't eliminate the printing press.

Drexler has some scenarios, based as far as I can tell on solid science, showing that the nanotech manufacturing revolution could be extremely rapid. And an economy based upon raw materials and energy is very far from our current one (and nanotech recycling could have large effects on the need for raw materials; energy is the main bottleneck, in theory).
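
The basic arithmetic behind the "extremely rapid" claim is just exponential doubling. A minimal sketch, where the seed mass, doubling time, and target are purely illustrative assumptions (not Drexler's figures):

```python
import math

# Toy model of exponential scale-up by self-replicating assemblers.
# All three parameters are assumptions for illustration.
seed_kg = 1.0          # initial mass of assemblers
doubling_time_h = 1.0  # assumed time for the system to double its own mass
target_kg = 1e9        # a million tonnes of manufacturing capacity

doublings = math.log2(target_kg / seed_kg)
print(f"doublings needed: {doublings:.1f}")                      # ~29.9
print(f"time to target:   {doublings * doubling_time_h:.0f} h")  # ~30 hours
```

The time needed grows only logarithmically with the target scale, which is why even much slower assumed doubling times move the answer from hours to days, not to years.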

You would need some kind of energy source with a very high and rapid EROEI (energy return on energy invested) to scale up in such a sudden way, e.g. solar cells that required very, very little energy to make, including harvesting all the raw materials.
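
To make that concrete: under the toy assumption that all output is immediately reinvested in building more collectors, the energy payback time sets the e-folding time of the whole energy base. A sketch with assumed payback times:

```python
import math

# Toy growth model: if every joule a collector produces is reinvested
# in building more collectors, capacity C grows as dC/dt = C / t_payback,
# so capacity doubles every t_payback * ln(2).
# The payback times below are assumptions for illustration.
for t_payback_days in (30, 365, 3 * 365):
    doubling_days = t_payback_days * math.log(2)
    print(f"energy payback {t_payback_days:4d} d -> doubling every {doubling_days:6.1f} d")
```

So a payback time of a month allows the energy base to double in about three weeks, while a payback time of years caps the scale-up at a pace governments can react to.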

You seem to be arguing that we can't have a massive gain in value just from re-arranging our current resources better. The raw energy and resource requirements to build a cruise missile are pretty small; given unenriched uranium, the raw energy required to build a nuclear-armed cruise missile is also pretty small. Not to mention tiny cameras and drones; a lot of designs are out there, just impossible to assemble with current technology.

So 2 and 1 have to be runaway global warming and AI risk?

I think that global chemical contamination is underestimated. 1000 tons of dioxin could be enough to finish off the human race, and it could be manufactured with existing technologies by a rogue state.

How would the dioxin be distributed? Decades ago, someone pointed out that there were enough pins to destroy the human race if a pin was stuck in every person's heart.

Dioxin is a very stable chemical which can persist in nature for decades. If it were used as a doomsday weapon, I think it could be made airborne at a height of 10-20 kilometres, from where it could slowly rain down all over the world. Or it could be put in the oceans, contaminating all marine life and the people who eat fish. Dioxin is known to slowly accumulate in living tissues. Italy still struggles with the aftermath of the catastrophe in which 1 kilogram of dioxin was released into the environment in 1976. http://en.wikipedia.org/wiki/Seveso_disaster

But the main problem, in my opinion, is that the approach of the author of the post is misleading. Risks are not mutually independent, though a list of the 5 main risks could suggest they are. For example, by the methods of synthetic biology someone could create a plant which produces dioxin or other toxins and which could colonise the surface of the Earth, killing all other species.

Or nuclear war could result in a counter-attack with biological weapons. So the interaction of risks could be more important than the pure risks themselves. But x-risk researchers are attached to the magic of creating lists of risks. I could call it list bias.

Italy still struggles with the aftermath of the catastrophe in which 1 kilogram of dioxin was released into the environment in 1976. http://en.wikipedia.org/wiki/Seveso_disaster

And also 6 tonnes of other crap; further, I'm not terribly impressed when I read of no increase in all-cause mortality decades after the disaster, countered only by a later paper that smells like data mining ('neoplasms'? Really? Where's my all-cause mortality? That's what we care about!)

That's reasonable. For example, a big war (not necessarily nuclear) could increase the risk of infrastructure disaster.

You're half right!

The missing one is an xrisk that's been around for a long, long time.

[anonymous]

Giant space rocks?

Nope! Not enough of the big ones around currently.