Comment author: GeraldMonroe 30 June 2013 06:10:29AM *  4 points [-]

From reading Radical Abundance :

Drexler believes that not only are stable gears possible, but that every component of a modern, macroscale assembly line can be shrunk to the nanoscale. He believes this because his calculations, and some experiments, suggest that this works.

He believes that " Nanomachines made of stiff materials can be engineered to employ familiar kinds of moving parts, using bearings that slide, gears that mesh, and springs that stretch and compress (along with latching mechanisms, planetary gears, constant-speed couplings, four-bar linkages, chain drives, conveyor belts . . .)."

The energy to do this comes from two sources. First, the "feedstock" to a nanoassembly factory always consists of the element in question bonded to other atoms, such that bonding that element to something else is an energetically favorable reaction. Specifically, if you were building up a part made of covalently bonded carbon (diamond), the atomic intermediate proposed by Drexler is carbon dimers (C---C). See http://e-drexler.com/d/05/00/DC10C-mechanosynthesis.pdf

Carbon dimers are unstable, and the carbon in question would rather bond to "graphene-, nanotube-, and diamond-like solids".

The paper I linked shows a proposed tool.

Second, electrostatic motors would be powered by plain old DC current. These would provide the driving energy to turn all the mechanical components of an MNT assembly system. Here's the first example I found by googling of someone getting one to work: http://www.nanowerk.com/spotlight/spotid=19251.php

The control circuitry and sensors for the equipment would be powered the same way.

An actual MNT factory would work like the following. A tool-tip like the one in the paper I linked would be part of just one machine inside this factory. The factory would have hundreds or thousands of separate "assembly lines" that would each pass molecules from station to station, and at each station a single step is performed on the molecule. Once the molecules are "finished", these assembly lines converge onto assembly stations. These "assembly stations" are dealing with molecules that now have hundreds of atoms in them. Nanoscale robot arms (notice we've already gone up 100x in scale; the robot arms are therefore much bigger and thicker than the machinery of the previous steps, and are integrated systems with guidance circuitry, sensors, and everything you see in large industrial robots today) grab parts from the assembly lines and place them into larger assemblies. These larger assemblies move down bigger assembly lines, with parts from hundreds of smaller sub-lines being added to them.
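
Here's a quick back-of-envelope sketch (mine, not Drexler's; the 100x fan-in per stage and the product size are assumptions) of why you only need a handful of convergent-assembly stages to get from molecules to a macroscale product:

    import math

    # Toy convergent-assembly numbers (assumptions, not Drexler's figures).
    parts_per_stage = 100            # each stage combines ~100 sub-parts into one larger part
    atoms_per_starting_part = 100    # the "hundreds of atoms" molecules described above
    atoms_in_product = 1e25          # a rough kilogram-scale macroscale product

    stages = math.log(atoms_in_product / atoms_per_starting_part, parts_per_stage)
    print(f"convergent assembly stages needed: ~{math.ceil(stages)}")
    # About 12 stages of 100x fan-in take you from 100-atom molecules to a
    # kilogram-scale product, which is why the factory hierarchy stays shallow.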

There are several more increases in scale, with the parts growing larger and larger. Some of these steps are programmable: the robots will follow a pattern that can be changed, so what they produce varies. However, the base assembly lines will not be programmable.

In principle, this kind of "assembly line" could produce entire sub-assemblies that are identical to the sub-assemblies in this nanoscale factory. Microscale robot arms would grab these sub-assemblies and slot them into place to produce "expansion wings" of the same nanoscale factory, or to produce a whole new one.

This is also how the technology would be able to produce things that it cannot already make. When the technology is mature, if someone loads a blueprint into a working MNT replication system, and that blueprint requires parts that the current system cannot manufacture, the system would be able to look up in a library the blueprints for the assembly line that does produce those parts, and automatically translate library instructions to instructions the robots in the factory will follow. Basically, before it could produce the product someone ordered, it would have to build another small factory that can produce the product. A mature, fully developed system is only a "universal replicator" because it can produce the machinery to produce the machinery to make anything.
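
To make the "produce the machinery to produce the machinery" idea concrete, here's a toy sketch of that capability-expansion step (the function, names, and data layout are mine, purely illustrative):

    def ensure_buildable(part, can_build, library):
        """Toy sketch of the capability-expansion step described above: before the
        system fills an order, it recursively builds the assembly lines for any
        parts it cannot yet make. Names and data layout are purely illustrative."""
        if part in can_build:
            return
        # Look up the blueprint for the line that produces this part and make sure
        # everything that line needs is itself buildable first.
        for subpart in library[part]:
            ensure_buildable(subpart, can_build, library)
        can_build.add(part)   # stands in for "build and install the new assembly line"

    # Hypothetical example: the factory already makes gears and bearings, but a
    # "robot_arm" order first requires building lines for coils and actuators.
    library = {"robot_arm": ["gear", "bearing", "actuator"],
               "actuator": ["gear", "coil"], "coil": [], "gear": [], "bearing": []}
    can_build = {"gear", "bearing"}
    ensure_buildable("robot_arm", can_build, library)
    print(sorted(can_build))   # ['actuator', 'bearing', 'coil', 'gear', 'robot_arm']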

Please note that this is many, many, many generations of technology away. I'm describing a factory the size and complexity of the biggest factories in the world today, and the "tool tip" that is described in the paper I linked is just one teensy part that might theoretically go onto the tip of one of the smallest and simplest machines in that factory.

Also note that this kind of factory must be in a perfect vacuum. The tiniest contaminant will gum it up and it will seize up.

Another constraint to note is this. In Nanosystems, Drexler computes that a mechanical system scaled down by a factor of 10 million operates roughly 10 million times faster. There's a bunch of math to justify this, but basically, scale matters: for a mechanical system, the operating rate scales inversely with size. Biological enzymes are about this quick.
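
To sanity-check that scaling claim, here's my own rough arithmetic (not a quote from Nanosystems): if moving parts keep roughly the same absolute speed as they shrink, the time per motion shrinks with the distance traveled, so the operating frequency goes up in proportion to the scale-down factor.

    # Rough scaling sketch (assumed numbers for illustration).
    macro_arm_length_m = 1.0          # a 1 m industrial robot arm
    nano_arm_length_m = 1.0e-7        # a 100 nm robot arm, 10 million times smaller
    tip_speed_m_per_s = 1.0           # same tip speed assumed at both scales

    macro_cycle_s = macro_arm_length_m / tip_speed_m_per_s   # ~1 s per motion
    nano_cycle_s = nano_arm_length_m / tip_speed_m_per_s     # ~1e-7 s per motion

    print(f"{macro_cycle_s / nano_cycle_s:,.0f}x more operations per second")
    # -> 10,000,000x, matching the factor Drexler quotes for a 10-million-fold shrink.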

This means that an MNT factory, if it used convergent assembly, could produce large, macroscale products at 10 million times the rate that a current factory can produce them. Or it could, if every bonding step that forms a stable bond from unstable intermediates didn't release heat. That waste heat is what Drexler thinks will act to "throttle" MNT factories, such that the rate at which you can get heat out determines how fast the factory can run. Yes, water cooling was proposed :)
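
For a feel of the numbers, here's a crude heat estimate I made up (the ~1 eV of excess energy per atom placed is an assumption; the real figure depends on the chemistry and could easily be several times larger):

    # Back-of-envelope heat estimate (all numbers are assumptions for illustration).
    avogadro = 6.022e23
    carbon_molar_mass_g = 12.0
    ev_to_joule = 1.602e-19

    excess_energy_per_atom_ev = 1.0   # assumed ~1 eV released per atom placed
    product_mass_kg = 1.0
    build_time_s = 3600.0             # suppose the factory builds 1 kg of diamond per hour

    atoms = product_mass_kg * 1000 / carbon_molar_mass_g * avogadro
    heat_joules = atoms * excess_energy_per_atom_ev * ev_to_joule
    print(f"waste heat: ~{heat_joules/1e6:.0f} MJ, average power ~{heat_joules/build_time_s/1000:.1f} kW")
    # Roughly 8 MJ and ~2 kW per kg/hour of product under these assumptions;
    # run the same machinery millions of times faster and cooling, not chemistry,
    # sets the speed limit.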

One final note: biological proteins are only being investigated as a bootstrap. The eventual goal will use no biological components at all, and will not resemble biology in any way. You can mentally compare it to how silk and wood were used to make the first airplanes.

Comment author: Vaniver 25 June 2013 09:57:02PM 0 points [-]

It's not clear to me what you mean by "tearing apart a planet." Are you sifting out most of the platinum and launching it into orbit? Turning it into asteroids? Rendering the atmosphere inhospitable to humans?

Because I agree that the last is obviously possible, the first probably possible, and the second probably impossible without ludicrous expenditures of effort. But it's not clear to me that any of those are things for which nanotechnology would be the core enabler.

If you mean something like "reshape the planet in its image," then again I think bacteria are a good gauge of feasibility, because of the feedstock issues. As well, it eventually becomes more profitable to prey on the nanomachines around you than on the inert environment, and so soon we have an ecosystem a biologist would find familiar.

Jumping to another description, we could talk about "revolutionary technologies," like the Haber-Bosch process, which consumes about 1% of modern energy usage and makes agriculture and industry possible on modern scales. It's a chemical trick that extracts nitrogen from its inert state in the atmosphere and puts it into more useful forms like ammonia. Nanotech may make many tricks like that much more available and ubiquitous, but I think it will be a somewhat small addition to current biological and chemical industries, rather than a total rewriting of those fields.

Comment author: GeraldMonroe 26 June 2013 07:17:38AM *  0 points [-]

This problem is very easy to solve using induction. Base step: the minimum "replicative subunit". For life, that is usually a single cell. For nano-machinery, it is somewhat larger. For the sake of penciling in numbers, suppose you need a robot with a scoop and basic mining tools, a vacuum chamber, a 3d printer able to melt metal powder, a nanomachinery production system that is itself composed of nanomachinery, a plasma furnace, a set of pipes and tubes and storage tanks for producing the feedstock the nanomachinery needs, and a power source.

All in all, you could probably fit a single subunit into the size and mass of a Greyhound bus. One notable problem is that there's enough complexity here that current software probably could not keep a factory like this running forever, because eventually something would break that it doesn't know how to fix.

Anyways, you set down this subunit on a planet. It goes to work. In an hour, the nanomachinery subunit has made a complete copy of itself. In somewhat more time, it has to manufacture a second copy of everything else. The nanomachinery subunit makes all the high end stuff - the sensors, the circuitry, the bearings - everything complex, while the 3d printer makes all the big parts.

Pessimistically, this takes a week. A Greyhound bus is 9x45 feet, and there are 5.5e15 square feet on the earth's surface, so it takes roughly 1.4e13 subunits to tile the planet; that is about 44 doublings. At one doubling per week, covering the whole planet's surface would therefore take 44 weeks.
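
If you want to check that arithmetic yourself, here's the calculation spelled out (same assumed numbers as above):

    import math

    # Same assumptions as the text: a bus-sized subunit copies itself once per week.
    bus_footprint_sqft = 9 * 45            # ~405 square feet per subunit
    earth_surface_sqft = 5.5e15
    subunits_to_tile_earth = earth_surface_sqft / bus_footprint_sqft   # ~1.4e13

    doublings = math.log2(subunits_to_tile_earth)   # ~43.6
    print(f"subunits needed: {subunits_to_tile_earth:.1e}, doublings: {doublings:.1f}")
    # Starting from one subunit and doubling weekly, you tile the planet in ~44 weeks.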

Now you need to do something with all the enormous piles of waste material (stuff you cannot make more subunits with) and unneeded materials. So you reallocate some of the 1.4e13 robotic systems to build electromagnetic launchers to fling the material into orbit. You also need to dispose of the atmosphere at some point, since all that air causes each electromagnetic launch to lose energy to friction, and waste heat is a huge problem. (My example isn't entirely fair; I suspect that waste heat would cook everything before 44 weeks passed.) So you build a huge number of stations that either compress the atmosphere or chemically bond the gasses to form solids.
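
To show why I suspect that, here's a crude estimate of just the launch energy (the crust mass and launcher efficiency are assumptions, and this ignores the heat from mining, refining, and fabrication entirely):

    # Crude launch-energy estimate (assumed figures, just to show the scale of the heat problem).
    crust_mass_kg = 2.4e22                    # rough estimate of the mass of Earth's crust
    escape_velocity_m_s = 11_200
    energy_per_kg_j = 0.5 * escape_velocity_m_s**2        # ~63 MJ/kg of payload, ignoring losses
    total_energy_j = crust_mass_kg * energy_per_kg_j       # ~1.5e30 J

    campaign_s = 44 * 7 * 86400                            # the 44-week figure from above
    launch_power_w = total_energy_j / campaign_s           # ~5.7e22 W delivered to payloads
    solar_input_w = 1.7e17                                 # total sunlight hitting Earth
    print(f"launch power: {launch_power_w:.1e} W ({launch_power_w / solar_input_w:.0f}x total solar input)")
    # Most of that energy leaves with the payloads, but even a 1% launcher
    # inefficiency dissipated on the ground is thousands of times the solar input,
    # which is why heat disposal dominates the engineering from here on.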

With the vast resources in orbit, you build a sun-shade to stop all solar input to reduce the heat problem, and perhaps you build giant heat radiators in space and fling cold heat sinks to the planet or something. (with no atmospheric friction and superconductive launchers, this might work). You can also build giant solar arrays and beam microwave power down to the planet to supply the equipment so that each subunit no longer needs a nuclear reactor.

Once the earth's crust is gone, what do you do about the rest of the planet's mass? Knock molten globules into orbit by bombarding the planet with high energy projectiles? Build some kind of heat resistant containers that you launch into space full of lava? I don't know. But at this point you have converted the entire earth's crust into machines or waste piles to work with.

This is also yet another reason that AI is part of the puzzle. Even if failures were rare, there probably are not enough humans available to keep 1e13 robotic systems functioning if each system occasionally needed a remote worker to log in and repair some fault. There's also the engineering part of the challenge: these later steps require very complex systems to be designed and operated. If you have human-grade AI, and the hardware to run a single human-grade entity is just a few kilograms of nano-circuitry (like the actual hardware in your skull), you can create more intelligence to run the system as fast as you replicate everything else.

Comment author: Vaniver 23 June 2013 08:36:56PM 5 points [-]

That's still pretty much a revolution, a technology that could be used to tear apart planets. It just might take a bit longer than it takes in pulp sci-fi.

That looks like it's missing the point to me. As one of my physics professors put it, "we already have grey goo. It's called bacteria." If living cells are as good as it gets, and e-coli didn't tear apart the Earth, that's solid evidence that nanosystems won't tear apart the Earth.

Comment author: GeraldMonroe 25 June 2013 09:07:47PM *  0 points [-]

Bacteria, as well as all life, are stuck at a local maximum because evolution cannot find optimal solutions. Part of Drexler's work is to estimate what the theoretical optimum solutions can do.

My statement "tear apart planets" assumed too much knowledge on the part of the reader. I thought it was frankly pretty obvious. If you have a controllable piece of industrial machinery that uses electricity and can process common elements into copies of itself, but runs no faster than bacteria, tearing apart a planet is a straightforward engineering exercise. I did NOT mean the machinery looked like bacteria in any way, merely that it could copy itself no faster than bacteria.

And by "copy itself", what I really meant is that given supplies of feedstock (bacteria need sugar, water, and a few trace elements...our "nanomachinery" would need electricity, and a supply of intermediates for every element you are working with in a pure form) it can arrange that feedstock into thousands of complex machine parts, such that the machinery that is doing this process can make it's own mass in atomically perfect products in an hour.

I'll leave it up to you to figure out how you could use this tech to take a planet apart in a few decades. I don't mean a sci-fi swarm of goo, I mean an organized effort resembling a modern mine or construction site.

Comment author: CarlShulman 14 June 2013 02:55:06AM 10 points [-]

I suppose the Less Wrong response to this argument would be: how many of them are signed up for cryonics?

LessWrongers, and high-karma LessWrongers, on average seem to think cryonics won't work, with mean odds of 5:1 or more against cryonics (although the fact that they expect it to fail doesn't stop an inordinate proportion from trying it for the expected value).

On the other hand, if mice or human organs were cryopreserved and revived without brain damage or loss of viability, people would probably become a lot more (explicitly and emotionally) confident that there is no severe irreversible information loss. Much less impressive demonstrations have been enough to create huge demand to enlist in clinical trials before.

Comment author: GeraldMonroe 23 June 2013 06:11:44AM 1 point [-]

Alas, cryonics may be screwed with regards to this. It simply may not be physically possible to freeze something as large and delicate as a brain without enough damage to prevent you from thawing it and having it still work. This is of course no big deal if you just want the brain for the pattern it contains. You can computationally reverse the cracks and, to a lesser extent, some of the more severe damage, the same way we can computationally reconstruct a shredded document.

The point is, I think in terms of relative difficulty, the order is:
1. Whole brain emulation
2. Artificial biological brain/body
3. Brain/body repaired via MNT
4. Brain revivable with no repairs

Note that even the "easiest" item on this list is extremely difficult.

Comment author: leplen 22 June 2013 07:16:12PM 2 points [-]

Technically the blindfold was intended to refer to the fact that you can't make measurements on the system while you're shaking the box because your measuring device will tend to perturb the atoms you're manipulating.

The walls of the box that you're using to push the legos around was intended to refer to our ability to only manipulate atoms using clumsy tools and several layers of indirection, but we're basically on the same page.

Comment author: GeraldMonroe 23 June 2013 06:01:01AM *  1 point [-]

This is also wrong. The actual proposals for MNT involve creating a system that is very stable, so you can measure it safely. The actual machinery is a bunch of parts that are as strong as they can possibly be made (this is why the usual proposals involve covalently bonded carbon, aka diamond), so they are stable and you can poke them with a probe. You keep the box as cold as practical.

It's true that even if you set everything up perfectly, there are some events that can't be observed directly, such as bonding and rearrangements that could destroy the machine. In addition, practical MNT systems would be 3d mazes of machinery stacked on top of each other, so it would be very difficult to diagnose failures. To summarize: in a world with working MNT, there's still lots of work that has to be done.

Comment author: leplen 22 June 2013 07:32:54PM *  9 points [-]

Nothing like it? The atoms map to individual pieces of lego; their configuration relative to each other (i.e. lining up the pegs and the holes) was intended to capture the directionality of covalent bonds. We capture forces and torques well, since smaller legos tend to be easier to move, but harder to separate, than larger legos. The shaking represents acting on the system via some thermodynamic force. Gravity represents a tendency of things to settle into some local ground state that your shaking will have to push them away from. I think it does a pretty good job capturing some of the problems with entropy and exerted forces producing random thermal vibrations, since those things are true at all length scales. The blindfold is because you aren't Laplace's demon, and you can't really measure individual chemical reactions while they're happening.

If anything, the lego system has too few degrees of freedom, and doesn't capture the massiveness of the problem you're dealing with, because we can't imagine a mole of lego pieces.

I try not to just throw out analogies willy-nilly. I really think that the problem of MNT is the problem of keeping track of an enormous number of pieces and interactions, and pushing them in very careful ways. I think that trying to shake a box of lego is a very reasonable human-friendly approximation of what's going on at the nanoscale. I think my example doesn't do a good job describing the varying strengths or types of molecular bonds, nor does it capture bond stretching or deformation in a meaningful way, but on the whole I think that saying it's nothing like the problem of MNT is a bit too strong a statement.

Comment author: GeraldMonroe 23 June 2013 05:39:29AM *  2 points [-]

The way biological nanotechnology (aka the body you are using to read this) solves this problem is that it bonds the molecule being "worked on" to a larger, more stable molecule. This means that instead of a whole box of legos shaking around everywhere, as you put it, it's a single lego shaking around bonded to a tool (the tool is composed of more legos, true, but it's made of a LOT of legos connected in a way that makes it fairly stable). The tool is able to grab the other lego you want to stick to the first one, and is able to press the two together in a way that gives the bonding reaction a low energetic barrier. The tool is shaped such that other side-reactions won't "fit" very easily.

Anyways, a series of these reactions, and eventually you have the final product, a nice finished assembly that is glued together pretty strongly. In the final step you break the final product loose from the tool, analogous to ejecting a cast product from a mold. Check it out: http://en.wikipedia.org/wiki/Pyruvate_dehydrogenase

Note a key difference here between biological nanotech (life) and the way you described it in the OP. You need a specific toolset to create a specific final product. You CANNOT make any old molecule. However, you can build these tools from peptide chains, so if you did want another molecule you might be able to code up a new set of tools to make it. (and possibly build those tools using the tools you already have)

Another key factor here is that the machine that does this would operate inside an alien environment compared to existing life - it would operate in a clean vacuum, possibly at low temperatures, and would use extremely stiff subunits made of covalently bonded silicon or carbon. The idea here is to make your "lego" analogy manageable. All the "legos" in the box are glued tightly to one another (low temperature, strong covalent bonds) except for the ones you are actually playing with, and no extraneous legos are allowed to enter the box (vacuum chamber).

If you want to bond a blue lego to a red lego, you force the two together in a way that controls which way they are oriented during the bonding. Check it out: http://www.youtube.com/watch?v=mY5192g1gQg

Current organic chemical synthesis DOES operate as a box of shaking legos, and this is exactly why it is very difficult to get lego models that come out without the pieces mis-bonded. http://en.wikipedia.org/wiki/Thalidomide

As for your "Schrödinger equations are impractical to compute": what this means is that the Lego Engineers (sorry, nanotech engineers) of the future will not be able to solve every problem in a computer alone; they'll have to build prototypes and test them the hard way, just as it is today.

Also, this is one place where AI comes in. The universe doesn't have any trouble modeling the energetics of a large network of atoms. If we have trouble doing the same, even using gigantic computers made of many, many of these same atoms, then maybe the problem is that we are doing it in a hugely inefficient way. An entity smarter than humans might find a way to re-formulate the math for many orders of magnitude more efficient calculations, or it might find a way to build a computer that more efficiently uses the atoms it is composed of.

Comment author: CellBioGuy 22 June 2013 02:28:21PM *  24 points [-]

Life is a wonderful example of self-assembling molecular nanotechnology, and as such gives you a template of the sorts of things that are actually possible (as opposed to Drexlerian ideas). That is to say: everything is built from a few dozen stereotyped monomers assembled into polymers (rather than arranging atoms arbitrarily). There are errors at every step of the way, from mutations to misincorporation of amino acids in proteins, so everything must be robust to small problems (seriously, like 10% of the large proteins in your body have an amino acid out of place, as opposed to being built with atomic precision, and they can be altered and damaged over time). It uses a lot of energy via a metabolism to maintain itself in the face of the world and its own chemical instability (often more energy, over a relatively short time, than is embodied in the chemical bonds of the structure itself, if it's doing anything interesting; and for that matter, building it requires much more energy than is actually embodied). You have many discrete medium-sized molecules moving around and interacting in aqueous solution (rather than much in the way of solid-state action). And on scales larger than viruses or protein crystals, everything is built more or less according to a recipe of interacting forces and emergent behavior (rather than having something like a digital blueprint).

So yeah, remarkable things are possible, most likely even including things that naturally-evolved life does not do now. But there are limits and it probably does not resemble the sorts of things described in "Nanosystems" and its ilk at all.

Comment author: GeraldMonroe 23 June 2013 05:06:00AM 0 points [-]

Nanosystems discusses theoretical maximums. However, even if you make the assumption that living cells are as good as it gets, an e-coli, which we know from extensive analysis uses around 25,000 moving parts, can double itself in 20 minutes.

So in theory, you have some kind of nano-robotic system that is able to build stuff. Probably not any old stuff - but it could produce tiny subunits that can be assembled to make other nano-robotic systems, and other similar things.

And if it ran as fast as an e-coli, it could build itself every 20 minutes.
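
To put a 20-minute doubling time in perspective, here's a toy growth calculation (my numbers; real growth would stall on feedstock, energy, and heat long before this):

    import math

    # Toy exponential-growth numbers (assumptions for illustration only).
    initial_mass_kg = 1e-15          # roughly the mass of a single e-coli cell
    target_mass_kg = 1e6             # a thousand tonnes of machinery
    doubling_time_min = 20

    doublings = math.log2(target_mass_kg / initial_mass_kg)   # ~70 doublings
    print(f"{doublings:.0f} doublings, ~{doublings * doubling_time_min / 60:.0f} hours")
    # About 70 doublings, i.e. under a day of wall-clock time -- which is why the
    # interesting limits are feedstock and waste heat, not replication speed.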

That's still pretty much a revolution, a technology that could be used to tear apart planets. It just might take a bit longer than it takes in pulp sci-fi.

Comment author: Epiphany 16 June 2013 11:22:08PM *  1 point [-]

Thanks for the hardware info.

An autonomous killing drone system would save soldiers' lives and kill fewer civilians.

In the short-term... What do you think about the threat they pose to democracy?

drawbacks include high cost to develop and maintain

Do you happen to know how many humans need to be employed for a given quantity of these weapons to be produced?

Comment author: GeraldMonroe 17 June 2013 06:25:44PM 2 points [-]

I wanted to make a concrete proposal. Why does it have to be autonomous? Because in urban combat, the combatants will usually choose a firing position that has cover. They "pop up" from the cover, take a few shots, then position themselves behind cover again. An autonomous system could presumably return accurate fire much faster than human reflexes. (It wouldn't be instant; there's a delay for the servos of the automated gun to aim at the target, and delays related to signals - you have to wait for the sound to reach all the acoustic sensors in the drone swarm, then there are processing delays, then the time for the projectiles from the return fire to reach the target.)
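
Here's a rough latency budget for that return-fire loop (every figure is a guess on my part, just to show the timescale involved):

    # Rough latency budget for an autonomous return-fire loop (all numbers are guesses).
    shooter_range_m = 200
    speed_of_sound_m_s = 340
    muzzle_velocity_m_s = 900        # assumed for the drone's rifle

    acoustic_delay_s = shooter_range_m / speed_of_sound_m_s   # ~0.6 s for sound to reach the sensors
    processing_delay_s = 0.1                                   # assumed detection + localization time
    servo_slew_s = 0.3                                         # assumed time to train the gun
    bullet_flight_s = shooter_range_m / muzzle_velocity_m_s    # ~0.2 s

    total_s = acoustic_delay_s + processing_delay_s + servo_slew_s + bullet_flight_s
    print(f"return fire arrives ~{total_s:.1f} s after the first shot")
    # Roughly 1.2 s under these guesses -- fast enough to catch a shooter still
    # exposed above cover, which a human-in-the-loop response generally isn't.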

Also, the autonomous mode would hopefully be chosen only as a last resort, with a human normally in the loop somewhere to authorize each decision to fire.

As for a threat to democracy? Defined how? You mean a system of governance where a large number of people, who are easily manipulated via media, on the average know fuck-all about a particular issue, are almost universally not using rational thought, and the votes give everyone a theoretically equal say regardless of knowledge or intelligence?

I don't think that democracy is something that should be used as an ideal or a terminal value on this website. It has too many obvious faults.

As for humans needing to be employed: autonomous return fire drones are going to be very expensive to build and maintain. That "expense" means that the labor of thousands is needed somewhere in the process.

However, in the long run, it's obviously possible to build factories to churn them out faster than soldiers can be replaced. Numerous examples of this happened during WW2, where even high-technology items such as aircraft were easier to replace than the pilots to fly them.

Comment author: GeraldMonroe 16 June 2013 03:13:54PM *  4 points [-]

Let's talk actual hardware.

Here's a practical, autonomous kill system that is possibly feasible with current technology. A network of drone helicopters armed with rifles and sensors that can detect the muzzle flashes, sound, and in some cases projectiles of an AK-47 being fired.

Sort of this aircraft : http://en.wikipedia.org/wiki/Autonomous_Rotorcraft_Sniper_System

Combined with sensors based on this patent : http://www.google.com/patents/US5686889

http://en.wikipedia.org/wiki/Gunfire_locator

and this one http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1396471&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F9608%2F30354%2F01396471

The hardware and software would be optimized for detecting AK-47 fire, though it would be able to detect most firearms. Some of these sensors work best if multiple platforms armed with the same sensor are spread out in space, so there would need to be several of these drones hovering overhead for maximum effectiveness.
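
To illustrate why the platforms need to be spread out, here's a toy time-difference-of-arrival (TDOA) locator I sketched; it is not from the patent or papers above, just a minimal grid-search version of the idea:

    import numpy as np

    SPEED_OF_SOUND = 340.0  # m/s, assumed constant

    def locate_shot(sensor_positions, arrival_times, half_width=500.0, step=5.0):
        """Toy 2D gunfire locator: grid-search for the point whose predicted
        time-differences of arrival best match the measured ones."""
        sensors = np.asarray(sensor_positions, dtype=float)
        times = np.asarray(arrival_times, dtype=float)
        best, best_err = None, np.inf
        for x in np.arange(-half_width, half_width, step):
            for y in np.arange(-half_width, half_width, step):
                dists = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
                predicted = dists / SPEED_OF_SOUND
                # Compare differences relative to the first sensor so the
                # unknown absolute firing time cancels out.
                err = np.sum(((predicted - predicted[0]) - (times - times[0])) ** 2)
                if err < best_err:
                    best, best_err = (float(x), float(y)), err
        return best

    # Example: three hovering drones and a shooter at (120, -80); the arrival
    # times are simulated from the geometry, then recovered by the search.
    drones = [(0.0, 0.0), (150.0, 0.0), (0.0, 150.0)]
    shooter = np.array([120.0, -80.0])
    times = [np.hypot(*(shooter - np.array(d))) / SPEED_OF_SOUND for d in drones]
    print(locate_shot(drones, times))   # -> approximately (120.0, -80.0)

With only one sensor you get a bearing at best; with several sensors separated by a decent baseline, the time differences pin down the shooter's position, which is why the swarm geometry matters.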

How would this system be used? Whenever a group of soldiers leaves the post, they would all have to wear blue force trackers that clearly mark them as friendly. When they are at risk of attack, a swarm of drones follows them overhead. If someone fires at them, the following autonomous kill decision is made:

if( SystemIsArmed && EventSmallArmsFire && NearestBlueForceTracker > X meters && ProbableError < Y meters) ShootBack();

Sure, a system like this might make mistakes. However, here's the state of the art method used today :

http://www.youtube.com/watch?list=PL75DEC9EEB25A0DF0&feature=player_detailpage&v=uZ2SWWDt8Wg

This same youtube channel has dozens of similar combat videos. An autonomous killing drone system would save soldiers' lives and kill fewer civilians. (Drawbacks include high cost to develop and maintain.)

Other, more advanced systems are also at least conceivable. Ground robots that could storm a building, killing anyone carrying a weapon or matching specific faces? The current method is to blow the entire building to pieces. Even if the robots made frequent errors, they might be more effective than bombing the building.

Comment author: nigerweiss 29 May 2013 08:31:43AM 2 points [-]

Building a whole brain emulation right now is completely impractical. In ten or twenty years, though... well, let's just say there are a lot of billionaires who want to live forever, and a lot of scientists who want to be able to play with large-scale models of the brain.

I'd also expect de novo AI to be capable of running quite a bit more efficiently than a brain emulation for a given amount of optimization power. There's no way simulating cell chemistry is a particularly efficient way to spend computational resources to solve problems.

Comment author: GeraldMonroe 29 May 2013 01:40:45PM -2 points [-]

An optimal de novo AI, sure. Keep in mind that human beings have to design this thing, and so the first version will be very far from optimal. I think it's a plausible guess that it will need roughly the same hardware as an efficient whole brain emulation.

And this assumption shows why all the promises made by past AI researchers have so far failed: we are still a factor of 10,000 or so away from having the hardware requirements, even using supercomputers.
