Not an expert in chemistry or biochemistry, but this post seems to basically not engage with the feasibility analyses Drexler laid out in Nanosystems, and makes a bunch of assertions without justification, including in places where Nanosystems has counterarguments. I wish more commenters would engage on the object level because I really don't have the background to, and even I see a bunch of objections. Nevertheless I'll make an attempt. I encourage OP and others to correct me where I am ignorant of some established science.
Points 1, 2, 3, 4 are not relevant to Drexlerian nanotech and seem like reasonable points for other paradigms.
Regarding 5, my understanding is that mechanosynthesis involves precise placement of individual atoms according to blueprints, thus making catalysts that selectively bind to particular molecules unnecessary.
...6. no liquid
Any self-replicating nanobot must have many internal components. If the interior is not filled with water, those components will clump together and be unable to move around, because electrostatic & dispersion interactions are proportionately much stronger on a small scale. The same is true to a lesser extent for the nanobots themselves. Vacuum is...
I want to remind everybody how efficient molecular machinery is in terms of thermodynamics:
this molecule [RNA] operates quite near the limit of thermodynamic efficiency [7 kcal/mol] set by the way it is assembled [~10 kcal/mol].
and
these calculations also establish that the E. coli bacterium produces an amount of heat less than six times (220 n_pep / 42 n_pep) as large as the absolute physical lower bound dictated by its growth rate, internal entropy production, and durability.
From an article Statistical Physics of Self-replication by Jeremy England
deriving a lower bound for the amount of heat that is produced during a process of self-replication in a system coupled to a thermal bath. We find that the minimum value for the physically allowed rate of heat production is determined by the growth rate, internal entropy, and durability of the replicator, and we discuss the implications of this finding for bacterial cell division, as well as for the pre-biotic emergence of self-replicating nucleic acids.
https://aip.scitation.org/doi/10.1063/1.4818538
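For reference, as I understand the paper, the bound has roughly this form (a sketch based on the abstract quoted above, not a verbatim equation):

$$\beta \langle q \rangle + \Delta S_{\mathrm{int}} \;\ge\; \ln(g/\delta)$$

where $g$ is the replicator's growth rate, $\delta$ its decay rate, $\langle q \rangle$ the heat released into a bath at inverse temperature $\beta$, and $\Delta S_{\mathrm{int}}$ its internal entropy change. The "less than six times" figure for E. coli is then just the ratio of measured heat per peptide bond to the bound: 220/42 ≈ 5.2.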
That said I think that there may be many sweet spots for a combination of ma...
In the context of AI x-risk, I'm mainly interested in (1) whether an AI could use nanotech to wipe out humanity, and (2) whether an AI could use nanotech to operate perpetually without human help.
[(2) is obviously possible once you have a few billion human-level-intelligent robots, but the question is “can nanotech dramatically reduce the amount of time that the AI is relying on human help, compared to that baseline?”. Presumably “being able to make arbitrarily more chips or chip-equivalents” would be the most difficult ingredient.]
In both cases it seems to me that the answer is “obviously yes”.
Therefore grey goo as defined in this post doesn’t seem too relevant for my AI-related questions. Like, if the AI doesn’t have a plan to make nanotech things that can exterminate / outcompete microbes living in rocks deep under the seafloor—man, I just don’t care.
None of this is meant to be a criticism of this post, which I’m glad exists, even if I’m not in a position to evaluate it. Indeed, I’m not even sure OP would disagree with my comment here (based on their main AI post).
The merit of this post is to taboo nanotech. Practical bottom-up nanotech is simply synthetic biology, and practical top-down nanotech is simply modern chip lithography. So:
1.) can an AI use synthetic bio as a central ingredient of a plan to wipe out humanity?
Sure.
2.) can an AI use synthetic bio or chip litho as a central ingredient of a plan to operate perpetually in a world without humans?
Sure.
But doesn't sound as exciting? Good.
Another merit of the OP might be in pointing out bullshit by Eliezer Yudkowsky/Eric Drexler?
It's kind of unfortunate if key early figures in the rationalist community introduce some bullshit to the memespace and we never get around to purging it and end up tanking our reputation by regularly appealing to it. Having this sort of post around helps get rid of it.
I'd also be interested in: whether nanotech could kill us as a side effect, even if the AI isn't aiming it at us.
(Imagine if e.g. there is some nanotech that does something useful but also creates long-lasting poisonous pollution as a side-effect.)
I.e. is it sufficient safety that the AI isn't trying to kill us with nanotech? Or must it also be trying to not kill us?
This might be the most erudite chemistry post ever on Less Wrong. @Eric Drexler actually comments here on occasion; I wonder what he would have to say.
I have been trying to sum up my own thoughts without getting too deeply into it. I think I would emphasize first that the capabilities of plain old DNA-based bacteria are already pretty amazing - bacteria already live everywhere from the clouds to our bloodstreams - and if one is worried about what malevolent intent can accomplish on the nanoscale, they already provide reason to be worried. And I think @bhauth (author of this post) would agree with that.
Now, regarding the feasibility of an alternative kind of nanobot, with a hard solid exterior, a vacuum interior, and mechanical components... All the physical challenges are real enough, but I'm very wary of supposing that they can't be surmounted. For example, synthesis of diamondoid parts might sound impossibly laborious; then one reads about "direct conversion of CO2 to multi-layer graphene", and thinks, could you have a little nano "sandwich maker" that fills with CO2 (purified by filter), has just the right shape and charge distribution on its inner surfaces to be a s...
I really like this post, I hope to see more like it on less wrong, and I strong-upvoted it. That said, let me now go through my thoughts on various points:
Rare materials: Yep, this is a real design constraint, but probably not that hard to design around? I'm not expecting nanobots to be made mostly out of iron.
Metal surfaces: Why not just build up the metal object in an oxygen-free environment, then add on an external passivation layer at the end? The passivation layer could be engineered to be more stable than the naturally occurring oxidation layer the metal would normally have. There would still be a minimum size for metal objects, of course. (Or more precisely, a minimal curvature.) Corrosion could definitely be a problem, but cathodic protection might help in some cases.
Agree that electrostatic motors are the way to go here. I'm not sure the power supply necessarily has to be an ion gradient, nor that the motor would otherwise need to be made from metal. Metal might be actively bad, because it allows the electrons to slosh around a lot. What about this general scheme for a motor? Wheel-shaped molecules, with sites that can hold electrons. A high voltage coming from...
I really like this post, I hope to see more like it on less wrong, and I strong-upvoted it.
Thanks, glad you liked it. You made quite the comment here, but I'll try to respond to most of it.
Metal surfaces: Why not just build up the metal object in an oxygen-free environment, then add on an external passivation layer at the end?
Without proteins carrying ions in water, this is difficult. The best version of what you're proposing is probably directed electrochemical deposition in some solvent that has a wide electrochemical window and can dissolve some metal ions. Such solvents would denature proteins.
...The passivation layer could be engineered to be more stable than the naturally occurring oxidation layer the metal would normally have. There would still be a minimum size for metal objects, of course. (Or more p
I really like this post, I hope to see more like it on less wrong, and I strong-upvoted it. That said, let me now go through my thoughts on various points...
+1. I'd add that, besides the specific objections to points here, the overall argument of the post has a major conjunction problem: it only takes one or maybe two of the points to be wrong, in order for the end-to-end argument to fall apart. And a lot of these points do not have the sort of watertight argument which establishes anywhere near 90% confidence, and 90% per-step would already be on the low side for a chain with 10+ mostly-conjunctive steps.
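To make that arithmetic concrete (a toy calculation that assumes independence between steps, which the reply below questions):

```python
# Toy conjunction arithmetic: if the headline argument needs ~10
# mostly-conjunctive steps to all hold, per-step confidence compounds.
p_step = 0.9   # assumed per-step confidence
n_steps = 10   # assumed number of mostly-conjunctive steps
p_all = p_step ** n_steps
print(f"P(all {n_steps} steps hold) = {p_all:.2f}")  # ~0.35
```

So even 90% confidence per step leaves only about one-in-three confidence in the conjunction.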
On top of that, the end-to-end argument mostly seems to argue against some rather specific pictures (e.g. diamondoids, nano-3d printing), which are a lot narrower than "grey goo" in general.
So I think the actual headline argument is pretty weak. But even so, I strong-upvoted the post, because I love the object-level analysis of the individual points on their own.
One of the contentions of this post is that life has thoroughly explored the space of nanotech possibilities. This hypothesis makes the failures of novel nanotech proposals non-independent. That said, I don’t think the post offers enough evidence to be highly confident in this proposition (the author might privately know enough to be more confident, but if so it’s not all in the post).
Separately, I can see myself thinking, when all is said and done, that Yudkowsky and Drexler are less reliable about nanotech than I previously thought (which was a modest level of reliability to begin with), even if there are some possibilities for novel nanotech missed or dismissed by this post. Though I think not everything has been said yet.
This is nice to see, I’ve been generally kind of unimpressed by what have felt like overly generous handwaves re: gray gooey nanobots, and I do think biological cells are probably our best comparison point for how nanobots might work in practice.
That said, I see some of the discussion here veering in the direction of brainstorming novel ways to do harm with biology, which we have a general norm against in the biosecurity community – just wanted to offer a nudge to y’all to consider the cost vs. benefit of sharing takes in that direction. Feel free to follow up with me over DM!
edit: green goo is plausible (link)
The AI can kill us and then take over with better optimized biotech very easily.
Nature can't do these things since they require substantial non-incremental design changes. Mosquitoes won't simultaneously get plant adapted needles + biological machinery to sort incoming proteins and cellular contents + continuous gr...
I'm puzzled that this post is being upvoted. The author does not sound familiar with Drexler's arguments in Nanosystems.
I don't think we should worry much about how nanotech might affect an AI's abilities, but this post does not seem helpful.
Organic Life is Unlikely
(list of reasons why any kind of organic life ought to be impossible, which must to some extent actually be correct because the Fermi Observation shows that it is extremely rare)
I don't really think this approach of listing a bunch of problems is a way to get a high level of certainty about this. In a certain sense, you should treat this like a math problem and insist on a formal proof that nanotech is impossible starting from the Schrödinger equation. And of course, such a proof would have the very difficult task of ruling out nanotech without ruling out actual bacteria.
self-replicating microscopic machines with some fundamental mechanistic differences from all biological life that make them superior
I think Wet Nanotech might qualify then.
Consider a minor modification to a natural microbe: a different genetic code. I.e., a codon still codes for an amino acid, but which corresponds to which could differ. (This correspondence is universal in natural life, with a few small exceptions.) Such an organism would effectively be immune to all of the viruses that would affect its natural counterpart, and no horizontal gene transfer to natural life would be possible.
One could also imagine further modifications. Greater resistance to mutations, perhaps using a more stable XNA and more repair genes. More types of amino acids. Reversed chirality of various biomolecules as compared to natural life, etc. Such an organism (with the appropriate enzymes) could digest natural life, but not the reverse.
There's nothing here that seems fundamentally incompatible with our understanding of biochemistry, but with enough of these changes, such an organism might then become an invasive species with a massive competitive advantage over natural life, ultimately resulting in an ecophagy scenario.
That has already happened naturally and also already been done artificially.
See this paper for reasons why codons are almost universal.
That third link seems to be full of woo.
Where was the optimization pressure for better designs supposed to have arisen in the "communal" phase?
Thus, we may speculate that the emergence of life should best be viewed in three phases, distinguished by the nature of their evolutionary dynamics. In the first phase, treated in the present article, life was very robust to ambiguity, but there was no fully unified innovation-sharing protocol. The ambiguity in this stage led inexorably to a dynamic from which a universal and optimized innovation-sharing protocol emerged, through a cooperative mechanism. In the second phase, the community rapidly developed complexity through the frictionless exchange of novelty enabled by the genetic code, a dynamic we recognize to be patently Lamarckian (19). With the increasing level of complexity there arose necessarily a lower tolerance of ambiguity, leading finally to a transition to a state wherein communal dynamics had to be suppressed and refinement superseded innovation. This Darwinian transition led to the third phase, which was dominated by vertical descent and characterized by the slow and tempered accumulation of complexity.
They claim that unive...
The new amino acids might be "essential" (not synthesizable internally) and might have to come in as "vitamins". This is another possible way to prevent gray goo on purpose, but hypothetically it might be possible to find ways to move that synthesis into the genome of neolife itself, if that was cheap and safe. These seem like engineering considerations that could change from project to project.
Mostly I have two fundamental points:
1) Existing life is not necessarily bio-chemically optimal because it currently exists within circumscribed bounds that can be transgressed. Those amino acids are weird and cool and might be helpful for something. Only one amino acid (and not even any of those... just anything) has to work to give "neo-life" some kind of durable competitive advantage over normal life.
2) All designs have to come from somewhere, with the optimization pressure supplied by some source, and it is not safe or wise to rely on random "naturally given" limits in the powers of systems that contain an internal open-ended optimization engine. When trying to do safety engineering, and trying to reconcile inherent safety with the design of something involving autonomous (potentia...
Just to be clear, a point which the post seems to take for granted, but which people not familiar with the topic might not think about, is:
Life is already selected for inclusive genetic fitness, so if nanobots do not unlock powerful capacities that life does not already have, then you cannot have a gray goo scenario because ordinary life will outcompete your nanobots for resources.
OP said:
I use "nanobots" to mean "self-replicating microscopic machines with some fundamental mechanistic differences from all biological life that make them superior".
I think that there are lots of plausible “invasive species from hell” scenarios where an organism is sufficiently edited so as to have no natural viruses (because its genome is weird) and no natural predators (because its sugars are weird or it has an exotic new toxin) and so on. They would still have ecological niches where they wouldn’t be able to thrive, and they would still presumably get predators and diseases eventually. But a lot of destruction could happen in the meantime, including collapsing critical ecosystems etc., and it could happen fast (years not decades, but also not weeks) if the organism is introduced in lots of places at once, I would assume.
Those scenarios are important, but they’re not “nanobots” by OP’s definition.
OK, maybe you want to build some kind of mechanical computers too. Clearly, life doesn't require that for operation, but does that even work? Consider a mechanical computer indicating a position. It has some number, and the high bit corresponds to a large positional difference, which means you need a long lever, and then the force is too weak, so you'd need some mechanical amplifier. So that's a problem.
Drexler absolutely considered thermal noise. Rod logic uses rods at right angles whose positions allow or prevent movement of other rods. That's the amplif...
Drexler's calculations concern the thermal excitation of vibrations in logic rods, not the thermal excitation of their translational motion. Plugging his own numbers for dissipation into the fluctuation-dissipation relation, a typical thermal displacement of a rod during a cycle is going to be on the order of the 0.7 nm error threshold for his proposed design in Nanosystems.
That dissipation is already at the limit (from Akhiezer damping) of what defect-free bulk diamond could theoretically achieve at the proposed frequency of operation even if somehow all thermoelastic damping, friction, and acoustic radiation could be engineered away. An assembly of non-bonded rods sliding against and colliding with one another ought to have something like 3 orders of magnitude worse noise and dissipation from fundamental processes alone, irrespective of clever engineering, as a lower bound. Assemblies like this in general, not just the nanomechanical computer, aren't going to operate with nanometer precision at room temperature.
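For intuition, the static (equipartition) half of that estimate fits in a few lines; the stiffness values below are assumed round numbers rather than Drexler's, and the argument above runs through the fluctuation-dissipation relation, which this simplification leaves out:

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0          # room temperature, K
x_err = 0.7e-9     # m, the error threshold cited above

# Equipartition: (1/2) k_eff <x^2> = (1/2) kB T  =>  x_rms = sqrt(kB T / k_eff).
# Stiffness needed to keep the rms displacement at the error threshold:
k_min = kB * T / x_err**2
print(f"k_eff must exceed ~{k_min:.1e} N/m")  # ~8e-3 N/m

# Thermal rms displacement at a few assumed stiffnesses:
for k_eff in (0.01, 0.1, 1.0):  # N/m, assumed values
    x_rms = math.sqrt(kB * T / k_eff)
    print(f"k_eff = {k_eff:4.2f} N/m -> x_rms = {x_rms * 1e9:.2f} nm")
```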
I don't think this is the right mind frame, thinking about how something specific appears too hard or even infeasible. A better frame is "say, you are given $100B/year, can hire the best people in the world, and have 10 years to come up with viable self-replicating nanobots, or else we all die, how would you go about it?"
That framing is unnatural to me. I see "solving a problem" as being more like solving several mazes simultaneously. Finding or seeing dead ends in a maze is both a type of progress towards solving the maze and a type of progress towards knowing if the maze is solvable.
This means that the reactions you can do are limited to what organic compounds can do at relatively low temperatures - and existing life can pretty much do anything useful in that category already.
We find that bacteria sometimes do manage to work at higher temperatures as well. Thermus aquaticus, which gave us Taq polymerase, for example works at higher temperatures than most other bacteria.
Generally, it's very hard for eukaryotes or prokaryotes to evolve the usage of new amino acids. It's unclear what we could do with artificial designed proteins when ...
None of this argues that creating grey goo is an unlikely outcome, just that it's a hard problem. And we have an existence proof of at least one way to make grey goo that covers a planet: life-as-we-know-it, which did exactly that.
But solving hard problems is a thing that happens, and unlike the speed of light, this limit isn't fundamental. It's more like the "proofs" that heavier than air flight is impossible which existed in the 1800s, or the current "proofs" that LLMs won't become AGIs - convincing until the counterexample exists, but not at all indicative that no counterexample does or could exist.
Thanks so much for this post, I've been wishing for something like this for a long time. I kept hearing people grumbling about how EY & Drexler were way too bullish about nanotech, but no one had any actual arguments. Now we have arguments & a comment section. :)
I object to the implication that Eliezer and Drexler have similar positions. Eliezer seems to seriously underestimate how hard nanotech is. Drexler has been pretty cautious about predicting how much research it would require.
Huh, interesting. I am skeptical. Drexler seems to have thought that ordinary human scientists could get to nanotech in his lifetime, if they made a great effort. Unless he's changed his mind about that, that means he agrees with Yudkowsky about nanotech, I think. (As I interpret him, Yudkowsky takes that claim and then adds the additional hypothesis that, in general, superintelligences will be able to do research several OOMs faster than human science, and thus e.g. "thirty years" becomes "a few days." If Drexler disagrees with this, fine, but it's not a disagreement about nanotech it's a disagreement about superintelligence.)
Can you say more about what you mean?
If it was advantageous to use structures of those inside cells for reactions somehow, then some organisms would already do that.
Not necessarily. The space of advantageous biologically possible structural configurations seems to me to be intuitively larger than the space of useful configurations currently known to be in use.
In order for a structure to be evolutionarily feasible, it must not only be advantageous but also there must be a path of individually beneficial (or at minimum not harmful) small steps in between it and currently existing structur...
low commit here but I've previously used nanotech as an example (rather than a probable outcome) of a class of somewhat-known unknowns - to portray possible future risks that we can imagine as possible without being fully conceived. So while grey goo might be unlikely, the precursor to grey goo - a pretty intelligent system trying to mess us up - is the thing to be focused on, and this is just one of its many possibilities that we can even imagine
Yes, I've actually seen people say that, but cells do use myosin to transport proteins sometimes. That uses a lot of energy, so it's only used for large things.
Cells have compartments with proteins that do related reactions. Some proteins form complexes that do multiple reaction steps. Existing life already does this to the extent that it makes sense to.
Humans or AI designing a transport/ compartmentalization system can go "how many compartments is optimal". Evolution doesn't work like this. It evolves a transport system to transport one specif...
>what if a superintelligence finds something I didn't think of?
I'm not a superintelligence, and I know of at least one plausible "green goo" scenario involving rogue microbes.
In this post, I use "nanobots" to mean "self-replicating microscopic machines with some fundamental mechanistic differences from all biological life that make them superior". Various specific differences from biological cells have been proposed. I've organized this post by those proposed differences.
1. localized melting
Most 3d printers melt material to extrude it through a nozzle. But large heat differences can't be maintained on a small scale: heat diffuses across micrometer distances almost instantly, so a localized melt zone is impossible.
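A rough diffusion-time estimate makes the point; water's thermal diffusivity is used here as a stand-in for a wet environment, and it's the scaling with size that matters, not the exact numbers:

```python
# Characteristic heat-diffusion time: t ~ L^2 / alpha.
alpha = 1.4e-7  # m^2/s, thermal diffusivity of water (approximate)

for L in (1e-6, 1e-3):  # 1 micrometer vs 1 millimeter
    t = L**2 / alpha
    print(f"L = {L:.0e} m -> t ~ {t:.1e} s")
# ~7 microseconds at 1 um: a local hot spot equilibrates with its
# surroundings far faster than any plausible extrusion step.
```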
2. rare materials
If a nanobot consists largely of something rare, getting more of that material to replicate is difficult outside controlled environments.
Growth of algae and bacteria is often limited by availability of iron, which is more common than most elements. Iron is the active catalytic site of many enzymes, and is needed by all known life. The growth of something made mostly of iron would be far more limited, and other metals have more limited availability than that.
3. metal surfaces
Melting material isn't feasible per (1), so material must be built up by adding to the surface. That means the interior of a structure must be chemically the same as what was once its exterior surface.
Metal objects have a protective oxide layer. In an air or water environment, there's no way to add individual (eg) aluminum atoms to a metal surface and end up with metallic aluminum inside; the whole thing will typically be aluminum oxide or hydroxide.
Corrosion is also a proportionately bigger problem for smaller objects. A micrometer-scale metal structure will rapidly corrode, perhaps undergoing some Ostwald ripening along the way.
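To see how badly oxidation scales downward, here's a toy geometric estimate; the oxide thickness is a typical figure for aluminum's native layer, and the calculation ignores the volume change on oxidation:

```python
# Fraction of a metal sphere consumed by a surface oxide shell of thickness t.
t = 3e-9  # m, assumed native oxide thickness (~2-4 nm is typical for Al)

for r in (10e-9, 50e-9, 1e-6, 1e-3):  # particle radius, m
    frac = 1 - max(0.0, (r - t) / r) ** 3
    print(f"r = {r:.0e} m -> {frac:.1%} oxidized")
# A 10 nm particle is about two-thirds oxide; a millimeter-scale part
# barely notices the same layer.
```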
4. electric motors
Normal "electric motors" are all electromagnetic motors, typically using ferromagnetic cores for windings. Bigger is better for those, up to at least the point where you can saturate cores.
On a very small scale, it's better to use electrostatic motors, and you can make MEMS electrostatic motors with lithography. (Not just theoretically; people actually do that.) But, per (2) & (3), bulk metals are a problem for a self-replicating system. If you need to have compounds floating around, electrical insulation is also difficult. You also need some way to switch current, and while small semiconductor switches are possible, per (3) building them is difficult.
Instead of electrostatic charge of metal objects, it's better to use ions. Ions could bind to some molecule, and electrostatic forces could cause that to rotate relative to another molecule. Hmm, this is starting to sound rather familiar.
5. inorganic catalysts
Lab chemistry and drug synthesis often use metal catalysts in solution, perhaps with a small ligand. Palladium acetate is used for making drugs, but it's very toxic to humans, because it...catalyzes reactions.
Life requires control of what happens, which means selective catalysis of reactions, which means molecules need to be selectively bound, which requires specific arrangements of hydrogen bond donors and acceptors and so on, and that requires organic compounds. Controlled catalysis requires organic compounds.
6. no liquid
Any self-replicating nanobot must have many internal components. If the interior is not filled with water, those components will clump together and be unable to move around, because electrostatic & dispersion interactions are proportionately much stronger on a small scale. The same is true to a lesser extent for the nanobots themselves.
Vacuum is even worse. Any self-replicating cell must move material between outside and multiple compartments. Gas leakage by the transporters would be inevitable. Cellular vacuum pumps would require too much energy and may be impossible. Also, strongly binding the compounds used (eg CO2) to carriers at every step would require too much energy. ("Too much energy" means too much to be competitive with normal biological processes.)
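To put a number on the pumping cost, here's a minimal sketch using the ideal (reversible, isothermal) work per molecule and the usual ~20 kT per ATP in-cell figure; real pumps would do worse:

```python
import math

# Minimum work to move one gas molecule against a pressure ratio:
# w = kT * ln(p_out / p_in).
atp_kT = 20  # ~20 kT released per ATP hydrolysis in vivo (rough figure)

for ratio in (1e3, 1e6, 1e9):
    w_kT = math.log(ratio)
    print(f"pressure ratio {ratio:.0e}: {w_kT:4.1f} kT"
          f" (~{w_kT / atp_kT:.2f} ATP) per molecule removed")
# Modest per molecule, but a leaky compartment pays this continuously,
# for every molecule that gets in, forever.
```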
7. no water
Most enzymes maintain their shape because the interior is hydrophobic and the exterior is hydrophilic. If some polar solvent is used instead of water, then this stability is weakened; most organic solvents will denature most proteins. If you use a hydrophobic solvent, it can't dissolve ions or facilitate many reactions.
Ester and amide bonds are the best ways to reversibly connect organic molecules. Forming or breaking either involves consuming or releasing water or an alcohol. Alcohols have no advantages over water in terms of the conditions where they're stable.
Water is by far the best choice of liquid. The effectiveness of water for dissolving ions is unique. Water can help catalyze reactions by donating and accepting hydrogen. Water is common on Earth, easy to get and easy to maintain levels of.
8. high temperatures
Per (5) you need organic molecules to selectively catalyze reactions.
Enzymes need to be able to change shape somewhat. Without conformational changes, enzymes can't grab their substrate well enough. Without conformational changes, there's no way to drive an unfavorable reaction with a favorable reaction, and that's necessary.
Because enzymes must be able to do conformational changes, they need to have some strong interactions and some weaker interactions that can be broken or shifted. Those weaker interactions can't hold molecules together at high temperatures. Some life can grow at 100 °C but 200 °C isn't possible.
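For a rough sense of the numbers (the hydrogen-bond range below is a coarse literature ballpark, and this ignores the entropic side of unfolding):

```python
AVOGADRO = 6.022e23
kB = 1.380649e-23  # J/K

def kT_kJ_per_mol(T):
    """Thermal energy kT in kJ/mol at temperature T (kelvin)."""
    return kB * T * AVOGADRO / 1000

h_bond = 8.0  # kJ/mol, assumed midpoint of a rough 4-13 kJ/mol range
              # for a protein hydrogen bond's net contribution

for T in (300, 373, 473):  # ~27 C, 100 C, 200 C
    kT = kT_kJ_per_mol(T)
    print(f"T = {T} K: kT = {kT:.2f} kJ/mol, H-bond/kT ~ {h_bond / kT:.1f}")
# Each weak contact is worth only a few kT even at room temperature;
# folds survive because many contacts cooperate, and the margin keeps
# shrinking as temperature rises.
```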
This means that the reactions you can do are limited to what organic compounds can do at relatively low temperatures - and existing life can pretty much do anything useful in that category already.
9. diamond
It's possible to make molecules containing diamond structures at ambient temperature. The synthesis involves carbocations or carbanions or carbon radicals, which are all very unstable. The yields are mediocre and the compounds involved are reactive enough to destroy any conceivable enzymes.
Some people have simulated structures that could theoretically place carbon atoms on diamond in specific positions at ambient temperature. Here's a paper on that. Because diamond is so kinetically stable, the synthesis must be exothermic, with high-energy intermediates. So, high vacuum is required, which per (6) doesn't work.
Also, the chemicals consumed to make those high-energy intermediates are too reactive to plausibly be made by any enzyme-like system. And per (1) & (8) you can't use high temperatures to make them on a small scale.
Also, there is no way to later remove carbon atoms from the diamond at low temperature. How, then, would a nanobot with a diamond shell replicate?
10. other rigid materials
CaCO3, silica, and apatite are much easier to manipulate than diamond. They're used in (respectively) mollusk shells, diatom frustules, and bone.
If it was advantageous to use structures of those inside cells for reactions somehow, then some organisms would already do that. Enzymes generally must do conformational changes to catalyze reactions. A completely rigid diamond shape with functional groups would not make a particularly good enzyme.
And of course, just a small solid shape, with nothing attached to it - even if you can make arbitrary shapes - isn't useful for much besides cell scaffolding, and even then, building diatom frustules out of linked diamond pieces seems worse than what they do now with silica. Sure, diamond is even stronger than silica, but that doesn't matter. And that's assuming you can make interlocking diamond pieces, which you can't.
11. 3d structures
Yes, believe it or not, I've seen people claim that cells can't build 3d structures. But cells have eg microfilaments.
Again, enzymes must be able to do conformational changes to work. At ambient temperature, that means they're shaking violently, and if proteins are flopping around constantly, you can't have a rigid positioner move to a fixed position and assume you're placing something correctly.
What you can do is hold onto the end of a linear chain as you extrude it, then fold up that chain into a 3d structure. What you can do is use an enzyme that binds to 2 folded proteins and connects them together. And those are methods that are used by all known life.
12. active transport
Yes, I've actually seen people say that, but cells do use myosin to transport proteins sometimes. That uses a lot of energy, so it's only used for large things.
13. combining reaction steps
Cells have compartments with proteins that do related reactions. Some proteins form complexes that do multiple reaction steps. Existing life already does this to the extent that it makes sense to.
14. positional nanoassembly
The above sections should be enough background to finally cover what's perhaps the most central concept of the genre of proposals called "nanobots".
Some people see 3d printers and CNC routers, and don't understand enzymes or what changes on a molecular scale very well, and think that cells that work more like 3d printers or gantry cranes would be better. Now, an FDM 3d printer has several components:
Protein-sized position sensors don't exist.
Molecular linear motors do exist, but 1 ATP (or other energy carrier) is needed for every step taken; a rough cost sketch follows below.
If you want to catalyze reactions, you need floppy enzymes. Even if you attach them to a rigid bed, they'll flop all over the place. (On a microscopic scale, normal temperatures are like a macroscopic 3d printer being shaken violently.)
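Here's the rough cost sketch promised above, using kinesin-like numbers (about 8 nm of travel per ATP) and the standard ~4 ATP-equivalents per peptide bond for the ribosome; a toy comparison, not a design calculation:

```python
# Rough cost of motor-driven positioning with kinesin-like numbers.
step_nm = 8            # ~8 nm of travel per ATP (textbook ballpark)
travel_nm = 2 * 1000   # one 1-micrometer round trip
atp_per_trip = travel_nm / step_nm
print(f"~{atp_per_trip:.0f} ATP per 1 um round trip")  # ~250

# A ribosome spends ~4 ATP-equivalents per amino acid added (2 for
# charging the tRNA, 2 GTP during elongation), so one such trip costs
# about as much as extending a protein by ~60 residues.
print(f"equivalent to ~{atp_per_trip / 4:.0f} peptide bonds")
```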
Suppose you're printing diamond somehow. You need a seed that's rigidly connected to the printing mechanism. The connection would need to be removable in order to detach the product from the printer. In a large 3d printer, you can peel plastic off a metal surface, but that won't work for covalently bonded diamond. You would need a diamond seed with functional groups that allow it to be grabbed, and since you're not starting with a sheet, you'd need a 5-axis printer arm.
Drexler wrote a book that proposed mechanical computers which control positioner arms by lever assemblies. An obvious problem there is mechanical wear - yes, some MEMS devices have adequate lifetimes, but those just vibrate; their sliding friction is negligible. But suppose you can solve this by making everything out of diamond or using something like lubricin.
So, suppose you have a mechanical computer that moves arms that control placement of something. Diamond is impractical, so let's say silica is being placed. Whatever you're placing, you need chemical intermediates that go on the arms, and you need energy to power everything. Making energy from fuel or photosynthesis requires more specific chemicals, not just specific arrangements of some solid. To do the reactions needed for energy and intermediate production, you need things that can do conformational changes - enzymes.
Without conformational changes, enzymes can't grab their substrate well enough. Without conformational changes, there's no way to drive an unfavorable reaction with a favorable reaction, and that's necessary. You can't just use rigid positioners to drive reactions that way, because they have no way to sense that the reaction has happened or not...except through conformational changes of a flexible enzyme-like tooltip on the positioner, which would have the same issues here.
At ambient temperature, enzymes that can do conformational changes are shaking violently, and if proteins are flopping around constantly, you can't have a rigid positioner move to a fixed position and assume you're placing something correctly. Since you need enzymes, you need a ribosome and production of monomers - and amino acids are the best choice: the available chemical elements are limited, and there is no superior alternative.
Since all that is still needed, what are the positioners actually accomplishing? They'd only be needed to build positioners. The whole thing would be a redundant side system to enzymatic life.
OK, maybe you want to build some kind of mechanical computers too. Clearly, life doesn't require that for operation, but does that even work? Consider a mechanical computer indicating a position. It has some number, and the high bit corresponds to a large positional difference, which means you need a long lever, and then the force is too weak, so you'd need some mechanical amplifier. So that's a problem.
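To spell out the lever problem, a toy model (assuming one output lever per bit and a fixed input force; real designs would differ, but the force-displacement trade-off is generic):

```python
# Reading out bit k of a mechanical register as a displacement requires
# a lever gain of 2^k, and a lever trades force for displacement, so
# the available output force falls as 2^-k.
f_in = 1.0  # input force, arbitrary units (assumed constant per bit)

for bit in range(0, 12, 2):
    gain = 2 ** bit       # displacement multiplier the bit needs
    f_out = f_in / gain   # force left at the lever's output
    print(f"bit {bit:2d}: displacement x{gain:5d}, force x{f_out:.4f}")
# By bit 10 the output force is ~0.1% of the input, hence the need for
# some mechanical amplifier in the chain.
```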
Consider also that as vacuum is impractical per (6), and enzymes and chemical intermediates are needed, you'd have stuff floating around. So you have all these moving parts, they need to interface with the enzymes so they can't just be separated by a solid barrier, and stuff could get in there and jam the system.
The problems are myriad, and I'd be well-positioned to see solutions if any existed. But suppose you solve them and make tiny mechanical computers in cells - what's the hypothetical advantage of that? The ability to "do computation"? Brains are more energy-efficient than semiconductor computers for many tasks, and the total embodied computation in cells is far greater than that of neurons' occasional spikes.
15. everything else
When someone has an idea about something cells could do, it's often reasonable to presume that it's either impossible, useless, or already used by some organism - but there are obviously cases where improvement is possible. It's certainly physically possible to correct harmful mutations with genetic engineering. There are also ongoing arms races between pathogens and hosts where each step is an informational problem.
But what about more basic mechanisms? Have basic mechanisms for typical Earth conditions been optimized to the point that no improvement is possible? That depends on their complexity. For example, glycolysis and the citric acid cycle are optimal, but here's a more-efficient CO2 fixation pathway I designed. (Yes, you'd want to assimilate the glycolaldehyde synthons by (erythrose 4-phosphate -> glucose 6-phosphate -> 2x erythrose 4-phosphate). I left that as a way for people to show they understood my blog.) See also my post on the origin of life for some reasons life works the way it does. (You can see I'm a big blogger - that's a good career plan, right?)
I wrote this post now as a sort of side note to my post on AI risks. But...what if a superintelligence finds something I didn't think of?
I know, right? What if it finds a way to travel faster than light and sets up in Alpha Centauri, then comes back? What if it finds a way to make unlimited free energy? What if it finds a friendly unicorn that grants it 3 wishes?
There's a gap between seeing that something is conceivably possible and seeing how to do it, and that's the only reason that things like research and planning and prediction about the future are possible. I understand Eliezer Yudkowsky thinks that someone a little smarter than von Neumann (who didn't invent the "von Neumann architecture" or half the other stuff he took credit for, but that's off topic) would be able to invent "grey goo" type nanobots. If that was the case, even I would at least be able to see how it would be done.
To be clear, I'm not trying to imply that a superintelligent AI wouldn't have any plausible route to taking over societies or killing most of humanity or various other undesirable outcomes. I'm only saying that worrying about "grey goo" is a waste of time. On the other hand, Smalley was mad at Drexler for scaring people away from research into carbon nanotubes, but carbon nanotubes would be a health hazard if they were used widely, and the applications Smalley hoped for weren't practical. Perhaps I would thank Drexler if he had actually pushed people away from working on carbon nanotubes, but he didn't.