Molecular nanotechnology, or MNT for those of you who love acronyms, seems to be a fairly common trope on LW and in related literature. It's not really clear to me why. In many of the examples of "How could AIs help us" or "How could AIs rise to power," phrases like "cracks protein folding" or "making a block of diamond is just as easy as making a block of coal" are thrown about in ways that make me very, very uncomfortable. Maybe it's all true, maybe I'm just late to the transhumanist party and the obviousness of this information was with my invitation that got lost in the mail, but seeing all the physics swept under the rug like that sets off every crackpot alarm I have.
I must post the disclaimer that I have done a little bit of materials science, so maybe I'm just annoyed that you're making me obsolete, but I don't see why this particular possible future gets so much attention. Let us assume that a smarter-than-human AI will be very difficult to control and represents a large positive or negative utility for the entirety of the human race. Even given that assumption, it's still not clear to me that MNT is a likely element of the future. It isn't clear to me that MNT is physically practical. I don't doubt that it can be done in principle; I don't doubt that very clever metastable arrangements of atoms with novel properties can be dreamed up. Indeed, that's my day job. But I have a hard time believing that the only reason you can't make a nanoassembler capable of arbitrary manipulations out of a handful of bottles you ordered from Sigma-Aldrich is that we're just not smart enough. Manipulating individual atoms means climbing huge binding energy curves; it's an enormously steep, enormously complicated energy landscape, and the Schrödinger equation scales very, very poorly as you add additional particles and degrees of freedom. Building molecular nanotechnology seems to me roughly equivalent to making arbitrary lego structures by shaking a large bin of lego in a particular way while blindfolded. Maybe a superhuman intelligence is capable of doing so, but it's not at all clear to me that it's even possible.
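To put a rough number on the scaling complaint above (this is my illustration, not the original poster's): for N particles with d single-particle basis states each, the composite Hilbert space has dimension d^N, so even storing one many-body wavefunction exactly grows exponentially with particle count. A minimal sketch, assuming distinguishable particles and complex128 amplitudes at 16 bytes each:

```python
# Why exact quantum simulation scales badly: the state space of N particles
# with d basis states each has dimension d**N, so memory for a single
# state vector (16 bytes per complex128 amplitude) blows up exponentially.

def hilbert_dim(n_particles: int, d: int = 2) -> int:
    """Dimension of the composite Hilbert space for n distinguishable particles."""
    return d ** n_particles

def wavefunction_bytes(n_particles: int, d: int = 2) -> int:
    """Memory needed to store one exact state vector, 16 bytes per amplitude."""
    return 16 * hilbert_dim(n_particles, d)

if __name__ == "__main__":
    for n in (10, 30, 50):
        print(f"N={n}: dim={hilbert_dim(n):.3e}, "
              f"memory={wavefunction_bytes(n) / 1e9:.3e} GB")
```

At N=50 two-state particles the state vector alone is on the order of 10^7 GB, which is why practical quantum chemistry leans on approximations (DFT, coupled cluster) rather than exact solutions, and why "just solve the physics" is not a free move even for a very smart agent.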
I assume the reason that MNT is added to a discussion on AI is because we're trying to make the future sound more plausible via adding burdensome details. I understand that the conjunction of AI and MNT is less probable than either AI or MNT alone, but that the extra detail is supposed to make the story sound more plausible. This is precisely where I have difficulty. I would estimate the probability of molecular nanotechnology (in the form of programmable replicators, grey goo, and the like) as lower than the probability of human or superhuman level AI. I can think of all sorts of objections to the former, but very few objections to the latter. Including MNT as a consequence of AI, especially including it without addressing any of the fundamental difficulties of MNT, I would argue harms the credibility of AI researchers. It makes me nervous about sharing FAI literature with people I work with, and it continues to bother me.
I am particularly bothered by this because it seems irrelevant to FAI. I'm fully convinced that a smarter-than-human AI could take control of the Earth via less magical means, using time-tested methods such as manipulating humans, rigging elections, making friends, killing its enemies, and generally being only marginally more clever and motivated than a typical human leader. A smarter-than-human AI could out-manipulate human institutions and out-plan human opponents with the sort of ruthless efficiency with which modern computers beat humans at chess. I don't think convincing people that smarter-than-human AIs have enormous potential for good and evil is particularly difficult, once you can get them to concede that smarter-than-human AIs are possible. I do think that waving your hands and saying "superintelligence" at things that may be physically impossible makes the whole endeavor seem less serious. If I had read the chain of reasoning smart computer->nanobots before I had built up a store of good-will from reading the Sequences, I would have almost immediately dismissed the whole FAI movement as a bunch of soft science fiction, and it would have been very difficult to get me to take a second look.
Put in LW parlance, suggesting things not known to be possible by modern physics without detailed explanations puts you in the reference class "people on the internet who have their own ideas about physics". It didn't help, in my particular case, that one of my first interactions on LW was in fact with someone who appears to have their own view about a continuous version of quantum mechanics.
And maybe it's just me. Maybe this did not bother anyone else, maybe it's an incredible shortcut for getting people to realize just how different a future a greater-than-human intelligence makes possible, and maybe there is no better example. It does alarm me though, because I think that physicists, and the kind of people who notice and get uncomfortable when you start invoking magic in your explanations, may be exactly the kind of people FAI is trying to attract.
I'm commenting a few days after the main flurry of discussion and just wanted to raise a concern about how there seems to be a conflation in the OP and in many of the comments between (1) effective political advocacy among ignorant people who will stick with the results that fall out of the absurdity heuristic even when it gives false results and (2) truth seeking analysis based on detailed mechanistic considerations of how the world is likely to work.
Consider the 2x2 grid where, on one axis, we're working in either an epistemically unhygienic advocacy frame where it's OK to say false things that get people to support the right conclusion or policy (versus a truth-seeking frame where you grind from the facts to the conclusion with high-quality reasoning processes at each stage, for the sake of figuring stuff out from scratch), and, on the second axis, Leplen's dismissal of MNT is coherently founded and on the right track (versus it just being a misfiring absurdity heuristic).
I think in this forum it can be generally assumed that "FAI is important" as the background conclusion that is also a message that it is probably beneficial to advocate on behalf of.
Leplen's claim here is a claim about Leplen's historically contingent reasoning processes rather than about the object-level workability of MNT, and it is raised as though Leplen is a fairly normal person whose historically likely reaction to MNT is common enough to be indicative of how it will play with many other people. So the part of the 2x2 grid it is from is firmly "advocacy rather than truth," mostly assuming "Leplen's reaction is justified." I think it is worth spelling out what it would look like to explore the other three boxes in the 2x2 grid.
If we retain the FAI-promoting advocacy perspective but imagine that Leplen is wrong because "MNT magic" is actually something future scientists or an AGI could pull together and deploy, then the substantive cost to the world might be that courses of action that are important if MNT is a real concern may not be well addressed by the group of people mobilized by "just FAI, not MNT" advocacy. If basically the same AGI-safety strategy is appropriate whether or not an AGI would head towards MNT as a lower bound on the speed and power of the weapons it could invent, then dropping MNT from the advocacy can't really harm anything. If the appropriate policies are different enough that lots of people convinced of "FAI without MNT" would object to "FAI with MNT" protection measures, then dropping MNT from the advocacy could be net harmful to the world.
If we retain the idea that Leplen's dismissal of MNT is coherent and justified, but flip over to a truth-seeking frame (while retaining awareness of a background belief by many old-time LWers that MNT is probably important to think about), then the arguments offered to help actually change people's minds for coherent reasons seem lacking. From a truth-seeking perspective it doesn't matter what turns people on or off if their opinions aren't themselves important indicators of how the world actually is. The only formal credential offered is in materials science, and this is raised from within an activist advocacy frame where Leplen admits that motivated cognition could account for their attitude toward MNT, out of defensiveness and a desire to not have skills become obsolete. Lots of people don't want to become obsolete, so this is useful evidence for figuring out how to convince similarly fearful people of the importance of FAI by dropping other things that might make FAI advocacy harder. But the claim that "MNT is unimportant based on object-level science considerations" will be mostly unmoved by the advocacy-level arguments here if someone already has chemistry experience, and has read Nanosystems, and still thinks MNT matters. Something else would need to be offered than hand waving and a report about emotional antibodies to a certain topic. So presuming that Leplen's dismissal of MNT is on track, and that many LWers think MNT is important, it seems like there's an education gap, where the LW mainstream could be significantly helped by learning the object-level reasoning that justifies Leplen's dismissal of MNT. Like where (presuming that it went off the rails somewhere) did Nanosystems go off the rails?
The fourth and final box of the 2x2 grid is for wondering what things would look like if we were in a truth-seeking and communal learning mode (not worried about advocacy among random people) and Leplen was wrong to dismiss MNT. In this mode the admixture of advocacy and truth while taking Leplen seriously seems pretty bad, because the very local educational process this week on this website would be going awry. It is understandable that Leplen's reaction is relevant to one of LW's central advocacy issues, and Leplen seems friendly to that project... and yet from the perspective of an attempt to build community knowledge in the direction of taking serious things seriously and believing true things for good reasons while disbelieving false things when the evidence pushes that way... the conflation is mildly disturbing.
This is a bad argument. It doesn't even take into account the distinction between bootstrapping from scratch to a single working general assembler versus how things would work assuming the key atoms could be put into the right places once (like whether, and how expensively, it could build copies of itself). The "bootstrap difficulty" question and the "mature scaleout" question are different questions, and our discussion seems to be papering over the distinction. The badness of this argument was gently pointed out by drethelin, but somehow not in a way that was highly upvoted, I suspect because it didn't take the (probably?) praiseworthy advocacy concerns into account.
To be clear, I'm friendly to the idea that MNT might not be physically possible, or if possible it might not be efficient. I'm not a huge expert here at all and would like to be better educated on the subject. And I'm friendly to the idea of designing AGI advocacy messages that gain traction and motivate people to do things that actually improve the world. I'm just trying to point out that mixing both of these concerns into the same rhetorical ball seems to do a disservice to both...
Which is pretty ironic, considering that "mixing FAI and MNT together seems politically problematic" seems to be the general claim of the article. Mostly I guess I'm just trying to say that this article is even more complicated because now instead of sometimes doping the FAI discussions with MNT, we're fully admixing FAI and MNT and political advocacy.
It is possible to have expert experience in chemistry and to find MNT preposterous for reasons derived from that experience. In fact, it's a common reaction; not totally universal, but very common. And the second quote from leplen sums up why, quite nicely and accurately. Even if one trusts the calculations in Nanosystems regarding the stability of the various structures on display there, they will still look like complete fantasy to someone used to ordinary methods of chemical synthesis, which really do resemble "shaking a large bin of lego in a particular way while blindfolded."