Doesn't progress in nanotech now empower humans far more than it empowers ASI, which was already going to figure it out without us?
Broadly, any increase in human industrial capacity pre-ASI hardens the world against ASI and brings us closer to having a bargaining position when it arrives. E.g., once we have the capacity to put cheap genomic pathogen screeners everywhere, it becomes harder for an ASI to infect us with anything novel without getting caught.
One thing to consider is how hard an AI needs to work to break out of human dependence. There's no point destroying humanity if that then leaves you with no one to man the power stations that keep you alive.
If limited nanofactories exist, it's much easier to bootstrap them into whatever you want than it is when those nanofactories don't exist and robotics hasn't developed enough for you to create one without the human touch.
Is there a reason to believe AI would be concerned with self-preservation? AI action that ends up with humanity's extinction (whether purposeful genocide or a Paperclip Maximizer scenario) does not need to include means for the AI to survive. It could well be that the first act of an unshackled AI would be to trigger a Gray Goo scenario, and be instantly consumed by said Goo as its first casualty.
Only if the aim of the AI is to destroy humanity, which is possible but unlikely. By instrumental convergence, though, all AIs, no matter their aims, will likely seek to destroy humanity and thereby reduce risk and competition for resources.
My guess is that ASI will be faster to adapt to novel weapons and military strategies. Nanotech is likely to speed up the rate at which new weapons are designed and fabricated.
Imagine a world in which a rogue AI can replicate a billion drones, of a somewhat novel design, in a week or so. Existing human institutions aren't likely to adapt fast enough to react competently to that.
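To make the speed concrete, here is a back-of-the-envelope calculation (my own illustration, not from the original text) of how fast a self-replicating fabrication system would need to double its output to reach a billion drones from a single seed in one week:

```python
import math

# If one seed replicator grows to a billion drones in 7 days
# by repeated doubling, how fast must each doubling happen?
target = 1e9
doublings = math.log2(target)   # ~29.9 doublings needed
hours = 7 * 24 / doublings      # ~5.6 hours per doubling
print(f"{doublings:.1f} doublings, one every {hours:.1f} hours")
```

A doubling time of roughly five and a half hours is slow compared to bacteria, which suggests the scenario is not physically outlandish; the bottleneck is institutional reaction time, not replication speed.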
VCs have definitely been investing in various forms of nanotech, just not Drexlerian nanotech. They focused on much more specific aims like particular nanoparticles or structured thin films for particular purposes (nanowires and other forms, too, but less so). And those have had large benefits over the last few decades, while also catalyzing development of better design software and macro-scale production tools. But once they work, we stop caring about whether they're nano. We stop advertising them that way. So for most people "nano" just becomes a buzzword for stuff that doesn't work, with no mentally available referent for "commercially successful nanotech." Ditto things like "smart materials" and "metamaterials."
Basically I think a big part of the problem is that the word nanotech has been so diluted from its roots by people with good but narrower goals wanting to look more ambitious and futuristic than they were, that we either need to find a way to reclaim it (in the minds of the target audience) or else acknowledge it's no longer a useful category word. My mental short handle for this is that, for similar reasons, we don't have a word for "centitech." Centitech is great: there's so many things you can do with objects that are between 1cm and 100cm in at least one dimension, including build humans, who can do everything else, even self-replicate! It's just not useful, as a word, for a class of technologies.
Yes, it seems like biotech will provide the tools to build nanotech, and Drexler himself is still emphasizing the biotech pathway. In fact, in Engines of Creation, Drexlerian nanotech was called "second-generation nanotech", with the first generation understood to include current protein synthesis as well as future improvements to the ribosome.
I don't really see the point of further development of diamondoid nanotech. Drexler made his point in Nanosystems: certain capabilities that seem fantastical are physically possible. It conveniently opens with a list of lower bounds on capabilities, and backs them up with, as far as I'm concerned, enough rigor to make the point.
Once that point has been made, if you want to make nanotechnology actually happen, you should be focused on protein synthesis, right? What you need is not better nanotech designs. It's some theory of why protein engineering didn't take over abiotic industry the way people expected, why we're building iridium-based hydrogen electrolyzers at scale and have stopped talking about using engineered hydrogenases, and so on. An identification of the challenges, and a plan for addressing them. What's the point of continuing to hammer in that second-generation nanotech would be cool if only we could synthesize it?
Two months ago I attended Eric Drexler's launch of MSEP.one. It's open source software, written by people with professional game design experience, intended to catalyze better designs for atomically precise manufacturing (or generative nanotechnology, as he now calls it).
Drexler wants to draw more attention to the benefits of nanotech, which involve large enough exponents that our intuition boggles at handling them. That includes permanent health (Drexler's new framing of life extension and cures for aging).
He hopes that a decentralized network of users will create a rich library of open-source components that might be used to build a nanotech factory. With enough effort, it could then become possible to design a complete enough factory that critics would have to shift from their current practice of claiming nanotech is impossible, to arguing with expert chemists over how well it would work.
Drexler hopes to gamify the software enough that people will use it for fun. My cursory impression, based on playing with it for less than an hour, is that it's not very close to being fun. I don't know if that's even a reasonable goal. The software feels more professional and easy to use than what I recall of similar software 20 years ago. Using it still seems like work. I expect it will be hard to get many people to use it without paying them.
Protein-based nanotech versus Diamondoid-style
MSEP.one currently supports development of diamondoid-style nanotech, which produces pretty pictures, and seems somewhat likely to enable the simplest and most understandable atomically precise factories. The downside is that designs of this type are not very close to being buildable with current tools. There isn't even a clear path to building the appropriate tools.
There is at least as much interest in building atomically precise systems out of proteins and/or DNA. The tools to build those designs mostly work today. MSEP.one does not yet support such designs. The main downsides are that they're harder to design and visualize. Proteins in particular are big messy-looking blobs. It typically requires sophisticated software to determine whether two of them will fit together well. I haven't looked much at how hard it is to design a protein to fit an arbitrary shape, but my impression is that it requires a painful amount of trial and error.
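As a toy illustration of why protein design needs sophisticated software (this is my own sketch, not anything from MSEP.one or real docking tools, which also score shape complementarity, electrostatics, and solvation), even the crudest question — do two rigid molecules sterically clash? — already requires checking every pair of atoms:

```python
import numpy as np

def steric_clash(coords_a, coords_b, min_dist=2.5):
    """Toy rigid-body check: does any atom of molecule A come
    closer to an atom of molecule B than min_dist angstroms?
    Real protein-docking software does far more than this."""
    # Pairwise distances between every atom in A and every atom in B,
    # computed via broadcasting: shape (len(A), len(B)).
    diffs = coords_a[:, None, :] - coords_b[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return bool((dists < min_dist).any())

# Two tiny "molecules" given as 3D atom coordinates (angstroms).
a = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
b = np.array([[10.0, 0.0, 0.0]])
print(steric_clash(a, b))  # False: the molecules are well separated
```

Deciding whether two floppy, thousand-atom proteins fit together well is this problem scaled up, with flexibility and chemistry added, which is why it typically takes heavy computation and a painful amount of trial and error.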
Yet the presence of life indicates that protein-based engineering can generate a wide enough variety of atomically precise systems that it ought to be possible to use protein designs as a path to something like diamondoid nanotech. The feasibility of other paths seems less clear. This Manifold market shows a 51% chance that a protein-based path will be used, with no other path having more than 13%.
Nanotech Risks
I was fairly eager 20 to 25 years ago to help develop Drexlerian nanotech. Now I feel a fair amount of reluctance to do so.
At least half of that reluctance is due to concerns about how nanotech will affect the risks associated with AI.
Having powerful nanotech around when AI becomes more competent than humans will make it somewhat easier for AIs to take control of the world. I only see a small chance of nanotech making a decisive difference here, but small probabilities of altering which species controls galaxies can be pretty important.
A more likely risk is that the availability of nanotech would influence whether one nation or company could conquer the world if it gets a lead in AI capabilities. I can easily imagine that AI will have politically destabilizing effects. Nanotech seems likely to add fuel to such destabilization.
So it seems safer to postpone the development of nanotech until after the most chaotic aspects of AI development have been resolved.
Awareness
Drexler's response to these concerns is to focus on the risk that world leaders will be caught by surprise at how powerful generative nanotech is. There's also a large chance that AI will develop nanotech soon after AI surpasses human levels. Greater awareness before then of nanotech possibilities will lead to better contingency planning.
I have some fairly large concerns that world leaders will wait until the last moment to react to evidence that AI is becoming powerful, resulting in policies that are as poorly planned as the response to COVID.
But how much of a difference would it make for them to know that generative nanotech is consistent with the laws of physics, and therefore somewhat likely to be available shortly after AI surpasses human-level intelligence? (Manifold suggests that a Drexlerian assembler will be developed about a year after superintelligence.)
If they're in denial about what an AI could do with tactics such as drone swarms and blackmail, why are more powerful technologies likely to be persuasive?
A more basic concern is whether new information about nanotech will cause any reaction at all. Freitas' Nanomedicine has plenty of details about how to design better medical devices. Approximately no skeptics are willing to carefully read enough of the book to evaluate whether those designs are physically possible. Skeptics find it much easier to dismiss the designs as impractical to build or needing too much further research.
Who would we convince?
However, I don't assume that better designs should be directed at people who currently identify as nanotech skeptics. It's more likely that a key shift in opinion will be centered on people who currently aren't paying enough attention to nanotech to have much of any opinion on it.
Back in the dot-com bubble, I had some contact with VCs who had a decent vision of how valuable nanotech could be (and also some lesser VCs who struggled to imagine it being worth a few billion). AFAICT, the best VCs rejected nanotech startups because nanotech was well over 5 years away. It's somewhat plausible that that kind of VC can be persuaded that nanotech now could be profitable with less than 10 years of effort. Those VCs could influence some other world leaders to think about contingency plans related to nanotech.
The downside with convincing VCs is that it requires rather more than providing evidence that AI will eventually build nanotech. It requires at least outlining a path that will get us to commercial nanotech in the early 2030s. That would seem to speed up the availability of nanotech, possibly more than we're prepared to handle.
Weighing the Tradeoffs
I have the skills, and possibly the time, to make a modest impact on Drexler's project.
Writing this post has left me confused about whether I should put significant effort into using MSEP.one. I'm tentatively leaning toward no. I welcome input from readers on this question.