Strongly agree. Fundamentally, as long as models don't have more direct access to the world, there are a variety of failure modes that are inescapable. But solving that creates huge new risks as well! (As discussed in my recent preprint: https://philpapers.org/rec/MANLMH )
The macroscopic biotech that accomplishes what you're positing is addressed in the first part, and in the earlier comment where I note that you're assuming an ASI-level understanding of biology to search an exponential design space for something that isn't guaranteed to be possible. The difficulty isn't unclear; it's understood not to be feasible.
Given the premises, I guess I'm willing to grant that this isn't a silly extrapolation, and absent them it seems like you basically agree with the post?
However, I have a few notes on why I'd reject your premises.
On your first idea, I think high-fidelity biology simulators require so much understanding of biology that they would come after ASI rather than serve as a replacement for it. And even then, you're still trying to find something by searching an exponential design space, which is nontrivial even for an AGI with feasible amounts of "unlimited" compute. Not only that, but the thing you're looking for needs to do a bunch of stuff that probably isn't feasible due to fundamental barriers (not identical to the ones listed there, but closely related to them).
On your second idea, a software-only singularity assumes that there is a giant compute overhang for some specific buildable general AI that doesn't even require specialized hardware. Maybe so, but I'm skeptical; the brain can't be simulated directly via deep NNs, which is what current hardware is optimized for. And if some other hardware architecture using currently feasible levels of compute is devised, there still needs to be a massive build-out of those new chips, which is what would allow "enough compute has been manufactured that nanotech-level things can be developed." But that again assumes that arbitrary nanotech is feasible, which could be true, but as the other link notes, certainly isn't anything like obvious.
How strong a superintelligence are you assuming, and what path did it follow to get there? If it has already taken over mass production of chips to the extent that it can massively build out its own capabilities, we're past the point of industrial explosion. And if not, where did these capabilities (evidently far stronger than even the collective abilities of humanity, given what's being presumed) emerge from?
I will point out that this post is not the explicit argument or discussion of the paper; I'd be happy to discuss the technical oversight issues as well, but here I wanted to make the broader point.
In the paper, we do make these points, but we situate the argument in terms of the difference between oversight and control, which is important for building the legal and standards arguments for oversight. (My hope is that putting better standards and rules in place will reduce the ability of AI developers and deployers to unknowingly and/or irresponsibly claim oversight when it isn't occurring, or may not actually be possible.)