This post reads as though it wants to convince its readers that AGI is near and/or will spell doom, selecting and presenting arguments in a biased way.
Even though many people on the Forum and LW (including myself) believe that AI Safety is very important and isn't given enough attention by important actors, I don't want us to lower our standards for good arguments in favor of more AI Safety.
Some parts of the post that I find lacking:
..."We don’t have any obstacle left in mind that we don’t expect to get overcome in more than 6 months after efforts are invested to
I think it'd be good to cross-post this on the EA Forum.
edit: It's been posted, link here: https://forum.effectivealtruism.org/posts/zLkdQRFBeyyMLKoNj/still-no-strong-evidence-that-llms-increase-bioterrorism