"Either they’re perfectly doable by humans in the present, with no AGI help necessary."
So your argument for why this statement is relevant is that AI isn't adding danger? That seems to me like a really odd standard for "perfectly doable": the actual number of humans who could do those things is not huge, and humans don't usually want to.
Like, either ending the world is easy for humans, in which case AI is dangerous because it will actually want to, or it's hard for humans, in which case AI is dangerous because it will be better at it than we are.
I don't think that works to dismiss that category of risk.
So what should I do with this information? What option is there for me other than "nod along and go on living their lives"?
I don't believe that infinite gambles are a thing. In fact, they seem almost self-evidently to be at best an approximation.
All of these seem like pretty cold tea, as in true but not contrarian.