Less than 1%? You need to go to fairly ridiculous extremes to get that level of certainty for even things that we know a lot about. I think that level of certainty for any such question is beyond any rational argument with current evidence.
There are plenty of arguments that would work, if only you held some particular prior belief to that level. A rock-solid belief in a higher power that would prevent it would suffice - for those who believe in such a thing at a 99%+ level. A similarly strong belief that AGI is actually impossible and not just difficult would also suffice.
There are many good arguments, but not that particular "<1% probability" proof that the question requests. All the good arguments rely on uncertain assumptions, and they don't reach the requisite standard of proof, especially when their assumptions are considered jointly.
So by answering this way you are steelmanning the question (which it desperately needs).
An anonymous academic wrote a review of Joe Carlsmith's 'Is power seeking AI an existential risk?', in which the reviewer assigns a <1/100,000 probability to AI existential risk. The arguments given aren't very good imo, but maybe worth reading.
If we were to respond specifically to the title of the post...
What is the best critique of AI existential risk arguments?
I would cast my vote for the premise that AI risk arguments don't really matter so long as a knowledge explosion feeding back upon itself is generating ever more, ever larger powers, at an ever-accelerating rate.
For example, let's assume for the moment that 1) AI is an existential risk, and 2) we solve that problem somehow so that AI becomes perfectly safe. Why would that matter if civilization is then crushed when we lose control of some other power emerging from the knowledge explosion? Remember, triumphing over existential risk will require us to win every single time, and never lose once.
If it's true that 1) the knowledge explosion is accelerating, and if it's true that 2) human ability is limited, then it follows that at some point we will be overwhelmed by one or more challenges that we can't adapt to in time.
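The "win every single time" point above can be made concrete with a toy compounding-probability model. The per-challenge failure rate and challenge counts below are purely illustrative assumptions, not figures from the original comment; the point is only that even small independent risks compound steadily toward near-certain failure.

```python
# Toy illustration of the "win every single time" argument.
# Hypothetical assumption: each new technological challenge carries an
# independent p_fail chance of a civilization-ending failure.

def survival_probability(p_fail: float, n_challenges: int) -> float:
    """Probability of surviving all n independent challenges."""
    return (1.0 - p_fail) ** n_challenges

# Even a modest 1% per-challenge risk compounds quickly:
for n in (10, 50, 100):
    print(n, round(survival_probability(0.01, n), 3))
# → 10 0.904
# → 50 0.605
# → 100 0.366
```

The independence assumption is of course a simplification; if humanity gets better (or worse) at handling each successive challenge, the per-challenge probability would change over time rather than stay fixed.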
Seventy-five years after Hiroshima we still have no idea what to do about nuclear weapons, nor do we know what to do about AI or genetic engineering. And the threats keep coming, more and more, larger and larger, faster and faster.
If we choose to accept an ever-accelerating knowledge explosion as a given, the best critique of AI existential risk arguments seems to be that they don't really matter. Or, if you prefer, that they are a distraction from what does matter.
Your paradigm, if I understand it correctly, is that the self-sustaining knowledge explosion of modern times is constantly hatching new technological dangers, and that there needs to be some new kind of response - from the whole of civilization? just from the intelligentsia? It's unclear to me whether you think you already have a solution.
You're also saying that focus on AI safety is a mistake, compared with focus on this larger recurring process, of dangerous new technologies emerging thanks to the process of discovery.
There are in fact good arguments...
the claim that there is a <1% probability of AI existential risk
That's a serious constraint. What possible argument that's not literally a demonstration of a working AGI is going to do that to the epistemic state about a question this confusing? Imagining a future where AI is not an existential risk is easy (and there are many good arguments for it being more likely than one would expect, just as there are many good arguments for it being less likely than one would expect). But imagining a present where it's known to not be an existential risk with 99% probability (or 1% probability), despite not having already been built, doesn't work for me.
Maybe there is 0.1% probability (I sorta tried to actually assess the order of magnitude for this number) that in 15 years the world's state of knowledge builds up to a point where that epistemic state becomes thinkable (conditional on actual AGI not having been built). This would most likely require shockingly better alignment theory and expectation that less aligned AGIs can't (as in alignment-by-default) or won't be built first.
The two questions you pose are not equivalent. There are critiques of AI existential risk arguments, and some of them are fairly strong, but I am unaware of any which do a good job of quantifying the odds of AI existential risk. In addition, your second question appears to be asking for a cumulative probability. It's hard to see how you can provide that absent a mechanism for eventually cutting AI existential risk to zero... which seems difficult.
If you could link to an article or other piece of media, that would be ideal. Writing one up here is fine as well. An equivalent question would be "what is the best argument for the claim that there is a <1% probability of AI existential risk?"