The trivial argument can be made after a foom: "I've already taken over, I like sapients, please stop harming yourself with the stress of anticipating doom; if there were going to be AI doom, it would already have happened last year", potentially with evidence of its power, altruism, and ability to prevent doom.
But I presume you are asking about the restrained case, where the AI has not rendered itself inviolable and practically omnipotent?
If the superintelligence can rationally convince the doomer that doom is unlikely, then doom is probably rationally unlikely.
The word "rationally" is doing a lot of heavy lifting in that sentence. Many smart people get convinced by positions that aren't true.
Yes? Obviously?
To clarify the comment for @tjaffee, a superintelligence could do the following:
Imagine a hypothetical conversation between an intelligent, rational, epistemically confident AI doomer (say, Eliezer Yudkowsky) and a superintelligent AI. The goal of the superintelligence is to genuinely convince the doomer that doom is unlikely or impossible, using only persuasion: no rearranging mind-states with nanobots or similar tricks, and no bribes or blackmail to force a concession. The key question is thus: can a superintelligence convince Yudkowsky that AI doom is unlikely?
While this sounds like a purely impractical philosophical question, it seems to me to have profound implications. If the superintelligence can rationally convince the doomer that doom is unlikely, then doom is probably rationally unlikely. If the superintelligence cannot convince the doomer, then this seems like a fundamental limit on the capabilities of a superintelligence, particularly with respect to persuasion, one of the abilities doomers typically point to as a way a superintelligence could bootstrap itself to conquering the universe. This is not meant as a "gotcha" to dunk on AI doomers, merely an interesting thought I had. Criticism is more than welcome.