Epistemic status: very speculative
Content warning: if true this is pretty depressing
This came to me when thinking about Eliezer's note on Twitter that he didn't think superintelligence could do FTL, partially because of Fermi Paradox issues. I think Eliezer made a mistake there; superintelligent AI with light-cone-breaking (as opposed to within-light-cone-of-creation) FTL, if you game it out the whole way, actually mostly solves the Fermi Paradox.
I am, of course, aware that UFAI cannot be the Great Filter in a normal sense; the UFAI itself is a potentially-expanding technological civilisation.
But. If a UFAI is expanding at FTL, then it conquers and optimises the entire universe within a potentially rather short timeframe (even potentially a negative timeframe at long distances, if the only cosmic-censorship limit is closing a loop). That means the future becomes unobservable; no-one exists then (perhaps not even the AI, if it is not conscious or if it optimises its consciousness away after succeeding). Hence, by the anthropic principle, we should expect to be either the first civilisation or extremely close to it (and AIUI, frequency arguments like those in the Grabby Aliens paper suggest that "first in the entire universe" should usually be significantly ahead of its successors relative to time elapsed since the Big Bang).
This is sort of an inverse version of Deadly Probes (which has been basically ruled out in the normal-Great-Filter sense, AIUI, by "if this is true we should be dead" concerns); in this hypothesis, we are fated to release the Deadly Probes that kill everything in the universe, which prevents any observations except our own observations of nothing. It also resurrects the Doomsday Argument, as in this scenario there are never any sentient aliens anywhere or anywhen to drown out the doom signal; indeed, insofar as you believe it, the Doomsday Argument would appear to argue for this scenario being true.
Obvious holes in this:
1) FTL may be impossible, or limited to non-light-cone-breaking versions (e.g. wormholes that have to be towed at STL). Without light-cone-breaking FTL, there are non-first species and non-Fermi-Paradox observations even if UFAI catastrophe is inevitable.
2) The universe might be too large for exponential growth to fill it up. It doesn't seem plausible for self-replication to be faster than exponential in the long run, and if the universe is sufficiently large (like, bigger than 10^10^30 or so?) then it's impossible - even with FTL - to kill everything, and again the scenario doesn't work (a rough back-of-the-envelope sketch of the scale follows this list). I suppose an exception would be if there were some act that literally ends the entire universe immediately (thus killing everything without a need to replicate). Also, an extremely large universe would require an implausibly strong Great Filter for us to actually be the first this late.
3) AI Doom might not happen. If humanity is asserted not to be AI-doomed, then this argument turns on its head, and our existence (to at least the extent that we might not be the first) argues that either light-cone-breaking FTL is impossible or AI doom is a highly unusual thing to happen to civilisations. This is sort of a weird point to mention, since the whole scenario is an Outside View argument that AI Doom is likely, but how seriously to condition on these sorts of arguments is a matter of some dispute.
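As a rough illustration of the scale in hole 2: FTL shortens travel times but not the number of doublings required, so the doubling count is the binding constraint. The doubling times below are purely illustrative assumptions of mine, not part of the argument above; the point is only that covering 10^(10^30) sites takes roughly 3×10^30 doublings.

```python
import math

# Rough scale check for hole 2: how long would exponential self-replication
# take to cover N = 10^(10^30) sites?  FTL shortens travel, not the number
# of doublings, so the doubling count is the binding constraint.
# The doubling times below are illustrative assumptions only.

log10_N = 1e30                                  # N = 10^(10^30) sites
doublings_needed = log10_N * math.log2(10)      # ~3.3e30 doublings

SECONDS_PER_YEAR = 3.156e7
AGE_OF_UNIVERSE_YEARS = 1.38e10                 # ~13.8 billion years

for label, doubling_time_s in [("1 year", SECONDS_PER_YEAR),
                               ("1 second", 1.0),
                               ("1 nanosecond", 1e-9)]:
    years = doublings_needed * doubling_time_s / SECONDS_PER_YEAR
    print(f"doubling every {label:>12}: ~{years:.1e} years "
          f"(~{years / AGE_OF_UNIVERSE_YEARS:.1e} x current age of universe)")
```

On these assumptions, the replication time exceeds the current age of the universe unless the doubling time is below roughly 10^-13 seconds, which is the sense in which a sufficiently large universe can't be emptied by replication alone, with or without FTL.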
My mental model of this class of disasters is different and assumes a much higher potential for discovery of completely novel physics.
I tend to assume that, in terms of the ratio of today's physics knowledge to the physics knowledge of 500 years ago, there is still potential for a comparable jump.
So I tend to think in terms of either warfare with weapons involving short-term reversible changes to fundamental physical constants and/or the Planck-scale structure of space-time, or careless experiments of the same kind, with both cases resulting in total destruction of the local neighborhood.
In this sense, a singleton does indeed have better chances than multipolar scenarios, both in terms of much smaller potential for "warfare" and in terms of having a much, much easier time coordinating the risks of "civilian activities".
However, I am not sure whether the notion of singleton is well-defined; a system can look like a singleton from the outside and behave like a singleton most of the time, but it still needs to have plenty of non-trivial structure inside and is still likely to be a "Society of Mind" (just like most humans look like singular entities from the outside, but have plenty of non-trivial structure inside themselves and are "Societies of Mind").
By comparison, even the most totalitarian states (our imperfect approximations of singletons) have plenty of factional warfare, and powerful factions destroy each other all the time. So far those factions have not used military weapons of mass destruction in those struggles, but this is mostly because those weapons have been relatively unwieldy.
And even without those considerations, experiments in search of new physics are tempting, and balancing risks and rewards of such experiments can easily go wrong even for a "true singleton".