Epistemic status: very speculative
Content warning: if true this is pretty depressing
This came to me when thinking about Eliezer's note on Twitter that he didn't think superintelligence could do FTL, partially because of Fermi Paradox issues. I think Eliezer made a mistake there; superintelligent AI with light-cone-breaking FTL (as opposed to FTL confined within the light cone of its creation), if you game it out the whole way, actually mostly solves the Fermi Paradox.
I am, of course, aware that UFAI cannot be the Great Filter in a normal sense; the UFAI itself is a potentially-expanding technological civilisation.
But. If a UFAI is expanding at FTL, then it conquers and optimises the entire universe within a potentially rather short timeframe (potentially even a negative timeframe at long distances, if the only cosmic-censorship limit is on closing a loop). That means the future becomes unobservable: no-one exists then (perhaps not even the AI, if it is not conscious or if it optimises its own consciousness away after succeeding). Hence, by the anthropic principle, we should expect to be either the first civilisation or extremely close to it (and AIUI, frequency arguments like those in the Grabby Aliens paper suggest that "first in the entire universe" should usually be significantly ahead of its successors, relative to time elapsed since the Big Bang).
This is sort of an inverse version of Deadly Probes (which has been basically ruled out in the normal-Great-Filter sense, AIUI, by "if this is true we should be dead" concerns); we are, in this hypothesis, fated to release Deadly Probes that kill everything in the universe, which prevents any observations except our own observations of nothing. It also resurrects the Doomsday Argument, as in this scenario there are never any sentient aliens anywhere or anywhen to drown out the doom signal; indeed, insofar as you believe it, the Doomsday Argument would appear to argue for this scenario being true.
Obvious holes in this:
1) FTL may be impossible, or limited to non-light-cone-breaking versions (e.g. wormholes that have to be towed into place at STL). Without light-cone-breaking FTL there can still be non-first species, and observers who don't see a Fermi Paradox, even if UFAI catastrophe is inevitable.
2) The universe might be too large for exponential growth to fill it up. It doesn't seem plausible for self-replication to be faster than exponential in the long run, and if the universe is sufficiently large (like, bigger than 10^10^30 or so?) then it's impossible - even with FTL - to kill everything, and again the scenario doesn't work (there's a rough back-of-the-envelope sketch of the scale problem after this list). I suppose an exception would be if there were some act that literally ends the entire universe immediately (thus killing everything without any need to replicate). Also, an extremely large universe would require an implausibly strong Great Filter for us to actually be the first this late.
3) AI Doom might not happen. If humanity is assumed not to be AI-doomed, then this argument turns on its head, and our existence (to at least the extent that we might not be the first) argues that either light-cone-breaking FTL is impossible or AI doom is a highly unusual thing to happen to a civilisation. This is sort of a weird point to mention, since the whole scenario is an Outside View argument that AI Doom is likely, but how seriously to condition on these sorts of arguments is a matter of some dispute.
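The back-of-the-envelope sketch promised in hole 2: the 10^10^30 site count is just the guess above, and the one-doubling-per-year rate is an arbitrary, generous placeholder I'm introducing for illustration. Even pure exponential replication needs log2(N) doublings to cover N sites, which for N = 10^10^30 is around 3×10^30.

```python
import math

# Back-of-the-envelope only: how many doublings would an exponentially
# self-replicating swarm need to cover N = 10^(10^30) sites, and how long
# would that take at one doubling per year (an arbitrary, generous rate)?
log10_sites = 1e30                       # N = 10^(10^30), the guess from hole 2
doublings = log10_sites * math.log2(10)  # log2(N) = log10(N) * log2(10) ~ 3.3e30

print(f"doublings needed: {doublings:.2e}")
print(f"years at one doubling per year: {doublings:.2e}")
# ~3.3e30 years: nowhere near the "rather short timeframe" the scenario needs,
# however fast each individual hop is.
```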
Your scenario does not depend on FTL.
However, its interaction with the Doomsday Argument is more complicated, and potentially weaker (assuming you accept the Doomsday Argument at all). This is because P(we live in a Kardashev ~0.85 civilisation) depends strongly, in this scenario, on the per-civilisation P(Doom before Kardashev 2); if the latter is meaningfully different from 1 (even 0.9999), then the vast majority of observers still live in K2 civilisations, and our being in a Kardashev ~0.85 civilisation is still very unlikely (though less unlikely than it would be in No Doom scenarios, where those K2+ civilisations last X trillion years and spread further).
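A toy illustration of that weighting (the observer counts below are made-up placeholders I'm introducing, not figures from anywhere): even if only 1 in 10,000 civilisations survives to K2, the survivors' observers can easily outnumber everyone else's.

```python
# Toy anthropic calculation with assumed numbers, purely to show why even
# p = 0.9999 leaves most observers in K2 civilisations.
PRE_K2_OBSERVERS = 1e11   # assumed observer-lives lived before a civ hits K2
K2_OBSERVERS     = 1e22   # assumed observer-lives lived by a civ that reaches K2

for p_doom_before_k2 in (1.0, 0.9999, 0.99, 0.5):
    # Every civilisation contributes its pre-K2 observers; only the surviving
    # fraction (1 - p) goes on to contribute K2 observers as well.
    expected_per_civ = PRE_K2_OBSERVERS + (1 - p_doom_before_k2) * K2_OBSERVERS
    frac_pre_k2 = PRE_K2_OBSERVERS / expected_per_civ
    print(f"P(Doom before K2) = {p_doom_before_k2}: "
          f"fraction of observers who are pre-K2 = {frac_pre_k2:.2e}")
# At p = 0.9999 the pre-K2 fraction is ~1e-7: only p = 1 exactly makes
# observers like us typical.
```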
I'm not sure how sane it is for me to be talking about P(P(Doom)), even in this sense (and frankly my entire original argument stinks of Lovecraft, so I'm not sure how sane I am in general), but in my estimation P(P(Doom before Kardashev 2) > 0.9999) < P(FTL is possible). For the former, AI would have to be really easy to invent and co-ordination to not build it would have to be essentially impossible; whether or not a Butlerian Jihad can work for real-life humanity, it seems like it wouldn't take much difference in our risk curves for it to definitely happen, and while we have reached the point where we can build AI before we can build a Dyson Sphere, that doesn't seem like a necessary path. I can buy that P(AI Doom before Kardashev 3) could be extremely high in no-FTL worlds; that would only require that alignment is impossible, since reaching Kardashev 3 at STL takes millennia, and co-ordination among chaotic beings is very hard at interstellar scales in a way it isn't within a star system. But assured doom before K2 seems very weird. And FTL doesn't seem that unlikely to me; time travel is P = ϵ, since we don't see time travellers, but I know of one proposed mechanism (the quantum vacuum misbehaving upon creation of a CTC system) that might ban time travel specifically and thus break the "FTL implies time travel" implication.
It also gets weird when you start talking about the chance that a given observer will observe the Fermi Paradox or not. My intuitions might be failing me, but it seems like a lot, possibly most, of the people in the "P(Doom before K2) < 0.9999, fate of universe is STL paperclip nebulae" world would see aliens, because K2 civilisations can be seen from much further away and can themselves see much further (an Oort Cloud interferometer could detect 2000BC humanity anywhere in the Local Group via the Pyramids and land-use patterns, and detect 2000AD humanity even further out via anomalous night-time illumination).
Note also that among "P(Doom before K2) < 0.9999, fate of universe is STL paperclip nebulae" worlds, there's not much Outside View evidence that P(Human doom before K2) is high as opposed to low; P(random observer is Us) is not substantially affected by whether there are N or N+1 K2 civilisations in the way it is by whether there are 0 or 1 such civilisations (this is what I was talking about with aliens breaking the conventional Doomsday Argument). So this would be substantially more optimistic than my proposal: the "P(Doom before K2) < 0.9999, fate of universe is STL paperclip nebulae" scenario means we get wiped out eventually, but we (and aliens) could still have astronomically positive utility before then, as opposed to being Doomed Right Now (though we could still be Doomed Right Now for Inside View reasons).
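To make the N vs. N+1 point concrete (again with made-up placeholder observer counts): going from zero K2 civilisations to one moves P(random observer is Us) by many orders of magnitude, while going from a thousand to a thousand and one barely moves it at all.

```python
# Toy illustration with assumed numbers: how much does one more K2
# civilisation change P(random observer is us), compared with going from
# zero K2 civilisations to one?
OUR_OBSERVERS = 1e11   # assumed observer-lives in our (pre-K2) civilisation
K2_OBSERVERS  = 1e22   # assumed observer-lives per K2 civilisation

def p_random_observer_is_us(n_k2_civs: int) -> float:
    return OUR_OBSERVERS / (OUR_OBSERVERS + n_k2_civs * K2_OBSERVERS)

for n in (0, 1, 1000, 1001):
    print(f"{n} K2 civilisations: P(random observer is us) = "
          f"{p_random_observer_is_us(n):.3e}")
# 0 -> 1 K2 civilisations collapses the probability by ~11 orders of magnitude;
# 1000 -> 1001 changes it by only ~0.1%, which is why whether we in particular
# are doomed barely moves the Outside View needle once aliens exist.
```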