This is the weakest assumption in your chain of reasoning. The design space for UFAI is far bigger than for FAI.
Irrelevant. The design space of all programs is infinite - do you somehow think that the set of programs humans create is a random sample from the set of all programs? The size of the design space has nothing to do with any realistic probability distribution over that space.
we can't make strong assumptions about what it is or is not motivated to do
Of course we can - because UFAI is defined as superintelligence that doesn't care about humans!
For a certain narrow sense of "care", yes -- but it's a sense narrow enough that it doesn't exclude a motivation to simulate humans, or give us any good grounds for probabilistic reasoning about whether a Friendly intelligence is more likely to simulate us. So narrow, in fact, that it's not actually a very strong assumption, if by strength we mean something like bits of specification.
At some point soon, I'm going to attempt to steelman the position of those who reject the AI risk thesis, to see if it can be made solid. Here, I'm just asking if people can link to the most convincing arguments they've found against AI risk.
EDIT: Thanks for all the contributions! Keep them coming...