Firstly, if you are prepared to accept utilitarian-style "look how big this number is" arguments, then X-risk reduction comes out on top.
The field this points to is how to handle utility uncertainty. Suppose you have several utility functions and you don't yet know which one you want to maximise, but you might get relevant information in the future. You can act to maximise expected utility. The problem is that if there are many candidate utility functions, some of them might come to control your behaviour despite having tiny probability, simply by outputting absurdly huge numbers. This is Pascal's mugging, and various ideas have been proposed to avoid it, including rescaling the utility functions in various ways, or acting according to a weighted vote of the utility functions.
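As a toy illustration (the theory names, credences, and payoffs below are all made up, not anyone's actual numbers), here is a minimal sketch of how naive expected-utility aggregation lets a tiny-probability utility function dominate by outputting huge numbers, and how a credence-weighted vote avoids that:

```python
# Toy illustration of acting under utility uncertainty.
# The theories, credences, and payoffs are all hypothetical.

actions = ["prioritise_humans", "prioritise_insects"]

# Candidate utility functions: utility assigned to each action by each theory.
theories = {
    "humans_matter":  {"prioritise_humans": 10.0, "prioritise_insects": 1.0},
    "insects_matter": {"prioritise_humans": 1.0,  "prioritise_insects": 1e12},
}
credence = {"humans_matter": 0.99, "insects_matter": 0.01}

# Naive expected utility: the 1%-probability theory dominates via sheer magnitude.
def expected_utility(action):
    return sum(credence[t] * theories[t][action] for t in theories)

# Weighted vote: each theory casts a credence-weighted vote for its preferred
# action, so outputting bigger numbers buys no extra influence.
def vote_winner():
    votes = dict.fromkeys(actions, 0.0)
    for t, u in theories.items():
        votes[max(actions, key=u.get)] += credence[t]
    return max(votes, key=votes.get)

print({a: expected_utility(a) for a in actions})  # the insect theory dominates the expectation
print(vote_winner())                              # but "prioritise_humans" wins the vote
```

The point of the vote rule is just that influence is capped at a theory's credence, whatever numbers it outputs.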
There is also a question of how much moral uncertainty to regard ourselves as having. Our definitions of what we do and don't care about exist in our minds. It is a consistent position to decide that you definitely don't care about insects, and that any event that makes future-you care about insects is unwanted brain damage. Moral theories like utilitarianism are at least partly predictive theories. If you came up with a simple (low Kolmogorov complexity) moral theory that reliably predicted human moral judgements, that would be a good theory. However, humans also have logical uncertainty, and we suspect that our utility function is of low "human-perceived complexity". So given a moral theory of low "human-perceived complexity" which agrees with our intuitions on 99% of cases, we may change our judgement on the remaining 1%. (Perform a Bayesian update under utility uncertainty with the belief that our intuitions are usually, but not always, correct.)
So we can argue that utilitarianism usually matches our intuitions, so it is probably the correct moral theory, so we should trust it even in cases like insects where it disagrees. However, you have to draw the line between "care" and "don't care" somewhere, and the version of utilitarianism that draws the line around mammals, or around humans, doesn't seem any more complicated. And it matches our intuitions better. So it's probably correct.
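A minimal sketch of the kind of Bayesian update being described, with entirely made-up numbers and hypothetical theory names: treat each moral theory as a predictor of our case-by-case intuitions, assume those intuitions are usually but not always right, and see which theory ends up favoured.

```python
# Toy Bayesian update over moral theories, treating each theory as a predictor
# of our case-by-case intuitions. All numbers are made up for illustration.

import math

# Even the correct theory only agrees with our (usually, not always, right)
# intuitions on ~95% of cases.
P_AGREE_IF_TRUE = 0.95

# Two equally simple candidate theories, with hypothetical agreement counts
# over 100 test cases.
prior = {"utilitarianism": 0.5, "line_drawn_at_mammals": 0.5}
agreements = {"utilitarianism": 99, "line_drawn_at_mammals": 100}
N = 100

def log_likelihood(k):
    """Log-probability of agreeing on k of N cases if the theory is true."""
    return k * math.log(P_AGREE_IF_TRUE) + (N - k) * math.log(1 - P_AGREE_IF_TRUE)

# Posterior is proportional to prior times likelihood, normalised over the two candidates.
unnorm = {t: prior[t] * math.exp(log_likelihood(agreements[t])) for t in prior}
total = sum(unnorm.values())
posterior = {t: v / total for t, v in unnorm.items()}
print(posterior)  # the theory that matches intuition slightly better is favoured
```

With these toy numbers, the theory that agrees with intuition on all 100 cases ends up with roughly 95% of the posterior, which is the shape of the argument above: matching intuitions better, at equal complexity, is evidence of being the correct theory.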
If you don't penalise unlikely possible utilities for producing huge numbers, you get utility functions in which you care about all quantum wave states dominating your actions. (Sure, you assign a tiny probability to that, but there are a lot of quantum wave states.) If you penalise strongly, or use voting utilities or bounded utilities, then you get behaviour that doesn't care about insects. If you go up a meta level and say you don't know how much to penalise, standard treatment of that uncertainty gets you back to no penalty and quantum-wave-state-dominated actions.
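A minimal sketch of that meta-level collapse, using made-up utilities and credences and a hypothetical family of power-law penalties: as long as the ordinary expectation over penalty schemes puts any weight on "no penalty", the huge numbers come straight back.

```python
# Toy sketch of going meta on the penalty: uncertainty over how strongly to
# penalise huge utilities. All utilities, credences, and penalty schemes here
# are made up for illustration.

# One fringe theory claims an astronomically large payoff; a mundane theory
# claims a modest one. We give the fringe theory tiny credence.
fringe_utility = 1e30
mundane_utility = 10.0
credence_fringe = 1e-6

# A hypothetical family of penalty schemes: raise utilities to the power p
# before aggregating (p = 1.0 means "no penalty"; smaller p penalises harder).
def penalised(u, p):
    return u ** p

# Credences over the penalty schemes themselves (the meta level).
penalty_credence = {1.0: 0.10, 0.5: 0.45, 0.1: 0.45}

# Standard treatment of that meta-level uncertainty: take the expectation over
# penalty schemes as well.
def meta_expected_value():
    total = 0.0
    for p, w in penalty_credence.items():
        ev = (credence_fringe * penalised(fringe_utility, p)
              + (1 - credence_fringe) * penalised(mundane_utility, p))
        total += w * ev
    return total

# Even with only 10% weight on the no-penalty scheme, its term (~1e23) swamps
# the strongly penalised terms, so the mixture behaves like no penalty at all.
print(f"{meta_expected_value():.3e}")
```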
I have a sufficiently large amount of uncertainty to say "In practice, it usually all adds up to normality. Don't go acting on weird conclusions you don't understand that are probably erroneous."
At the same time, it’s more than plausible that the extinction of humans would be very bad for insects, because their habitats would grow significantly without humans.
But anyway, I agree that even if insect suffering is really massive, it doesn’t swamp x-risk considerations. (Personally, I don’t think insect suffering matters much at all, though that’s really more of an instinct about “torture vs. dust specks” in general, and it does confuse me as an issue.) I’m just wondering how important it is in the scale of things.
Thanks for the response though! I appreciate it.