The only counterarguments I can think of would be:
The claim that the likelihood of s-risks is close to that of x-risks seems poorly argued to me. In particular, conflict seems to be the most plausible scenario (and one that deserves a high prior, since much of the suffering we observe today is caused by conflict), but it becomes less and less likely once you factor in superintelligence, as multipolar scenarios seem to be either very short-lived or unlikely to happen at all.
We should be wary of attributing anthropomorphic traits to hypothetical future artificial agents. Pain in biological organisms may well have evolved as a proxy for negative utility, and might not be necessary in "pure" agent intelligences that can compute their utility functions directly. It's not obvious to me that implementing suffering, in the sense humans understand it, would be cheaper or more efficient for a superintelligence than simply creating utility-maximizers whenever it needs to produce a large number of sub-agents.
There is high overlap between approaches to mitigating x-risks and approaches to mitigating s-risks. If our best chance of mitigating future suffering is to bring about a friendly artificial intelligence explosion, then the approaches we are currently taking should still be the correct ones.
More speculatively: if we focus heavily on s-risks, does this open us up to utility-monster problems? Could I extort people by creating a simulation of trillions of agents and then threatening to minimize their utility? (If we value only the sum of utility, and not the complexity of the agents experiencing it, such a simulation should be relatively cheap to run; see the sketch below.)
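To make the cheapness claim concrete, here is a rough sketch under the assumption that value aggregates as a simple sum over agents; the symbols $N$, $u_{\min}$, and $c$ are illustrative placeholders, not part of any established formalism:

$$V_{\text{threat}} = \sum_{i=1}^{N} u_i \approx N \, u_{\min}, \qquad \text{Cost} \approx N \, c,$$

where $u_{\min} < 0$ is the per-agent disutility the extortionist threatens to impose and $c$ is the computational cost of running one simulated agent. If the aggregation rule ignores agent complexity, then $c$ can be driven down toward the cost of the simplest system that still counts as an agent while $u_{\min}$ stays fixed, so the threatened disvalue per unit of the extortionist's resources, $|u_{\min}|/c$, grows without bound. Aggregation rules that weight agents by complexity (or otherwise discount very simple minds) would block this particular argument.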
I think the most general response to your first three points would look something like this: Any superintelligence that achieves human values will be adjacent in design space to many superintelligences that cause massive suffering, so it's quite likely that the wrong superintelligence will win, due to human error, malice, or arms races.
As to your last point, it looks more like a research problem than a counterargument, and I'd be very interested in any progress on that front :-)