Summary: the problem with Pascal's Mugging arguments is that, intuitively, some probabilities are just too small to care about. There might be a principled reason for ignoring some probabilities, namely that they violate an implicit assumption behind expected utility theory. This suggests a possible approach for formally defining a "probability small enough to ignore", though there's still a bit of arbitrariness in it.
What do you mean by "you still get a discontinuity at the bound"? (I am wondering whether by "bounded utility" you mean something like "unbounded utility followed by clipping at some fixed bounds", which would certainly introduce weird discontinuities but isn't at all what I have in mind when I imagine an agent with bounded utilities.)
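For concreteness, here's one toy way to draw the distinction (my notation, purely illustrative: $v$ an unbounded "pre-utility", $B$ the bound). Clipping would be

$$u_{\text{clip}}(x) = \min(v(x),\, B),$$

which is continuous but has a kink at the bound: the marginal utility drops abruptly to zero there. What I have in mind is instead a smooth rescaling such as

$$u(x) = B \tanh\!\big(v(x)/B\big),$$

which stays strictly inside $(-B, B)$ and is smooth everywhere, so there is no distinguished point at which anything discontinuous could happen.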
I agree that doubting the mugger is a good idea, and in particular I think it's entirely reasonable to suppose that the probability that anyone can affect your utility by an amount U must decrease at least as fast as 1/U for large U; that is essentially what I proposed on LW back in 2007, except that there I was assuming a Solomonoff-like probability assignment.
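To spell out what that condition buys you (my arithmetic; $c$ is just an illustrative constant): if $\Pr[\text{someone can deliver utility } U] \le c/U$, then the expected-utility contribution of any such offer is at most

$$U \cdot \Pr[\text{deliver } U] \;\le\; U \cdot \frac{c}{U} \;=\; c,$$

so the mugger cannot make the offer more attractive simply by naming a larger $U$; any slower decay would let that product grow without bound.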
Now, of course, an agent's probability and utility assignments are whatever they are. Is there some reason, other than wanting to avoid Pascal's mugging, why that condition should hold? Well, if it doesn't hold then your expected utility diverges, which seems fairly bad. Though I seem to recall an argument, from Stuart Armstrong or someone, to the effect that if your utilities aren't bounded then your expected utility in some situations pretty much has to diverge anyway.
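The basic mechanism of the divergence is easy to exhibit (a St. Petersburg-style toy case of mine, not a reconstruction of that argument): suppose the condition fails because there are hypotheses $H_n$ with

$$\Pr[H_n] = 2^{-n}, \qquad u(H_n) = 3^n.$$

Then

$$\mathbb{E}[u] \;\ge\; \sum_{n=1}^{\infty} 2^{-n} \cdot 3^n \;=\; \sum_{n=1}^{\infty} (3/2)^n \;=\; \infty.$$

Even the borderline case $u(H_n) = 2^n$ contributes a constant per term, so for the sum to converge the probabilities actually have to fall off strictly faster than 1/U.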
(We can't hope for a much stronger reason, I think. In particular, your utilities can be just about anything, so there's clearly no outright impossibility or inconsistency about having utilities that "increase too fast" relative to your probability assignments.)
What do you have in mind?