Summary: the problem with Pascal's Mugging arguments is that, intuitively, some probabilities are just too small to care about. There might be a principled reason for ignoring some probabilities, namely that they violate an implicit assumption behind expected utility theory. This suggests a possible approach for formally defining a "probability small enough to ignore", though there's still a bit of arbitrariness in it.
Not really avoiding it -- in the context of a Pascal's Mugging, a bound on your utility is basically a bound on what the Mugger can offer you. For any probability you assign to the Mugger's promise, there is some non-zero amount you would be willing to pay, and that amount is a function of your bound (and of the probability, of course).
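To make this concrete, here's a toy sketch (my own numbers and functions, not anything from the original argument): with a utility function bounded by B, the expected gain from any promise believed with probability p is at most p * B, so the payment worth making is capped no matter how extravagant the promise gets.

```python
def bounded_utility(n_people_saved, bound=1000.0):
    """Toy bounded utility: strictly increasing, approaches `bound` as n grows."""
    return bound * n_people_saved / (n_people_saved + 1.0)

def max_acceptable_payment(p, promised_people, bound=1000.0,
                           utility_per_dollar=1.0):
    """Largest payment (in utility-equivalent dollars) worth making
    for a promise believed with probability p."""
    expected_gain = p * bounded_utility(promised_people, bound)
    return expected_gain / utility_per_dollar

# However many people the Mugger promises, the cap p * bound binds:
for promised in (10**6, 10**30, 10**100):
    print(max_acceptable_payment(1e-9, promised))
```

Note that the cap depends on both the bound and the probability, matching the claim above: the Mugger can always extract *something*, just not an unbounded amount.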
However, a utility function that asymptotically approaches a bound is likely to have its own set of problems. Here is a scenario after five seconds of thinking:
That vexatious chap Omega approaches you (again!) and this time instead of boxes offers you two buttons, let's say one of them is teal-coloured and the other is cyan-coloured. He says that if you press the teal button, 1,000,001 people will be cured of terminal cancer. But if you press the cyan button, 1,000,000 people will be cured of terminal cancer plus he'll give you a dollar. You consult your utility function, happily press the cyan button and walk away richer by a dollar. Did something go wrong?
Yes, something went wrong in your analysis.
I suggested mapping an unbounded utility function onto a finite interval via a strictly increasing function. This preserves the order of the preferences in the unbounded utility function.
In my "unbounded" function, I prefer saving 1,000,001 people to saving 1,000,000 people and getting a dollar. The bounded function preserves that preference, so I press the teal button.
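A quick sketch of that point (toy numbers and my own choice of squashing function): running an unbounded utility through a strictly increasing map onto a finite interval leaves every pairwise preference intact, so the button choice doesn't change.

```python
import math

def unbounded_utility(people_saved, dollars):
    # Toy unbounded utility: a life saved vastly outweighs a dollar.
    return people_saved + 1e-9 * dollars

def bounded_utility(people_saved, dollars):
    # The same utility squashed onto the finite interval (-pi/2, pi/2).
    # atan is strictly increasing, so the preference order is preserved.
    return math.atan(unbounded_utility(people_saved, dollars))

teal = (1_000_001, 0)   # cure 1,000,001 people
cyan = (1_000_000, 1)   # cure 1,000,000 people, plus a dollar

assert unbounded_utility(*teal) > unbounded_utility(*cyan)
assert bounded_utility(*teal) > bounded_utility(*cyan)
print("teal preferred under both utilities")
```

The choice of atan is arbitrary; any strictly increasing map with a finite range would do, which is exactly why Omega's buttons don't trip up the bounded version of this preference ordering.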