As I see it, the key issue is this: the more exceptional the promise, the lower the probability you must assign to it.
On the standard LessWrong view, lowering the probability based on the exceptionality of the promise means penalizing it by the Kolmogorov complexity of the promise.
If you lower it only by that much, you won't lower the probability enough to defeat the mugging. If you can justify lowering it more than that, then of course you can defeat the mugging.
And if you lower it only by the Kolmogorov complexity of the promise, your decision system simply breaks down: the expectation over arbitrary integer rewards, with probabilities computed by Solomonoff induction, is undefined. That's the reason why AIXI uses bounded rewards.
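Why the expectation is undefined can be seen with a toy model. Suppose (purely as an illustrative assumption, not anything from the original argument) that a payoff of 3**n utils has a description of about n bits, so a complexity-penalized prior assigns it probability about 2**-n. Then each outcome contributes (3/2)**n to the expected utility, and the partial sums grow without bound:

```python
# Toy model of the divergence: complexity-penalized prior vs. fast-growing payoffs.
# Illustrative assumption: a payoff of 3**n utils takes ~n bits to describe,
# so a Solomonoff-style prior assigns it probability ~2**-n.
def partial_expected_utility(num_terms):
    """Partial sum of p(n) * utility(n) over the first num_terms outcomes."""
    return sum((2.0 ** -n) * (3.0 ** n) for n in range(1, num_terms + 1))

for k in (10, 20, 30):
    print(k, partial_expected_utility(k))
# Each term equals (3/2)**n, so the partial sums keep growing:
# the full expectation diverges, i.e. it is undefined.
```

This is essentially the St. Petersburg structure: the payoff grows faster than the complexity penalty can shrink the probability.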
Summary: the problem with Pascal's Mugging arguments is that, intuitively, some probabilities are just too small to care about. There might be a principled reason for ignoring some probabilities, namely that they violate an implicit assumption behind expected utility theory. This suggests a possible approach for formally defining a "probability small enough to ignore", though there's still a bit of arbitrariness in it.
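One crude way the "probability small enough to ignore" idea could be formalized is a hard cutoff in the expected-utility calculation. This sketch is a hypothetical illustration of that shape, not the actual proposal; the function name, the example numbers, and in particular the threshold value are assumptions, and choosing the threshold is exactly the residual arbitrariness noted above:

```python
def truncated_expected_utility(outcomes, threshold):
    """Expected utility that discards outcomes below a probability threshold.

    outcomes: iterable of (probability, utility) pairs.
    threshold: hypothetical cutoff; outcomes with p < threshold are ignored.
    """
    return sum(p * u for p, u in outcomes if p >= threshold)

ordinary = (0.5, 10.0)       # a mundane bet
mugging = (1e-100, 1e110)    # the mugger's offer: tiny probability, huge payoff

# With a cutoff, the mugging term is dropped and ordinary bets dominate.
print(truncated_expected_utility([ordinary, mugging], threshold=1e-10))  # 5.0
# With no cutoff, the astronomical payoff dominates the calculation.
print(truncated_expected_utility([ordinary, mugging], threshold=0.0))
```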