Summary: the problem with Pascal's Mugging arguments is that, intuitively, some probabilities are just too small to care about. There might be a principled reason for ignoring them, namely that taking such probabilities at face value violates an implicit assumption behind expected utility theory. This suggests a possible approach for formally defining a "probability small enough to ignore", though there's still a bit of arbitrariness in it.
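To make the idea concrete, here is a minimal sketch (not the post's actual proposal) of what "ignoring a probability" could mean operationally: an expected-utility calculation that simply drops outcomes below some cutoff. The names `EPSILON` and `truncated_expected_utility`, and the cutoff value itself, are illustrative assumptions; picking the threshold in a principled way is exactly the residual arbitrariness noted above.

```python
# Sketch: expected utility with a probability cutoff. Outcomes whose
# probability falls below EPSILON are dropped from the sum entirely.

EPSILON = 1e-10  # hypothetical cutoff; how to choose it is the open problem

def truncated_expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes if p > EPSILON)

# A Pascal's Mugging offer: an astronomically large payoff at a
# probability far below the cutoff contributes nothing...
mugging = [(1e-30, 1e20), (1 - 1e-30, 0)]
print(truncated_expected_utility(mugging))  # 0.0 -- the mugger is ignored

# ...while an ordinary gamble is evaluated as usual.
ordinary = [(0.5, 10), (0.5, -1)]
print(truncated_expected_utility(ordinary))  # 4.5
```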
That's fine. You can just follow your intuition, and that usually won't lead you too far wrong. Usually. However, the issue here is programming an AI that doesn't share our intuitions. We need to actually formalize our intuitions to get it to behave as we would.
What criterion do you use to rule out solutions?