Summary: the problem with Pascal's Mugging arguments is that, intuitively, some probabilities are just too small to care about. There might be a principled reason for ignoring some probabilities, namely that they violate an implicit assumption behind expected utility theory. This suggests a possible approach for formally defining a "probability small enough to ignore", though there's still a bit of arbitrariness in it.
How does that work? VNM preferences are basically an ordering/ranking over lotteries. What kind of VNM preferences would be disallowed under a bounded utility function?
Are you saying that you can/should set the bounds narrowly? If so, you lose the ability to react correctly to rare events -- and black swans are VERY influential.
That only holds in the deterministic case. Once you have uncertainty, it no longer applies: expected utility is invariant only under positive affine transformations, not under arbitrary monotone transformations.
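A quick sketch of that point, using made-up lotteries (the specific numbers are just for illustration): an affine rescaling of the utility function preserves the expected-utility ranking of two lotteries, but a merely monotone transform like a square root can flip it.

```python
import math

# Lotteries as lists of (probability, utility) pairs. Hypothetical numbers.
lottery_a = [(0.5, 0.0), (0.5, 10.0)]   # risky lottery
lottery_b = [(1.0, 4.0)]                # sure thing

def expected(lottery, f=lambda u: u):
    """Expected value of f(utility) under the lottery."""
    return sum(p * f(u) for p, u in lottery)

# Base ranking: A is preferred (EU 5.0 vs 4.0).
assert expected(lottery_a) > expected(lottery_b)

# A positive affine transform v = 2u + 3 preserves the ranking (13 vs 11).
affine = lambda u: 2 * u + 3
assert expected(lottery_a, affine) > expected(lottery_b, affine)

# A monotone but non-affine transform v = sqrt(u) flips it (~1.58 vs 2.0),
# even though it preserves the ordering of individual outcomes.
root = lambda u: math.sqrt(u)
assert expected(lottery_a, root) < expected(lottery_b, root)
```

So under certainty any monotone relabeling of outcomes represents the same preferences, but under uncertainty the curvature of the utility function carries real information about risk attitudes.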
Any risk-neutral (or risk-seeking) preference in any quantity: a utility function that is linear (or convex) in some quantity over its whole range is necessarily unbounded, so boundedness rules those preferences out.
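One way to see this concretely, with a made-up bounded utility function over money (the functional form and scale are assumptions for the sake of the example): a risk-neutral agent is indifferent between a certain $x and a fair 50/50 gamble on $0 or $2x at every stake, but any bounded utility like u(x) = 1 - exp(-x/s) strictly prefers the certain amount at every positive stake.

```python
import math

# Hypothetical bounded utility over money, saturating at 1 as x grows.
SCALE = 100.0
def u(x):
    return 1 - math.exp(-x / SCALE)

def prefers_gamble(x):
    """Is a 50/50 gamble between $0 and $2x preferred to $x for certain?"""
    return 0.5 * u(0) + 0.5 * u(2 * x) > u(x)

# A risk-neutral agent would be exactly indifferent at every stake
# (expected money is x either way). Under the bounded u, the certain
# amount strictly wins at every stake -- the boundedness forces
# risk aversion and blocks risk-neutral preferences in money:
assert not prefers_gamble(10)
assert not prefers_gamble(1000)
```

The same saturation is what defuses Pascal's Mugging for such an agent: no matter how large the promised payoff, its utility cannot exceed the bound, so a tiny probability of it contributes almost nothing to expected utility.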