I don't think it's quite true that "the justification for expected utility is that, as the number of bets you take approaches infinity, it becomes the optimal strategy." Kaj did say something similar, but that seems to me a problem with his approach.
Basically, expected utility is supposed to give a mathematical formalization of people's preferences. But consider this fact: in itself, the statement "I like vanilla ice cream twice as much as chocolate" has no particular meaning. It makes sense to say I like it more than chocolate: that means that if I am given a choice between vanilla and chocolate, I will choose vanilla. But what on earth does it mean to say that I like it "twice" as much as chocolate? In itself, nothing. We have to define this in order to construct a mathematical analysis of our preferences.
In practice we make this definition by saying that I like vanilla so much that I am indifferent between having chocolate for certain and having a 50% chance of vanilla and a 50% chance of nothing.
Perhaps I justify this by saying that it will get me a certain amount of vanilla over my life. But perhaps I don't: the definition does not justify the preference, it simply says what the preference means. So in order to say I like vanilla twice as much, I have to be indifferent between the 50% bet and the certain chocolate, whatever the justification for this might or might not be. If my preference changes when the number of cases goes down, then it is not mathematically consistent to say that I like vanilla twice as much as chocolate, unless we change the definition of "like it twice as much."
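To make the definition above concrete, here is a minimal sketch in Python. The numbers are purely illustrative: utilities for "nothing" and "chocolate" are fixed by convention at 0 and 1, and "twice as much" is then *defined* by the indifference condition, not measured.

```python
# Utilities fixed by convention: nothing = 0, chocolate = 1.
u = {"nothing": 0.0, "chocolate": 1.0}

def expected_utility(lottery):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

# "I like vanilla twice as much as chocolate" is defined as indifference
# between certain chocolate and a 50/50 lottery over vanilla and nothing.
# Solving 0.5 * u(vanilla) = u(chocolate) gives u(vanilla) = 2.
u["vanilla"] = u["chocolate"] / 0.5

lottery = {"vanilla": 0.5, "nothing": 0.5}
assert expected_utility(lottery) == u["chocolate"]  # indifference holds
print(u["vanilla"])  # → 2.0
```

Note that the direction of explanation runs from the indifference to the number: the 2.0 is an output of the definition, not an independent fact about how much I enjoy vanilla.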
Basically, I think you are mixing up things like "lives," which can more or less be quantified in themselves, with people's preferences, which have a quantity only if we define one.
It may be possible for Kaj to come up with a new definition of the amount of someone's preference, but I suspect it will result in a situation basically the same as keeping our current definition while admitting that people have only a limited amount of preference for things. In other words, they might prefer saving 100,000 lives to saving 10,000 lives, but they certainly do not prefer it 10 times as much, meaning they will not always accept a 10% chance of saving 100,000 lives over a 100% chance of saving 10,000.
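The point can be illustrated with a short sketch. The bounded utility function and the constant K below are hypothetical, chosen only to show the shape of the situation: expected *lives* are equal in the two options, but a limited (bounded) preference makes the certain option come out ahead.

```python
# Hypothetical bounded utility over lives saved: u(n) = n / (n + K).
# K is an arbitrary illustrative constant; u approaches 1 as n grows,
# modeling a "limited amount of preference."
K = 50_000

def u(lives):
    return lives / (lives + K)

# Counting lives linearly, the two options are exactly equal:
assert 0.10 * 100_000 == 1.00 * 10_000

# But under the bounded utility, the certain 10,000 is preferred:
gamble = 0.10 * u(100_000)   # 10% chance of saving 100,000
sure = 1.00 * u(10_000)      # certainty of saving 10,000
assert sure > gamble
```

So "prefers saving 100,000 lives, but not 10 times as much" is perfectly consistent: it just means the agent's utility is not linear in lives saved.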
But what on earth does it mean to say that I like it "twice" as much as chocolate?
Obviously it means you would be willing to trade two units of chocolate ice cream for one unit of vanilla. And over the course of your life, you would prefer to have more vanilla ice cream than chocolate ice cream. Perhaps before you die you will add up all the ice creams you've ever eaten, and you would prefer that number to be higher rather than lower.
Nowhere in the above description did I talk about probability. And the utility function is already completely ...
Summary: the problem with Pascal's Mugging arguments is that, intuitively, some probabilities are just too small to care about. There might be a principled reason for ignoring some probabilities, namely that they violate an implicit assumption behind expected utility theory. This suggests a possible approach to formally defining a "probability small enough to ignore," though there is still a bit of arbitrariness in it.