Summary: the problem with Pascal's Mugging arguments is that, intuitively, some probabilities are just too small to care about. There might be a principled reason for ignoring some probabilities, namely that they violate an implicit assumption behind expected utility theory. This suggests a possible approach for formally defining a "probability small enough to ignore", though there's still a bit of arbitrariness in it.
If utility is straightforwardly additive, yes. But perhaps it isn't. Imagine two possible worlds. In one, there are a billion copies of our planet and its population, all somehow leading exactly the same lives. In another, there are a billion planets like ours, with different people on them. Now someone proposes to blow up one of the planets. I find that I feel less awful about this in the first case than the second (though of course either is awful) because what's being lost from the universe is something of which we have a billion copies anyway. If we stipulate that the destruction of the planet is instantaneous and painless, and that the people really are living exactly identical lives on each planet, then actually I'm not sure I care very much that one planet is gone. (But my feelings about this fluctuate.)
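To make the non-additive intuition a bit more concrete, here is one possible way to formalize it. The discount function f below is something I'm supplying purely for illustration, not anything argued for above:

```latex
% Straightforwardly additive utility: every life counts once, so losing one
% of a billion identical planets costs exactly as much as losing one of a
% billion different ones.
\[ U_{\mathrm{add}}(\text{world}) = \sum_{i=1}^{N} u(\text{life}_i) \]

% A redundancy-discounting alternative (illustrative only): sum over
% *distinct* lives, with the multiplicity m_l of each distinct life l
% entering only through some slowly growing f, e.g. f(m) = 1 or f(m) = \log(1+m).
\[ U_{\mathrm{dist}}(\text{world}) = \sum_{l \,\in\, \text{distinct lives}} f(m_l)\, u(l) \]

% With f(m) = 1, painlessly removing an exact duplicate (while at least one
% copy survives) leaves U_dist unchanged, matching the reaction described
% above; removing one of the billion *different* planets still costs its
% full share.
```

I'm not endorsing either form; the point is just that the copy intuition corresponds to a coherent utility function that isn't a straight sum over individuals.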
A world with 3^^^3 inhabitants whose complete description takes (say) no more than a billion bits seems a little like the first of those hypothetical worlds.
I'm not very sure about this. For instance, perhaps the description would take the form: "Seed a good random number generator as follows. [...] Now use it to generate 3^^^3 person-like agents in a deterministic universe with such-and-such laws. Now run it for 20 years." Maybe you can get 3^^^3 genuinely non-redundant lives that way. But 3^^^3 is a very large number, and I'm not even sure there's such a thing as 3^^^3 genuinely non-redundant lives, even in principle.
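Here is a back-of-envelope counting sketch of why the "even in principle" worry bites; the bound B on the number of bits needed to specify a single life is an assumption I'm introducing, not part of the scenario:

```latex
% Counting sketch (B, the bits needed to pin down one life, is an assumed
% parameter). 3^^^3 is written with Knuth arrows as 3\uparrow\uparrow\uparrow 3.

% If every life is fully specified by at most B bits, there are at most
% 2^B pairwise-distinct lives:
\[ N_{\mathrm{distinct}} \le 2^{B} \]

% For that to reach 3^^^3 we would need
\[ 2^{B} \ge 3\uparrow\uparrow\uparrow 3
        = 3\uparrow\uparrow(3\uparrow\uparrow 3)
        = 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987 \]

% Taking log_2 peels off only the top layer of the tower:
\[ B \ge \log_{2}\!\left(3\uparrow\uparrow\uparrow 3\right)
      = (\log_{2} 3)\cdot\left(3\uparrow\uparrow 7{,}625{,}597{,}484{,}986\right) \]

% So B itself would have to be a power tower of 3s roughly 7.6 trillion
% levels high. Even an absurdly generous B, say 10^100 bits per life, only
% gets you 2^(10^100) < 3\uparrow\uparrow 5, nowhere near 3^^^3.
```

In other words, for 3^^^3 lives to be pairwise distinct, the description of a single life would already have to be unimaginably long, which is roughly what the doubt above amounts to.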
Well, what if instead of killing them, he tortured them for an hour? Death might not matter in a Big World, but total suffering still does.