Maybe the origin of the paradox is that we are extending the principle of maximizing expected return beyond its domain of applicability. Unlike Bayes' formula, which is an unassailable theorem, the principle of maximizing expected return is perhaps just a model of rational desire. As such, it could be wrong. When dealing with reasonably high probabilities, the model seems intuitively right. With small probabilities it seems to be just an abstraction, and there is not much intuition to compare it to. When considering a game with positive expected return that ...
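To make the small-probability worry concrete, here is a toy comparison (my own stand-in numbers and Python sketch, not anything from the original post; 3^^^^3 is far too large to write down, so 10**100 plays its role):

    from fractions import Fraction

    # "Maximize expected return" treats a huge-payoff / tiny-probability gamble
    # on exactly the same footing as an ordinary one.
    def expected_return(lottery):
        """lottery: list of (probability, payoff) pairs whose probabilities sum to 1."""
        return sum(Fraction(p) * u for p, u in lottery)

    HUGE = 10**100  # illustrative stand-in for an astronomically large payoff

    mundane   = [(Fraction(1, 2), 10), (Fraction(1, 2), 0)]
    pascalian = [(Fraction(1, HUGE), 2 * HUGE), (1 - Fraction(1, HUGE), 0)]

    print(expected_return(mundane))    # 5
    print(expected_return(pascalian))  # 2 -- nothing in the formalism marks this
                                       # expectation as one intuition cannot grip

The rule ranks both gambles on a single scale, and that is exactly the step that feels unsupported once the probabilities fall below anything intuition can check.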
G,
I was essentially agreeing with you that killing 3^^^^^3 puppies may not be ethically distinct from killing 3^^^^3 puppies. I would call this scope insensitivity. My suggestion was that scope insensitivity is not always unjustified.
Robin's anthropic argument seems pretty compelling in this example, now that I understand it. It seems a little less clear if the Matrix-claimant tried to mug you with a threat that did not involve many minds. For example, maybe he could claim that there exists some giant mind, the killing of which would be as ethically significant as the killing of 3^^^^3 individual human minds? Maybe in that case you would anthropically expect, with overwhelmingly high probability, to be a figment inside the giant mind.
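Spelling out the anthropic discount as I understand it (again my own rough sketch with stand-in numbers, not Robin's wording):

    from fractions import Fraction

    # Conditional on the mugger's many-minds scenario being real, you are almost
    # certainly one of the N figments rather than the one person whose choice
    # matters, so the factor of N in the harm is cancelled by a 1/N anthropic
    # factor in the probability that your decision affects it.
    def expected_harm_you_can_affect(p_scenario, n_minds):
        p_you_are_the_decider = Fraction(1, n_minds)  # anthropic discount
        minds_at_stake = n_minds
        return Fraction(p_scenario) * p_you_are_the_decider * minds_at_stake

    for n in (10**3, 10**30, 10**100):  # stand-ins for 3^^^^3
        print(expected_harm_you_can_affect(Fraction(1, 10**6), n))  # 1/1000000 every time

The product is independent of the number of minds threatened, which is why the many-minds mugging gets neutralized. A single giant mind supplies no such 1/N factor on its face, which is why that case only seems to go through if, as above, you would expect with overwhelming probability to be a figment inside that mind.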