A lot's been said about Pascal's Mugging, counter-muggings, adding extra arrows to the hyper-power stack, and so forth; but if anyone has described a reaction like my own, I've yet to read it. So, in case it might spur some useful discussion, I'll try explaining it.
Over the years, I've worked out a rough rule of thumb for finding useful answers to most everyday ethical quandaries, one which seems to do at least as well as any other I've seen: I call it the "I'm a selfish bastard" rule, though my present formulation continues with the clauses, "but I'm a smart selfish bastard, interested in my long-term self-interest." This seems to give enough guidance to cover anything from "should I steal this or not?" to "is it worth respecting other people's rights in order to maximize the odds that my rights will be respected?" to "exactly whose rights should I respect, anyway?".

From that last question, I ended up with a 'Trader's Definition' of personhood: if some other entity can make a choice about whether or not to make an exchange with me, a banana for a backrub, playtime for programming, or anything of the sort, then it's generally in my own self-interest to treat them as if they were a person, whether or not they match any other criteria for personhood.
Which brings us to Pascal's Mugging itself: "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."
To put it bluntly... why should I care? Even if the Mugger's claim is accurate, the entities he threatens to simulate and calls 'people' don't seem as if they will ever have any opportunity to interact with my portion of the Matrix; they will never be able to offer me any benefit or do me any harm. How would it benefit me to treat such entities as if they not only had a right to life, but as if I also had an obligation to try to defend that right?
For an alternate approach: Most ethical systems have been built on an unstated assumption about the number of beings a person could possibly interact with in a lifetime - at the outside, someone who lived 120 years and met a new person every second would meet fewer than 4 billion individuals. It's only in the past few centuries that hard, physical experience has given us enough insight into some of the more basic foundations of economics for humanity to develop enough competing theories of ethics for some to start being winnowed out; and we're still a long way from a broad consensus on an ethical theory that can deal with the existence of a mere 10 billion individuals. What are the odds that we possess enough information to have any inkling of the assumptions required to deal with 3^^^3 individuals existing? What knowledge would be required so that, when the answer is figured out, we could actually tell that that was it?
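To spell out the arithmetic behind that 4-billion figure (my own back-of-the-envelope check, not part of the original argument):

$$120~\text{years} \times 365.25~\tfrac{\text{days}}{\text{year}} \times 86{,}400~\tfrac{\text{s}}{\text{day}} \approx 3.79 \times 10^{9}~\text{seconds},$$

so one new person per second for 120 years tops out just under 4 billion acquaintances, while 3^^^3 exceeds that by more orders of magnitude than can comfortably be written down.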
For another alternate approach: Assuming that the Mugger is telling the truth... is what he threatens to do actually a bad thing? He doesn't say anything about the nature of the lives the people he simulates would lead; one approach he might take could be to simulate a large number of copies of our universe, each of which eventually peters out in heat-death; would the potential inhabitants of such a simulated universe really object to being brought into existence in the first place?
For yet another alternate approach: "You have outside-the-Matrix powers capable of simulating 3^^^^3 people? Whoah, that implies so much about the nature of reality that this particular bet is nearly meaningless. Which answer could I give that would induce you to tell me more about the true substrate of existence?"
Do any of these seem to be worthwhile ways of looking at Pascal's Mugging? Do you have any out-of-left-field approaches of your own?
"Give me five dollars, and I will use my outside-the-Matrix powers to make your wildest dreams come true, including living for 3^^^3 years of eudaimonic existence and, yes, even telling you about the true substrate of existence. Hey, I'll top it off and let you out of the box, if only you decide to give me five of your simulated dollars."
For your kind of argument to work, it seems there must be nothing the mugger could possibly promise or threaten that, if it came true, you would rate as making a difference of 3^^^3 utils (where declining the offer and continuing your normal life is 0 utils, and giving five dollars to a jokester is -5 utils). It's only a minor variation on the arguments in Eliezer's original post to say that if your utility function does assign utilities differing by 3^^^3 to some scenarios, then it seems extremely unlikely that the probabilities of each of these coming true will balance out just so that the expected utility of paying the mugger is always smaller than zero, no matter what the mugger promises or threatens. If your utility function doesn't assign utilities that large or that small to any outcome, then you have a bounded utility function, which is one of the standard answers to the Mugging.
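To make the first half of that explicit, here is a sketch under my own simplifying assumptions (a single promised outcome whose truth adds 3^^^3 utils on top of the payoffs above, and probability $p$ that the mugger delivers), writing 3^^^3 as $3\uparrow\uparrow\uparrow 3$ in Knuth's up-arrow notation:

$$\mathrm{EU}(\text{pay}) = p\,\bigl(3\uparrow\uparrow\uparrow 3 - 5\bigr) + (1-p)(-5) = p\cdot 3\uparrow\uparrow\uparrow 3 - 5 \;>\; 0 = \mathrm{EU}(\text{decline}) \quad\text{whenever}\quad p > \frac{5}{3\uparrow\uparrow\uparrow 3},$$

so an unbounded utility function ends up paying unless its probability estimate happens to be implausibly, precisely tiny.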
My own current position is that perhaps I really do have a bounded utility function. If it were only the Mugging, I might still hold out more hope for a satisfactory solution that doesn't involve bounded utility; but there's also the unpalatability of having to prefer a gamble consisting of a (googolplex-1)/googolplex chance of everyone being tortured for a thousand years and all life in the multiverse ending after that, plus a 1/googolplex chance of 4^^^^4 years of positive post-singularity humanity, over the certainty of a mere 3^^^3 years of positive post-singularity humanity, given that 3^^^3 years is far more than enough to cycle through every possible configuration of a present-day human brain. Yes, having more space to expand into post-singularity is always better than less, but is it really that much better?
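Roughly, with $G := \text{googolplex}$ and (my own assumption for the sake of the sketch) an unbounded utility roughly linear in years of positive existence with slope $k$:

$$\mathrm{EU}(\text{gamble}) \approx \frac{G-1}{G}\,u_{\text{torture}} + \frac{1}{G}\,k\cdot 4\uparrow\uparrow\uparrow\uparrow 4, \qquad \mathrm{EU}(\text{certainty}) \approx k\cdot 3\uparrow\uparrow\uparrow 3,$$

and since $4\uparrow\uparrow\uparrow\uparrow 4 / G$ still utterly dwarfs $3\uparrow\uparrow\uparrow 3$ (dividing by a mere googolplex barely dents a number that size), while the torture term, however dreadful by ordinary standards, is nowhere near that magnitude, the unbounded agent is forced to take the gamble.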
(ObNote: In order not to make this seem one-sided, I should also mention the standard counter to that, especially since it was a real eye-opener the first time I read Eliezer explain it -- namely, with G := googolplex, I would then also have to accept that I'd prefer a 1/G chance of living (G + 1) years + a (G-1)/G chance of living 3^^^3 years to a 1/G chance of living G years + a (G-1)/G chance of living 4^^^^4 years -- in other words, I'd prefer a near-certainty of an unimaginably smaller existence, if I get for it a minuscule increase of existence in a scenario that only has a minuscule chance of happening in the first place. But I've started to think that, perhaps, the unimaginably large difference between these lifetimes really might be that unimportant, given that I can cycle through all of current-brain-size human mindspace many times over in a mere 3^^^3 years, and given the also-unpalatable conclusions that follow from an unbounded utility function.)
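Spelled out (my own formalization of that counter, writing A for the first lottery and B for the second, and assuming a bounded utility $u$ over lifetime that is still increasing at $G$ years but has essentially saturated at its bound $u_{\max}$ by $3\uparrow\uparrow\uparrow 3$ years):

$$\mathrm{EU}(A) = \tfrac{1}{G}\,u(G+1) + \tfrac{G-1}{G}\,u(3\uparrow\uparrow\uparrow 3), \qquad \mathrm{EU}(B) = \tfrac{1}{G}\,u(G) + \tfrac{G-1}{G}\,u(4\uparrow\uparrow\uparrow\uparrow 4),$$

so with $u(3\uparrow\uparrow\uparrow 3) \approx u(4\uparrow\uparrow\uparrow\uparrow 4) \approx u_{\max}$, the comparison reduces to $\mathrm{EU}(A) - \mathrm{EU}(B) \approx \tfrac{1}{G}\,[\,u(G+1) - u(G)\,] > 0$: the bounded agent trades away an unimaginably longer life in the overwhelmingly likely branch for one extra year in the one-in-googolplex branch.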